<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Elad Nava]]></title><description><![CDATA[Tech entrepreneur with a passion for life-saving innovation]]></description><link>https://eladnava.com/</link><generator>Ghost 0.7</generator><lastBuildDate>Mon, 28 Apr 2025 13:52:22 GMT</lastBuildDate><atom:link href="https://eladnava.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Send Multicast Notifications using Node.js, HTTP/2 and the FCM HTTP v1 API]]></title><description><![CDATA[<p>With the deprecation and planned decommission of both the <a href="https://firebase.google.com/docs/cloud-messaging/http-server-ref">Legacy HTTP FCM API</a> and the <a href="https://firebase.google.com/support/release-notes/admin/node#cloud-messaging">Firebase Admin SDK Batch Send API</a> (<code>sendMulticast()</code> method), it has become a challenge to send multicast notifications at scale and with low latency to large numbers of devices simultaneously without relying on the deprecated APIs.</p>]]></description><link>https://eladnava.com/send-multicast-notifications-using-node-js-http-2-and-the-fcm-http-v1-api/</link><guid isPermaLink="false">a78a3856-a4dd-4744-ba76-c14511b51d08</guid><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Fri, 30 Jun 2023 08:24:24 GMT</pubDate><media:content url="https://eladnava.com/content/images/2023/06/nodejs-6.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2023/06/nodejs-6.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2023/06/nodejs-6.jpg" alt="Send Multicast Notifications using Node.js, HTTP/2 and the FCM HTTP v1 API"><p>With the deprecation and planned decommission of both the <a 
href="https://firebase.google.com/docs/cloud-messaging/http-server-ref">Legacy HTTP FCM API</a> and the <a href="https://firebase.google.com/support/release-notes/admin/node#cloud-messaging">Firebase Admin SDK Batch Send API</a> (<code>sendMulticast()</code> method), it has become a challenge to send multicast notifications at scale and with low latency to large numbers of devices simultaneously without relying on the deprecated APIs. </p>

<p>Since the new <a href="https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages/send">FCM HTTP v1 API</a> only accepts a single device token per request, we would essentially have to send an entire HTTP request per device token, per notification. At scale, sending to large numbers of devices (1M) would be extremely slow over HTTP/1.0 or HTTP/1.1 and would require a huge number of TCP connections.</p>

<p>I was surprised to discover that the official <a href="https://www.npmjs.com/package/firebase-admin">Firebase Admin Node.js SDK</a> had been updated to include a new method, <code>sendEachForMulticast()</code>, and developers are urged to migrate to it instead of the now-deprecated <code>sendMulticast()</code>. However, if you were to use this method with any large number of device tokens, your server might crash or grind to a halt, as the Firebase Admin SDK opens a new HTTP/1.1 connection for every single device token simultaneously.</p>

<p>Thankfully, the new FCM HTTP v1 API also supports HTTP/2 connectivity. With HTTP/2, a single connection (referred to as a session) can be kept open and multiple requests (referred to as streams) can be simultaneously sent over this single connection. The FCM HTTP v1 API allows up to 100 simultaneous streams per session, and we are free to open multiple HTTP/2 sessions to improve concurrency and deliver the notification ASAP to as many devices as possible.</p>
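<p>To put the stream arithmetic in concrete terms: with 10 sessions of 100 streams each, up to 1,000 requests can be in flight at once, so a large token list is effectively sent in waves. A minimal sketch of the batching step (the helper name and numbers are my own, illustrating the limits described above):</p>

```javascript
// Split an array of device tokens into waves, one wave per full set of
// available HTTP/2 stream slots (sessions * streamsPerSession).
function batchTokens(tokens, sessions, streamsPerSession) {
  const batchSize = sessions * streamsPerSession;
  const batches = [];
  for (let i = 0; i < tokens.length; i += batchSize) {
    batches.push(tokens.slice(i, i + batchSize));
  }
  return batches;
}

// 1M tokens with 10 sessions x 100 streams -> 1,000 waves of 1,000 requests
const tokens = new Array(1000000).fill('token');
const batches = batchTokens(tokens, 10, 100);
console.log(batches.length);    // 1000
console.log(batches[0].length); // 1000
```

<p>Each wave of 1,000 requests rides over just 10 TCP connections, instead of the 1,000 connections an HTTP/1.x approach would need.</p>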

<p>Strangely enough, the Firebase Admin Node.js SDK has not been updated to utilize HTTP/2 under the hood for its multicast functionality. So I took matters into my own hands and <a href="https://www.npmjs.com/package/fcm-v1-http2">published a package</a> to <code>npm</code> that does just that.</p>

<p>First, install the package using npm:  </p>

<pre><code>npm install fcm-v1-http2 --save  
</code></pre>

<p>Then, start using the package by importing and instantiating it:  </p>

<pre><code>const fcmV1Http2 = require('fcm-v1-http2');

// Create a new client
const client = new fcmV1Http2({  
  // Pass in your service account JSON private key file (https://console.firebase.google.com/u/0/project/_/settings/serviceaccounts/adminsdk)
  serviceAccount: require('./service-account.json'),
  // Max number of concurrent HTTP/2 sessions (connections)
  maxConcurrentConnections: 10,
  // Max number of concurrent streams (requests) per session
  maxConcurrentStreamsAllowed: 100
});

// Populate array with any number of FCM device tokens
const tokens = ['ccw_syAXSNOY9ml-Kqh9wo:APA91bHAEQccW1ZpbPvsGc0LFyjEthAt_GZO7HkBGiKounM................uIDEHijb4UR5f3dhyjhO5IbiWhJAA7RVp63KSFCg384PR7nfKADReWUONEJlCnHo15WwZagVTmFcgW'];

// Set FCM API v1 message params
// https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages#Message
const message = {  
    data: {
        // Set custom payload
        message: 'Hello World'
    },
    android: {
        // Burst through Doze mode
        priority: 'high'
    }
};

// Send the notification
client.sendMulticast(message, tokens).then((unregisteredTokens) =&gt; {  
    // Sending successful
    console.log('Message sent successfully');

    // Remove unregistered tokens from your database
    if (unregisteredTokens.length &gt; 0) {
        console.log('Unregistered device token(s): ', unregisteredTokens.join(', '));
    }
}).catch((err) =&gt; {
    // Sending failed
    // Log error to console
    console.error('Sending failed:', err);
});
</code></pre>

<p>Let me know what you think, and if you faced any issues using the package!</p>]]></content:encoded></item><item><title><![CDATA[Get Facebook Ad Lead Notifications with Node.js & Webhooks]]></title><description><![CDATA[<p>I recently finished setting up a Lead Generation ad on Facebook Ads, which is a special type of ad that asks a potential customer to leave their details for you to get in touch with them. The process looks something like this, and Facebook pre-fills any information they already have</p>]]></description><link>https://eladnava.com/get-facebook-ad-lead-notifications-in-realtime-with-node-js-webhooks/</link><guid isPermaLink="false">40fc94b3-8079-4d52-8b63-6f1ecd620f68</guid><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Wed, 21 Jul 2021 18:01:18 GMT</pubDate><media:content url="https://eladnava.com/content/images/2021/07/fb.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2021/07/fb.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2021/07/fb.jpg" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"><p>I recently finished setting up a Lead Generation ad on Facebook Ads, which is a special type of ad that asks a potential customer to leave their details for you to get in touch with them. The process looks something like this, and Facebook pre-fills any information they already have about your customer:</p>

<p><img src="https://eladnava.com/content/images/2021/07/fb_lead_ads-example-768x501.jpg" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>So far, so good. As the advertiser, though, to actually get access to the incoming leads (and the submitted form info), you can do one of the following:</p>

<ol>  
<li>Connect to a CRM of choice ($$) or a tool such as Zapier (also $$)  </li>  
<li>Manually go into the <a href="https://web.archive.org/web/20210224171241/https://business.facebook.com/">Facebook Business Suite</a> -&gt; Select Your Account -&gt; More Tools -&gt; Instant Forms and click <strong>Download</strong> to fetch a <code>.csv</code> file with any new leads, manually re-checking the page every time a new lead may have arrived  </li>  
<li>Subscribe to the <code>leadgen</code> webhook which is invoked every time a lead submits the form</li>  
</ol>

<p>Since time is of the essence with most leads, and since I chose to go with the free route, option 3 made the most sense. Yet actually getting it to work was the tricky part. Here's a detailed step-by-step guide on how to do just that:</p>

<h3 id="createafacebookapp">Create a Facebook App</h3>

<ol>  
<li><p><a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/">Sign up for Facebook Developer access</a> if you haven't already, from the same account as your Facebook Ads advertising account.</p></li>  
<li><p><a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/apps/">Create an app</a> and select <strong>Manage business integrations</strong> as the app purpose. If you're using Facebook Business Manager, make sure to select the right Business Manager account in the app creation form, when asked to do so.</p></li>  
</ol>

<blockquote>  
  <p><strong>Note:</strong> You will need to <a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/docs/app-review/">submit your app to Facebook for review</a> to enable Live mode, otherwise you won't be able to access real leads but only test leads.</p>
</blockquote>

<h3 id="generateaneverexpiringpageaccesstoken">Generate a Never-Expiring Page Access Token</h3>

<ol>  
<li><a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/tools/explorer/">Visit the Graph API Explorer</a> and click <strong>Generate Access Token</strong>:</li>  
</ol>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-28-59-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;2. Add the following custom permissions which are necessary to retrieve Facebook Ads leads:</p>

<pre>
pages_show_list
ads_management
ads_read
leads_retrieval
pages_read_engagement
pages_manage_metadata
pages_manage_ads
</pre>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-13-at-12-21-50-AM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;3. Click <strong>Generate Access Token</strong> again and select your page, granting all permissions:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-32-28-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;4. Under <strong>User or Page</strong>, select the name of your page, ensure the permissions are still there, and click <strong>Generate Access Token</strong>:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-37-49-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>Accept the permission dialog once again.</p>

<p>&nbsp;&nbsp;&nbsp;5. Most likely, the page you selected under <strong>User or Page</strong> will have reset back to <strong>User</strong>. <strong>Select your page again</strong>, and click <strong>Generate Access Token</strong>:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-37-49-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;6. Now, let's extend the Access Token's expiration date. Copy the <strong>Access Token</strong> and paste it into the <a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/tools/debug/accesstoken/">Access Token Debugger</a>, and click <strong>Debug</strong>. By default, page access tokens expire in an hour, but we can extend them indefinitely.</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-49-04-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;7. To extend the access token, scroll down and click the <strong>Extend Access Token</strong> button at the end of the page:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-35-16-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;8. Click <strong>Debug</strong> and ensure the expiration is set to <strong>Never</strong>:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-10-41-53-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>You have now generated an access token that never expires, which we can use to fetch leads programmatically and indefinitely, without having to bother with re-generating an access token ever again for this purpose.</p>

<h3 id="settingupthenodejsserver">Setting Up the Node.js Server</h3>

<p>Now, the easy part: let's set up a basic web server to receive the webhook requests from Facebook every time a new lead submits their information.</p>

<ol>  
<li>Install Express (web server), Axios (outgoing HTTP request library) and Body Parser (for parsing requests with JSON):</li>  
</ol>

<pre><code>npm install express axios body-parser --save  
</code></pre>

<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2. Create a file <code>server.js</code> and paste in the <a href="https://web.archive.org/web/20210224171241/https://gist.github.com/eladnava/638282a87760c462e2d11b0926770685">contents of this Gist</a>.</p>

<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3. Paste the <strong>Page Access Token</strong>  from the previous step into the <code>FACEBOOK_PAGE_ACCESS_TOKEN</code> variable.</p>

<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4. Run the server:</p>

<pre><code>node server.js  
</code></pre>

<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;5. Let's test out the webhook functionality with <a href="https://web.archive.org/web/20210224171241/https://ngrok.com/">ngrok</a>, which creates a publicly-accessible hostname that tunnels traffic to your local Node app without having to deploy it remotely. Run:</p>

<pre><code>npx ngrok http 3000  
</code></pre>

<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;6. Copy the <code>https</code> forwarding address from the output:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-11-33-13-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<h3 id="enablingtheleadgenwebhook">Enabling the Lead Gen Webhook</h3>

<ol>  
<li>Visit the <a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/apps/">Facebook Developer Center</a>, select your app, scroll down to <strong>Webhooks</strong> and click <strong>Set Up</strong>.</li>  
</ol>

<p>&nbsp;&nbsp;&nbsp;&nbsp;2. <strong>Callback URL</strong>: Paste in the <code>https</code> forwarding address from <code>ngrok</code>, followed by <code>/webhook</code>.</p>

<p><strong>Verify Token</strong>: Enter <code>CUSTOM_WEBHOOK_VERIFY_TOKEN</code> (you may change this string for security purposes, but also remember to modify it in <code>server.js</code> accordingly).</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-11-14-00-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>&nbsp;&nbsp;&nbsp;&nbsp;3. Click <strong>Verify and Save</strong>.</p>
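<p>Behind the scenes, this verification is a simple handshake: Facebook sends a GET request to your callback URL with <code>hub.mode</code>, <code>hub.verify_token</code> and <code>hub.challenge</code> query parameters, and your server must echo back <code>hub.challenge</code> only if the token matches. A standalone sketch of that check (the function name is my own; the Gist's <code>server.js</code> presumably performs an equivalent check):</p>

```javascript
// Return the challenge string to echo back if the webhook verification
// request is valid, or null to reject it with a 403.
function verifyWebhook(query, expectedToken) {
  if (query['hub.mode'] === 'subscribe' &&
      query['hub.verify_token'] === expectedToken) {
    return query['hub.challenge'];
  }
  return null;
}

// Inside an Express route this would look roughly like:
// app.get('/webhook', (req, res) => {
//   const challenge = verifyWebhook(req.query, 'CUSTOM_WEBHOOK_VERIFY_TOKEN');
//   challenge ? res.send(challenge) : res.sendStatus(403);
// });

console.log(verifyWebhook({
  'hub.mode': 'subscribe',
  'hub.verify_token': 'CUSTOM_WEBHOOK_VERIFY_TOKEN',
  'hub.challenge': '12345'
}, 'CUSTOM_WEBHOOK_VERIFY_TOKEN')); // '12345'
```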

<p>&nbsp;&nbsp;&nbsp;&nbsp;4. Search for the <code>leadgen</code> webhook, and click <strong>Test</strong> next to it, followed by <strong>Send to My Server</strong>. Observe the terminal which is running <code>server.js</code> for the following error output:</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-13-at-12-28-06-AM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<p>This error is expected in this case, as Facebook has sent us an invalid Lead ID (<code>444444444444</code>). For now, ignore the error and click <strong>Subscribe</strong> to receive <code>leadgen</code> webhook events.</p>
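<p>For reference, the <code>leadgen</code> webhook payload carries only the lead ID; the server must then fetch the actual form answers from the Graph API using that ID (which is why the bogus ID above triggers an error). A hedged sketch of extracting the IDs from the payload (the shape follows Facebook's documented <code>leadgen</code> event format; the helper name and sample values are my own):</p>

```javascript
// Collect every leadgen_id from an incoming webhook payload body.
function extractLeadIds(body) {
  const ids = [];
  for (const entry of body.entry || []) {
    for (const change of entry.changes || []) {
      if (change.field === 'leadgen' && change.value && change.value.leadgen_id) {
        ids.push(change.value.leadgen_id);
      }
    }
  }
  return ids;
}

// Example payload in the shape Facebook sends for test leads
const sample = {
  object: 'page',
  entry: [{
    id: '1234567890',
    changes: [{
      field: 'leadgen',
      value: { leadgen_id: '444444444444', page_id: '1234567890' }
    }]
  }]
};

console.log(extractLeadIds(sample)); // [ '444444444444' ]
```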

<p>&nbsp;&nbsp;&nbsp;&nbsp;5. We now need to manually subscribe to the <code>leadgen</code> webhook event for the specific page we are advertising. Execute the following <code>curl</code> command in your terminal of choice, replacing <code>INSERT_PAGE_ID</code> with your Page ID (from your <a href="https://web.archive.org/web/20210224171241/https://www.facebook.com/pages">Facebook Page URL</a>) and <code>INSERT_PAGE_ACCESS_TOKEN</code> with the Page Access Token (which you generated previously).</p>

<pre><code>curl -X POST 'https://graph.facebook.com/v2.5/INSERT_PAGE_ID/subscribed_apps?access_token=INSERT_PAGE_ACCESS_TOKEN' \
  -H 'Content-Type: application/json' \
  -d 'subscribed_fields=leadgen'
</code></pre>

<p>Check for a <code>{"success": true}</code> response.</p>

<h3 id="timetotest">Time to Test</h3>

<p>Finally, let's make sure everything works!</p>

<p>Facebook provides a nifty <a href="https://web.archive.org/web/20210224171241/https://developers.facebook.com/tools/lead-ads-testing">Lead Ads Testing Tool</a> which will invoke our webhook with a test lead. Select your Page and Form from the respective dropdown and click <strong>Create lead</strong>. If all went well, the Node.js console output should show the lead details!</p>

<p><img src="https://eladnava.com/web/20210224171241im_/https://eladnava.com/content/images/2020/11/Screen-Shot-2020-11-12-at-11-44-42-PM.png" alt="Get Facebook Ad Lead Notifications with Node.js & Webhooks"></p>

<h2 id="nextsteps">Next Steps</h2>

<p>To make this script actually notify you when a new lead submits their information, one final step remains: send out an e-mail or some other kind of notification to alert the relevant salesperson of the new lead, with the information entered in the form. I recommend using <a href="https://web.archive.org/web/20210224171241/https://nodemailer.com/about/">nodemailer</a> for this, and <a href="https://web.archive.org/web/20210224171241/https://aws.amazon.com/ses/">AWS SES</a> as the SMTP e-mail sending service. Check the sample code at the end of <code>server.js</code> for the <code>nodemailer</code> approach.</p>

<p>Finally, when you actually <a href="https://web.archive.org/web/20210224171241/https://eladnava.com/deploying-resilient-node-js-apps-with-forever-and-nginx/">deploy this node app on a server</a>, make sure to update the <strong>Callback URL</strong> in the Webhook Settings for your Facebook App to the server's address instead of the temporary <code>ngrok</code> address.</p>

<p>Let me know if you found this useful, and have any questions :)</p>]]></content:encoded></item><item><title><![CDATA[Deploy a Dynamic DNS Load Balancer with Node.js]]></title><description><![CDATA[<p>There are several well-documented approaches to load balancing large amounts of traffic to your service. The most common involves using <code>nginx</code> or <code>apache</code> as a reverse-proxy to load-balance connections in a round-robin or least-concurrent-connections fashion.</p>

<p>Another common method is to use a load-balancing service such as <a href="https://aws.amazon.com/elasticloadbalancing/">AWS Elastic Load Balancer</a></p>]]></description><link>https://eladnava.com/deploy-a-dynamic-dns-service-on-node-js/</link><guid isPermaLink="false">31692475-07f6-4e94-b514-37f486a430f3</guid><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Fri, 31 Jul 2020 15:46:53 GMT</pubDate><media:content url="https://eladnava.com/content/images/2020/07/js-6.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2020/07/js-6.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2020/07/js-6.jpg" alt="Deploy a Dynamic DNS Load Balancer with Node.js"><p>There are several well-documented approaches to load balancing large amounts of traffic to your service. The most common involves using <code>nginx</code> or <code>apache</code> as a reverse-proxy to load-balance connections in a round-robin or least-concurrent-connections fashion.</p>

<p>Another common method is to use a load-balancing service such as <a href="https://aws.amazon.com/elasticloadbalancing/">AWS Elastic Load Balancer</a>. Behind the scenes, AWS runs servers that accept connections from your clients and forward them to your own backend servers for processing. The AWS service tracks various metrics behind-the-scenes to decide which backend server to forward incoming requests to. <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html">Based on the docs</a>, with Classic Load Balancers, the node that receives the request selects a registered instance as follows:</p>

<ul>
<li>It uses the <a href="https://avinetworks.com/glossary/round-robin-load-balancing/">round robin routing algorithm</a> for TCP listeners</li>
<li>It uses the least outstanding requests routing algorithm for HTTP and HTTPS listeners</li>
</ul>

<p>While this approach to load-balancing can work seamlessly for generally short-lived connections such as HTTP requests, with long-lived TCP sockets, a round-robin routing algorithm may not balance traffic equally among your backend instances in some scenarios, such as when you spin up a new backend instance as you scale. In this scenario, it could take a while until the new instance is hosting the same number of long-lived TCP connections as the others, which may have already reached their connection limit.</p>
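<p>A toy simulation makes the imbalance concrete: suppose two existing servers each hold 1,000 long-lived connections when a third is added, and 300 new connections then arrive round-robin (all numbers are illustrative):</p>

```javascript
// Simulate round-robin assignment of new long-lived connections
// across servers that may already carry existing connections.
function roundRobinAssign(connections, newConnections) {
  const counts = connections.slice();
  for (let i = 0; i < newConnections; i++) {
    counts[i % counts.length]++;
  }
  return counts;
}

// Two loaded servers plus one fresh instance
console.log(roundRobinAssign([1000, 1000, 0], 300)); // [ 1100, 1100, 100 ]
```

<p>The fresh instance ends up with a tenth of the load of its peers, and because the existing connections are long-lived, the gap persists instead of averaging out.</p>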

<p>Another drawback to using a service such as AWS ELB for load-balancing long-lived TCP connections is the added cost and compute power for routing these connections. With AWS ELB, each client connection is kept open as it is being load-balanced to your own backend servers for processing. Therefore, each client connection to your service essentially creates two connections: one at the ELB level, and one on your own servers. Furthermore, AWS ELB can become costly, as it also charges based on the amount of data transferred through the load balancer to your backend instances.</p>

<h3 id="enterdynamicdnsloadbalancing">Enter Dynamic DNS Load Balancing</h3>

<p>Dynamic DNS load balancing can solve both these drawbacks, quite seamlessly. With traditional DNS mentality, we're used to defining a static host file which maps various records to static endpoints (IPs/hostnames/etc). But what if we could control the DNS resolver response on a per-request level? We could essentially load-balance traffic at the DNS level, routing each client to a backend instance based on our own criteria, such as routing to the server with the least number of established connections.</p>

<p>We face two obstacles:</p>

<p>1) Most DNS services (including <a href="https://aws.amazon.com/route53/">AWS Route 53</a>) don't support implementing your own routing logic, nor executing dynamic code before returning a DNS response.</p>

<p>2) Client machines and ISPs like to cache a DNS response for a given hostname, sometimes for longer than the <code>TTL</code> value on that DNS record.</p>

<h3 id="settingupadynamicdnsserver">Setting up a Dynamic DNS Server</h3>

<p>The first obstacle is pretty easy to solve with Node.js -- it turns out there are plenty of DNS server implementations that let you handle each client request individually and return a dynamic, load-balanced response instead of a fixed one.</p>

<p>Unfortunately, most of the Node.js packages I tested were either outdated or broken, returning invalid DNS responses (extra/missing bytes in the response) or not supporting the common record types, or error codes such as <code>REFUSED</code> or <code>NXDOMAIN</code>.</p>

<p>Finally, I came across <a href="https://www.npmjs.com/package/mname"><code>mname</code></a> which validated across most DNS server tests I performed and allowed for per-request response processing:</p>

<pre><code>var named = require('mname');  
var server = named.createServer();

// In production, run the server on TCP/UDP port 53
// or use iptables to forward traffic
var port = 9000;

// Listen on TCP
server.listenTcp({ port: port, address: '::' });

// Listen on UDP
server.listen(port, '::', function() {  
  console.log('DNS server started on TCP/UDP port ' + port);
});

server.on('query', function(query, done) {  
  // Extract query hostname and set default TTL to 60 seconds
  var name = query.name(), ttl = 60;

  // Log incoming DNS query for debugging purposes
  console.log('[DNS] %s IN %s', query.name(), query.type());

  // Your backend IPs
  var serverIPs = ['1.2.3.4', '8.8.4.4', '8.8.8.8'];

  // Select one randomly (modify based on your own routing algorithm)
  var result = serverIPs[Math.floor(Math.random() * serverIPs.length)];

  // Load-balance DNS queries (A record) for "api.example.com"
  if (query.type() === 'A' &amp;&amp; name.toLowerCase().endsWith('api.example.com')) {
    // Respond with load-balanced IP address
    query.addAnswer(name, new named.ARecord(result), ttl);
  }
  else {
    // NXDOMAIN response code (unsupported query name/type)
    query.setError('NXDOMAIN');
  }

  // Send response back to client
  server.send(query);
});
</code></pre>

<p>Run the sample code (after running <code>npm install mname</code>) and you've got yourself a dynamic DNS server running in Node.js.</p>

<p>Go ahead and test it out with <code>dig</code>:</p>

<pre><code>dig @127.0.0.1 -p 9000 api.example.com  
</code></pre>

<p>The result should look as follows:</p>

<p><img src="https://eladnava.com/content/images/2020/07/Screen-Shot-2020-07-31-at-11-16-44-AM.png" alt="Deploy a Dynamic DNS Load Balancer with Node.js"></p>

<p>If you run the <code>dig</code> command multiple times, you'll notice a random result returned for every query. </p>

<p>Now, all you need to do is write custom code to return the IP address of one of your backend servers to handle the client's request/connection, based on your own load balancing logic. </p>

<p>For example, you might maintain a database table with each backend server and its current number of established connections. Your DNS server should regularly query this table and pick the endpoint that recently reported the least number of established connections.</p>
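<p>A minimal sketch of that selection step, assuming the table has been loaded into memory as an array of <code>{ ip, connections }</code> rows (the row shape is my own):</p>

```javascript
// Pick the backend server with the fewest established connections.
function pickLeastConnections(servers) {
  return servers.reduce((best, server) =>
    server.connections < best.connections ? server : best
  );
}

// Snapshot of the connection counts each backend recently reported
const servers = [
  { ip: '1.2.3.4', connections: 120 },
  { ip: '8.8.4.4', connections: 45 },
  { ip: '8.8.8.8', connections: 300 }
];

console.log(pickLeastConnections(servers).ip); // '8.8.4.4'
```

<p>In the <code>query</code> handler above, the result's <code>ip</code> would simply replace the random pick.</p>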

<blockquote>
  <p><strong>Pro tip:</strong> It is super important to call <code>.toLowerCase()</code> on the incoming <code>query.name()</code> when validating it against <code>api.example.com</code>, as some DNS clients / ISPs like to use <a href="https://isc.sans.edu/forums/diary/Use+of+Mixed+Case+DNS+Queries/12418/">mixed case to query your DNS server</a> for added DNS security, which means <code>api.example.com</code> can become <code>aPi.ExAmPLE.coM</code>. But do make sure to return the mixed case query name back to the requesting client in your DNS response answer.</p>
</blockquote>

<p>And just like that, you have implemented yourself a dynamic DNS load balancer. But how do we work around the unforgiving DNS caching issue performed by ISPs?</p>

<h3 id="ananticacheworkaround">An Anti-cache Workaround</h3>

<p>We know that multiple queries for the same hostname, <code>api.example.com</code>, may result in caching, thereby hindering our efforts to load-balance every single connection to our service. A clever way to work around this is by placing a random number, or the current Unix timestamp, as a prefix to our hostname:</p>

<pre><code>1596207957-api.example.com  
</code></pre>

<p>If clients try to connect to a new hostname every time they want to establish a connection, it would never have been cached previously, and therefore your DNS service can return an uncached, fresh response, every single time.</p>
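<p>On the client side, this only takes prepending a fresh value before each connection attempt. A quick sketch (it assumes the prefixed hostnames are delegated to the same nameservers, which the <code>example.com</code> delegation would cover):</p>

```javascript
// Build a cache-busting hostname by prefixing the current Unix timestamp.
function antiCacheHostname(base) {
  const unixTimestamp = Math.floor(Date.now() / 1000);
  return unixTimestamp + '-' + base;
}

const hostname = antiCacheHostname('api.example.com');
console.log(hostname); // e.g. '1596207957-api.example.com'

// The suffix check on the server side still matches:
console.log(hostname.toLowerCase().endsWith('api.example.com')); // true
```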

<p>The sample code already supports this anti-cache mechanism using the <code>.endsWith()</code> function:</p>

<pre><code>if (name.toLowerCase().endsWith('api.example.com')) {}  
</code></pre>

<p>Now all that's left is to implement your routing algorithm and deploy this DNS service as a cluster. When deploying a DNS cluster, it is recommended to run 4 DNS servers with different IP addresses, each listening on TCP/UDP port 53 for incoming DNS requests, and assign each of them a hostname in another domain, such as <code>ns1.example.io</code>, <code>ns2.example.io</code>, and so on. </p>

<p>Refer to my <a href="https://eladnava.com/deploying-resilient-node-js-apps-with-forever-and-nginx/">Deploy Resilient Node.js Apps with Forever</a> (skip the <code>nginx</code> part) to run your Node.js DNS server with <code>forever</code>, and refer to my <a href="https://eladnava.com/binding-nodejs-port-80-using-nginx/">Binding a Node.js App to Port 80</a> guide (specifically, the <code>iptables</code> section), tweaking the command to reroute traffic on incoming port <code>53</code> to your DNS server listening on port <code>9000</code>:</p>

<pre><code>sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 53 -j REDIRECT --to-port 9000  
sudo iptables -A PREROUTING -t nat -i eth0 -p udp --dport 53 -j REDIRECT --to-port 9000  
</code></pre>

<p>For your DNS servers to actually start routing queries, you need to set the target domain name's <strong>nameservers</strong> (such as <code>example.com</code>) in the domain name control panel to those 4 servers' hostnames, so that clients start querying your DNS servers when trying to resolve <code>api.example.com</code>.</p>

<p>Hope you found this useful, and let me know if you have any questions in the comments!</p>]]></content:encoded></item><item><title><![CDATA[InStock - Essentials Map for the COVID-19 Pandemic]]></title><description><![CDATA[<p>There I was, staring at yet another empty shelf in yet another pharmacy, as I was attempting to buy a digital thermometer and some fever reducing medicine for my family. Being told time and time again by store reps that these essential items are currently out of stock and I</p>]]></description><link>https://eladnava.com/instock-essentials-locator-for-the-covid-19-pandemic/</link><guid isPermaLink="false">e9186fee-8fca-482d-b492-d3d817de97b9</guid><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Mon, 06 Apr 2020 05:15:56 GMT</pubDate><media:content url="https://eladnava.com/content/images/2020/04/instock-2.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2020/04/instock-2.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2020/04/instock-2.jpg" alt="InStock - Essentials Map for the COVID-19 Pandemic"><p>There I was, staring at yet another empty shelf in yet another pharmacy, as I was attempting to buy a digital thermometer and some fever reducing medicine for my family. Being told time and time again by store reps that these essential items are currently out of stock and I should look elsewhere.</p>

<p><img src="https://eladnava.com/content/images/2020/04/d41586-020-00307-x_17619376.jpg" alt="InStock - Essentials Map for the COVID-19 Pandemic"></p>

<p>Essentials stock shortages have unfortunately become a reality for most of us during the COVID-19 / Coronavirus pandemic. Emergency essentials we desperately need have gone out-of-stock in many stores due to panic buying, hoarding, and the sudden surge in demand. </p>

<p>Yet there are still stores with these essentials in stock in our cities. But visiting every single pharmacy &amp; supermarket in a city to check if an item is in stock is a tedious, if not impossible task that may contribute to further contagion of Coronavirus.</p>

<p>There has to be a better way.</p>

<h3 id="meetamy">Meet Amy 🤖</h3>

<p>Amy is an A.I. virtual assistant (<a href="https://www.youtube.com/watch?v=D5VN56jQMWM">think Google Duplex</a>) that calls up pharmacies &amp; supermarkets in a given city, asking store reps to confirm stock of essentials, such as:</p>

<ol>
<li>Pain relief / fever reducing medicine  </li>
<li>Cough medicine  </li>
<li>Digital thermometers  </li>
<li>Hand sanitizer  </li>
<li>Antibacterial soap  </li>
<li>Bottled water  </li>
<li>Canned food  </li>
<li>Pasta/rice</li>
</ol>

<p>Listen to 1 of 1,000 real calls Amy 🤖 made in London to confirm stock: <br>
<a href="https://go.aws/2JqVDLB">https://go.aws/2JqVDLB</a></p>

<p>Here's a transcript of that call: </p>

<blockquote>
  <p><strong>Store Rep:</strong> Hello Sainsbury's local Stannis speaking how can I help?</p>
  
  <p><strong>Amy:</strong> Hi. In an effort to help citizens stock up on emergency supplies during the Coronavirus pandemic, we need your help to check the availability of some items at your store. </p>
  
  <p><strong>Amy:</strong> Please answer the following questions with a yes or no.</p>
  
  <p><strong>Amy:</strong> Do you still have pain medication such as ibuprofen in stock?</p>
  
  <p><strong>Store Rep:</strong> No</p>
  
  <p><strong>Amy:</strong> Do you have cough medication?</p>
  
  <p><strong>Store Rep:</strong> No
  ...</p>
</blockquote>

<p>Once the stock status from various stores in a city has been collected, it is made available on the InStock website. </p>

<h3 id="instock">InStock</h3>

<p>On <a href="https://instock.app/">instock.app</a>, visitors are presented with a map of pharmacies and supermarkets that have recently confirmed essentials in stock in their area:</p>

<p><img src="https://eladnava.com/content/images/2020/04/x3.png" alt="InStock - Essentials Map for the COVID-19 Pandemic"></p>

<p>Stores that stock essentials are highlighted in green, while stores without any items are greyed out.</p>

<p>Clicking a store reveals exactly which items are in or out of stock, when the stock status was last updated, and directions to get to the store:</p>

<p><img src="https://eladnava.com/content/images/2020/04/Screen-Shot-2020-03-15-at-8-20-15-PM-1.png" alt="InStock - Essentials Map for the COVID-19 Pandemic"></p>

<h3 id="supportthisinitiative">Support this initiative</h3>

<p><a href="https://instock.app/">InStock is currently live in London</a>, and I need your support to expand to more cities with essentials shortages. Calling up thousands of pharmacies and supermarkets in every major city on a daily basis incurs large costs, and these costs only grow with the number of cities supported.</p>

<p>If you are in the position to do so, please donate to support this venture: <br>
<a href="https://paypal.me/instockapp">https://paypal.me/instockapp</a></p>]]></content:encoded></item><item><title><![CDATA[Enabling Enhanced Networking for Ubuntu EC2 Instances]]></title><description><![CDATA[<p>AWS has (quite a while ago) released EC2 instance families (such as <code>c5</code>, <code>m5</code>, and <code>t3</code>) that run a customized version of the Linux kernel, with some AWS goodies and performance improvements baked in, namely <code>ena</code> (Enhanced Networking) and <code>nvme</code> (Non-Volatile Memory).</p>

<p>If you ever tried to change the instance</p>]]></description><link>https://eladnava.com/enabling-ena-support-for-your-ubuntu-instances/</link><guid isPermaLink="false">c039ec88-2fe3-4d72-9abe-2a6e9167bb83</guid><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Tue, 12 Nov 2019 22:05:12 GMT</pubDate><media:content url="https://eladnava.com/content/images/2019/11/aws-1.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2019/11/aws-1.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2019/11/aws-1.jpg" alt="Enabling Enhanced Networking for Ubuntu EC2 Instances"><p>AWS has (quite a while ago) released EC2 instance families (such as <code>c5</code>, <code>m5</code>, and <code>t3</code>) that run a customized version of the Linux kernel, with some AWS goodies and performance improvements baked in, namely <code>ena</code> (Enhanced Networking) and <code>nvme</code> (Non-Volatile Memory).</p>

<p>If you ever tried to change the instance type of your old Ubuntu instances to one of these new instance families, you were probably presented with an error that your instance does not support Enhanced Networking:</p>

<p><img src="https://eladnava.com/content/images/2019/11/error.png" alt="Enabling Enhanced Networking for Ubuntu EC2 Instances"></p>

<p>The <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html#enhanced-networking-ena-ubuntu">AWS documentation</a> seems to cover this topic quite well at first glance, but soon enough I noticed that there seems to be a fundamental mistake in the command mentioned in the docs for enabling ENA support:</p>

<pre><code>sudo apt-get upgrade -y linux-aws   # Don't run this  
</code></pre>

<p>It seems the AWS team was under the impression that running this command would only update/install a single package called <code>linux-aws</code>. In fact, quite to the contrary, running <code>sudo apt-get upgrade</code> will upgrade every single installed package on your machine to its latest version. This could potentially break your server and application, especially if you aren't prepared for it. Besides, there is no reason to upgrade every installed package when all we want to do is add ENA support.</p>

<p>The reality is that it's possible to enable Enhanced Networking and <code>nvme</code> support on your old Ubuntu instance by simply installing the <code>linux-aws</code> package as follows:</p>

<pre><code>sudo apt-get install linux-aws -y  
</code></pre>

<blockquote>
  <p><strong>Note:</strong> Please create a snapshot / AMI of your instance before running any of the commands on this page.</p>
</blockquote>

<p>Then, modify the <code>initramfs</code> module list by running this command:  </p>

<pre><code>sudo nano /etc/initramfs-tools/modules  
</code></pre>

<p>Add <code>nvme</code> to the bottom of this file and save.</p>

<hr>

<p>Finally, run the following command to update the <code>initramfs</code>:</p>

<pre><code>sudo update-initramfs -u -k all  
</code></pre>

<p>Reboot your system for changes to take effect:</p>

<pre><code>sudo reboot  
</code></pre>

<hr>

<p>If you'd like to check whether both <code>ena</code> and <code>nvme</code> are successfully installed and loaded, you can use the following AWS-provided script:</p>

<pre><code>wget https://gist.githubusercontent.com/eladnava/1249b10c9f90c144f0d6a0fe01d93066/raw/3a82db14b90273a0b6e024114fe3ce8c4d0e345f/c5_m5_checks_script.sh  
chmod +x c5_m5_checks_script.sh  
sudo ./c5_m5_checks_script.sh  
</code></pre>

<p>Output should be identical to the following:</p>

<pre><code>------------------------------------------------

OK     NVMe Module is installed and available on your instance


OK     ENA Module with version 2.0.3K is installed and available on your instance


OK     fstab file looks fine and does not contain any device names. 

------------------------------------------------
</code></pre>

<p>If indeed the output is identical, you are now ready to shut down your instance and enable the <code>ena-support</code> flag using the <code>aws-cli</code>.</p>

<ul>
<li>Stop the instance in the AWS console</li>
<li>Get the instance ID from the AWS console</li>
<li>Plug it into the following command (replace <code>i-ABCDEFG</code> with your instance ID)</li>
</ul>

<pre><code>aws ec2 modify-instance-attribute --instance-id i-ABCDEFG --ena-support  
</code></pre>

<p>If you have not yet installed/configured the <code>aws-cli</code>, please refer to <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html">these instructions</a>.</p>

<p>Once this command executes without any error, you are now ready to modify your instance type to the desired ENA-enabled instance family, such as <code>c5</code>, <code>m5</code> or <code>t3</code>, in the AWS EC2 console.</p>
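<p>If you'd rather script this step than use the CLI, the same attribute change can be made through the AWS SDK's <code>modifyInstanceAttribute</code> call. Below is a rough sketch of the parameter shape only (the helper function and instance ID are placeholders of mine, not part of the SDK):</p>

```javascript
// Sketch: the parameter shape for EC2's ModifyInstanceAttribute API call,
// equivalent to the aws-cli command above. The helper function is
// hypothetical and the instance ID is a placeholder.
function buildEnaSupportParams(instanceId) {
  return {
    InstanceId: instanceId,
    // AttributeBooleanValue: enables Enhanced Networking on the instance
    EnaSupport: { Value: true }
  };
}

const params = buildEnaSupportParams('i-ABCDEFG');
// With the aws-sdk package installed, this could be passed to:
// new AWS.EC2({ region: 'us-east-1' }).modifyInstanceAttribute(params, callback);
```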

<hr>

<p>That's it! Hope I helped avoid a catastrophe and get your old-but-reliable instances running on a new instance family.</p>]]></content:encoded></item><item><title><![CDATA[Manage Files on Your Google Pixel Like a Boss with Pixelmate]]></title><description><![CDATA[<p>The original Google Pixel is an awesome phone. The Google Pixel 2? A little less, if you ask me. It's no secret that the XL version is plagued with a plethora of problems: a blue tint on the display and permanent pixel ghosting, to name a few. The pixel ghosting</p>]]></description><link>https://eladnava.com/manage-files-on-your-google-pixel-like-a-boss-with-pixelmate/</link><guid isPermaLink="false">d532f6b3-3902-4d72-8e16-ea02ffecb120</guid><category><![CDATA[Android]]></category><category><![CDATA[Google]]></category><category><![CDATA[Electron]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Wed, 31 Jan 2018 03:18:16 GMT</pubDate><media:content url="https://eladnava.com/content/images/2018/01/pixel.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2018/01/pixel.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2018/01/pixel.jpg" alt="Manage Files on Your Google Pixel Like a Boss with Pixelmate"><p>The original Google Pixel is an awesome phone. The Google Pixel 2? A little less, if you ask me. It's no secret that the XL version is plagued with a plethora of problems: a blue tint on the display and permanent pixel ghosting, to name a few. The pixel ghosting was improved with a software update (or hack, actually, as a workaround for the hardware issue). And the non-XL version is just plain ugly for this day and age, especially for its price tag.</p>

<p>Which is why I stuck with the first Google Pixel, and I'm loving it. The camera sensor is absolutely spectacular and some shots are almost indistinguishable from ones taken with a fancy DSLR:</p>

<p><img src="https://eladnava.com/content/images/2018/01/26151247_1778954385741385_979745962215866368_n.jpg" alt="Manage Files on Your Google Pixel Like a Boss with Pixelmate"></p>

<p>Yes, the image has been edited, but you still need a great camera sensor for it to look this good, and the Pixel delivers.</p>

<h2 id="theissueathand">The Issue at Hand</h2>

<p>One thing bothers me so much about the Pixel that I decide to spend hours on a workaround: the infamous <a href="https://9to5google.com/2017/01/13/lots-of-people-are-having-trouble-transferring-files-between-mac-and-google-pixel-w-android-file-transfer/">Android File Transfer bug</a>. This is a bug in the file transfer utility that Google has provided for macOS users which makes it impossible to manage files on your Google Pixel, causing most transfers to/from the phone to fail or freeze. And Google apparently does not seem to care much (<a href="https://www.androidauthority.com/google-pixel-mac-android-file-transfer-problems-743068/">[1]</a> <a href="https://productforums.google.com/forum/#!topic/phone-by-google/TowN85s45Qg">[2]</a>).</p>

<p>Why does Google even need to provide a file transfer utility for Android devices on macOS? Probably because Apple is reluctant to provide drivers for Android devices out of the box, complicating Android file management and possibly improving iOS device adoption rates due to the additional step involved in managing files on your Android phone. </p>

<p>This bug is indeed only experienced by some of the people who use a Mac to manage their Pixel's files. The exact conditions that trigger it are still unknown, but the Internet is filled with people complaining about the issue.</p>

<h5 id="possibleworkarounds">Possible Workarounds</h5>

<p>Naturally, I start to investigate some possible workarounds. One is to use an app like AirDroid to wirelessly transfer files over a Wi-Fi network. However, that app requires a vast array of prying permissions for its other features, such as display mirroring, and most importantly, transfer speeds are quite slow in comparison to a wired USB-C transfer.</p>

<p>Another option is to use the USB-to-USB-C adapter that Google ships with Google Pixel phones. This apparently works around the issue in Android File Transfer and transfers work fine. But that requires your Mac to have a USB-C input, which mine does not, having acquired it in early 2015. And it requires you to carry that dongle everywhere, and make sure you don't ever lose it.</p>

<p>The final option that comes to mind is using ADB, the <a href="https://developer.android.com/studio/command-line/adb.html">Android Debug Bridge</a>. ADB comes with useful commands like <code>adb push</code> and <code>adb pull</code> which can be used to both upload and download files to/from your device. This approach successfully works around the bug present in Android File Transfer, but it's not very convenient, as you have to manage your files via the CLI, typing commands in the macOS Terminal. This, in my opinion, is not a proper solution in the long term, especially if you regularly pull photos off of your device, or load new music, for example.</p>
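<p>A tool can drive those same ADB commands programmatically. Here is a minimal Node.js sketch of how <code>adb pull</code> and <code>adb push</code> invocations might be assembled before being handed to <code>child_process.spawn</code>; the helper names are hypothetical, only the ADB argument order is real:</p>

```javascript
// Sketch: assembling ADB file-transfer commands from Node.js, the way a
// wrapper tool might before running them via child_process.spawn('adb', args).
// The helper names are hypothetical; only the adb argument order is real.
function adbPullArgs(remotePath, localPath) {
  // adb pull <remote> <local> downloads a file/folder from the device
  return ['pull', remotePath, localPath];
}

function adbPushArgs(localPath, remotePath) {
  // adb push <local> <remote> uploads a file/folder to the device
  return ['push', localPath, remotePath];
}

console.log(['adb'].concat(adbPullArgs('/sdcard/DCIM', './photos')).join(' '));
// → adb pull /sdcard/DCIM ./photos
```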

<h3 id="pixelmate">Pixelmate</h3>

<p>And thus, <a href="https://github.com/eladnava/pixelmate">Pixelmate</a> was born. Pixelmate is an open-source macOS app I built using <a href="https://electronjs.org/">Electron</a> that issues ADB commands behind the scenes to manage files on your Google Pixel device.</p>

<p>Pixelmate is designed to look and feel like the native macOS Finder for a familiar user experience:</p>

<p><img src="https://eladnava.com/content/images/2018/01/screenshot.png" alt="Manage Files on Your Google Pixel Like a Boss with Pixelmate"></p>

<p>Follow <a href="https://github.com/eladnava/pixelmate#usage">these instructions</a> to download Pixelmate and manage your files like a boss. </p>

<blockquote>
  <p><strong>Note:</strong> Since Pixelmate uses <a href="https://developer.android.com/studio/command-line/adb.html">ADB</a> behind the scenes, you need to enable Developer Mode and USB Debugging on your phone, if you haven't already. Check out <a href="https://www.howtogeek.com/129728/how-to-access-the-developer-options-menu-and-enable-usb-debugging-on-android-4.2/">this guide</a> for detailed instructions.</p>
</blockquote>

<p>You can drag files and folders into Pixelmate to upload them to your device, or right click remote files and folders to download them to your computer. I really wanted to implement dragging and dropping remote files to the local computer as well, but this is actually not possible at this time due to Electron platform limitations. I'm waiting on the Electron team to provide a solution to <a href="https://github.com/electron/electron/issues/11691">the issue</a>.</p>

<p>Navigation is currently done via the same keyboard shortcuts as in Finder:</p>

<ul>
<li><strong>Cmd + ↓</strong> to navigate into a folder</li>
<li><strong>Cmd + ↑</strong> to navigate out of a folder</li>
<li><strong>Cmd + Delete</strong> to delete a file or folder</li>
</ul>

<p>UI navigation buttons will surely be added in the future.</p>

<blockquote>
  <p><strong>Pro tip:</strong> You can even use Pixelmate wirelessly, without having to connect your phone via USB cable. Check out <a href="http://codetheory.in/android-debug-bridge-adb-wireless-debugging-over-wi-fi/">this guide</a> for detailed instructions, and check out <a href="https://github.com/eladnava/wifidev-android">this app</a> I built to avoid having to connect the phone to run <code>adb tcpip</code> before going wireless. Note that transfers will not be as fast as with a wired connection, though.</p>
</blockquote>

<h3 id="feedback">Feedback</h3>

<p>I'd love to hear what you think of Pixelmate! For me, it's a lifesaver. Let me know in the comments below!</p>]]></content:encoded></item><item><title><![CDATA[Scale Your EC2 Cluster using Custom Metrics with Scalemate]]></title><description><![CDATA[<p>If you are an avid reader of mine, you might have noticed that I haven't posted for quite some time (almost a year!). Time sure flies when you're having fun. I'll do my best to post much more in 2018.</p>

<p>And now, let's get to the matter at hand. </p>

<h2 id="motivation">Motivation</h2>]]></description><link>https://eladnava.com/scale-your-ec2-application-servers-using-custom-metrics-with-scalemate/</link><guid isPermaLink="false">f5257805-da87-489d-936d-ce912d08e88c</guid><category><![CDATA[Scalability]]></category><category><![CDATA[High Availability]]></category><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[Amazon CloudWatch]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Sun, 14 Jan 2018 04:49:34 GMT</pubDate><media:content url="https://eladnava.com/content/images/2018/01/aws-1.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2018/01/aws-1.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2018/01/aws-1.jpg" alt="Scale Your EC2 Cluster using Custom Metrics with Scalemate"><p>If you are an avid reader of mine, you might have noticed that I haven't posted for quite some time (almost a year!). Time sure flies when you're having fun. I'll do my best to post much more in 2018.</p>

<p>And now, let's get to the matter at hand. </p>

<h2 id="motivation">Motivation</h2>

<p>Have you ever needed to scale your application servers using a custom metric, such as available system memory or concurrent connections count? </p>

<p>Some application servers need to be scaled when memory becomes a bottleneck as each client adds to the application's memory utilization, and in other cases, applications can only support a finite number of concurrent socket connections before reaching their limit.</p>

<p>It still surprises me that AWS CloudWatch does not provide metrics for monitoring EC2 servers' memory utilization. It seems so trivial, especially since other metrics such as CPU Utilization, Disk I/O, and Network I/O are readily available. A memory metric would also make monitoring your servers for memory leaks much easier, instead of finding out about a leak only after the out-of-memory killer terminates your app, causing downtime.</p>

<p>It would have been great if AWS provided more metrics out of the box.</p>

<p>But no matter, that's where Scalemate comes in! (Get it? stalemate; play on words!).</p>

<h2 id="scalemate">Scalemate</h2>

<p><a href="https://github.com/eladnava/scalemate">Scalemate</a> is a Node.js CLI package I built that scales your application servers by publishing custom system metrics to AWS CloudWatch. The following custom metrics are currently supported:</p>

<ul>
<li>Sockets Used - number of active client/server connections</li>
<li>Memory Available - amount of system memory available (in MB)</li>
</ul>

<p>In addition, Scalemate supports per-second metric resolution for scaling your cluster within seconds in response to high demand.</p>

<h2 id="usage">Usage</h2>

<p>Using Scalemate is super easy. Simply install Node.js on one of the servers in your cluster and then install Scalemate using npm:</p>

<pre><code>sudo npm install -g scalemate  
</code></pre>

<p>Then, create a file called <code>scalemate.js</code> in <code>/etc</code>:</p>

<pre><code>sudo nano /etc/scalemate.js  
</code></pre>

<p>Paste in the following contents:</p>

<pre><code class="language-js">module.exports = {  
    // Metrics to publish
    metrics: {
        // Number of open socket connections
        socketsUsed: {
            // Whether to publish this metric
            enabled: true,
            // CloudWatch unit type
            unit: 'Count',
            // CloudWatch metric title
            name: 'Sockets Used'
        },
        // Number of megabytes of system memory currently available
        memoryAvailable: {
            // Whether to publish this metric
            enabled: true,
            // CloudWatch unit type
            unit: 'Count',
            // CloudWatch metric title
            name: 'Memory Available'
        }
    },
    // Metric interval (in seconds)
    interval: 60,
    // CloudWatch namespace to associate metrics with
    namespace: 'MyApp',
    // AWS IAM user with CloudWatch read/write access
    credentials: {
        region: 'us-east-1',
        accessKeyId: 'ABCDEFG',
        secretAccessKey: 'ABCDEFGHIJK/HIJKLMNOPQRS'
    }
};
</code></pre>

<p>Modify the configuration according to your own needs, enabling or disabling metrics and configuring the following parameters:</p>

<ul>
<li><code>namespace</code> - a title for your app or server cluster</li>
<li><code>credentials</code> - an AWS IAM user with read/write access to CloudWatch</li>
</ul>

<p>You can create an IAM user in the <a href="https://console.aws.amazon.com/iam/home#/users">AWS Security Credentials</a> console.</p>

<p>Make sure to grant your IAM user the <code>CloudWatchFullAccess</code> policy for read/write access to CloudWatch.</p>
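<p>Under the hood, each published sample boils down to a CloudWatch <code>PutMetricData</code> request. As a rough sketch of how a config entry might map to the request parameters (the helper function is hypothetical; the parameter shape follows the CloudWatch API):</p>

```javascript
// Sketch: map a Scalemate-style metric config entry plus a sampled value to
// the parameter shape expected by CloudWatch's PutMetricData API. The helper
// function is hypothetical; the Namespace/MetricData structure is the AWS API's.
function buildPutMetricParams(namespace, metric, value) {
  return {
    Namespace: namespace,
    MetricData: [{
      MetricName: metric.name,
      Unit: metric.unit,
      Value: value
    }]
  };
}

const params = buildPutMetricParams('MyApp', { name: 'Memory Available', unit: 'Count' }, 2048);
// With the aws-sdk package installed, this could be passed to:
// new AWS.CloudWatch().putMetricData(params, callback);
```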

<h3 id="testing">Testing</h3>

<p>Test the configuration you created by running:</p>

<pre><code>scalemate -c /etc/scalemate.js  
</code></pre>

<p>Observe the terminal output for any initial errors and for successfully published metrics. If no errors are emitted, you have successfully configured Scalemate!</p>

<p><img src="https://eladnava.com/content/images/2018/01/Screen-Shot-2018-01-14-at-3-03-45-PM.png" alt="Scale Your EC2 Cluster using Custom Metrics with Scalemate"></p>

<h2 id="verification">Verification</h2>

<p>Visit the CloudWatch console and find the published metrics under the <a href="https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#metricsV2:graph=~();namespace=Scalemate">Scalemate namespace</a>. </p>

<p>Select your app namespace and you should be able to see the custom metrics you configured!</p>

<h2 id="survivingreboots">Surviving Reboots</h2>

<p>To start Scalemate automatically after system reboots, edit your user's crontab by running:</p>

<pre><code>crontab -e  
</code></pre>

<p>Then, append the following line to the end of the crontab:</p>

<pre><code>@reboot /usr/bin/scalemate -c /etc/scalemate.js 2&gt; /tmp/scalemate.log &amp;
</code></pre>

<p>Save and reboot, then verify that Scalemate is running:</p>

<pre><code>ps aux | grep scalemate  
</code></pre>

<p>Finally, create an image of the server you installed and configured Scalemate on, and configure your entire EC2 cluster to use the same image. That way, the entire cluster will be publishing these custom metrics to AWS CloudWatch.</p>

<p>CloudWatch will, in turn, average out the metrics reported by all the servers in your cluster and let you define scaling alarms based on metric average values.</p>

<h2 id="scalingyourcluster">Scaling Your Cluster</h2>

<p>Congratulations, you can now configure CloudWatch alarms in the <a href="https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#alarm:alarmFilter=ANY">AWS CloudWatch console</a> based on these custom metrics!</p>

<p>Simply edit the existing CloudWatch alarms for your Auto Scaling Group and modify the metric being monitored, selecting one of the custom Scalemate metrics and defining an applicable alarm threshold based on the metrics.</p>

<p>Have any suggestions on additional custom metrics that should be added to Scalemate? Let me know in the comments! =)</p>]]></content:encoded></item><item><title><![CDATA[Check Your JavaScript Dependencies' License Requirements with tldrlegal]]></title><description><![CDATA[<p>You've just finished working on your shiny new JavaScript project, after months of hacking away at it, living on nothing but granola bars and instant ramen noodles, and making use of hundreds of <a href="https://www.npmjs.com">npm</a> dependencies. The JavaScript ecosystem is great in that sense, where a package exists for almost everything</p>]]></description><link>https://eladnava.com/check-your-dependencies-license-requirements-with-tldrlegal/</link><guid isPermaLink="false">36e1533a-af49-473d-9227-da0bb39ee104</guid><category><![CDATA[Open Source]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Sat, 28 Jan 2017 21:06:29 GMT</pubDate><media:content url="https://eladnava.com/content/images/2017/01/tldrlegal.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2017/01/tldrlegal.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2017/01/tldrlegal.jpg" alt="Check Your JavaScript Dependencies' License Requirements with tldrlegal"><p>You've just finished working on your shiny new JavaScript project, after months of hacking away at it, living on nothing but granola bars and instant ramen noodles, and making use of hundreds of <a href="https://www.npmjs.com">npm</a> dependencies. The JavaScript ecosystem is great in that sense, where a package exists for almost everything you want to achieve, and reinventing the wheel is not usually necessary. </p>

<p>However, this comes at a price. The more dependencies you rely upon in your projects, the higher the chance one of those dependencies, or one of its dependencies, has a restrictive license that requires you to fulfill some unusual obligation.</p>

<h2 id="howunusual">How unusual?</h2>

<p>Did you know that some open source software licenses require you to disclose your source code in its entirety if you use a package with such a license, such as the <code>GPL-2.0</code> and <code>AFL-3.0</code> licenses?</p>

<p>Or that there are software licenses that require you to explicitly mention the software in all of your product's advertising materials, such as the original <code>BSD 4-Clause</code> license?</p>

<p>Some of these obligations are not easily met by commercial projects, which are usually closed source. Every organization has closed-source projects; yes, even GitHub and npm do not open source all of their code.</p>

<p>Chances are, if your project has over 15 dependencies, at least one of them, or one of their transitive dependencies, is using a restrictive license with unusual obligations. If you don't check thoroughly and fulfill such obligations, you're susceptible to legal action by the package author(s), even if your project is free to use and open sourced.</p>

<p>Now, if you were to commercially distribute your project using a dependency with an unmet obligation, and that third party were to find out about it, well, let's hope that never happens.</p>

<p>You can easily prevent this from ever happening by using a new tool I released called <code>tldrlegal</code>.</p>

<h2 id="tldrlegal">tldrlegal</h2>

<p><a href="https://github.com/eladnava/tldrlegal">tldrlegal</a> is a Node.js command-line tool that checks your dependencies for license requirements using a legal resource called <a href="https://tldrlegal.com/">tldrlegal.com</a>, which provides plain English software license interpretations.</p>

<p><code>tldrlegal</code> makes use of <a href="https://github.com/franciscop/legally">legally</a>, a Node.js package that does an excellent job at determining your dependencies' licenses, using their <code>package.json</code> file, the <code>README.md</code> file, and the <code>LICENSE</code> file, since package maintainers use any of these to declare their license of choice. It turns out this is not the easiest of tasks, but <code>legally</code> still manages to do it with great accuracy.</p>
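<p>To illustrate the simplest of those cases, here is a sketch of reading a declared license straight out of a dependency's <code>package.json</code> (<code>legally</code> goes much further, also falling back to parsing <code>README.md</code> and <code>LICENSE</code> files, which this sketch omits):</p>

```javascript
// Sketch: the easiest license-detection case - reading the declared license
// out of a dependency's package.json text. The helper function is mine;
// the "license" string and older "licenses" array fields are real npm
// package.json conventions.
function declaredLicense(packageJsonText) {
  const pkg = JSON.parse(packageJsonText);
  if (typeof pkg.license === 'string') return pkg.license;
  // Older packages sometimes declare a "licenses" array instead
  if (Array.isArray(pkg.licenses) && pkg.licenses.length > 0) {
    return pkg.licenses[0].type;
  }
  return 'UNKNOWN';
}

console.log(declaredLicense('{"name":"left-pad","license":"MIT"}'));
// → MIT
```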

<p><a href="https://tldrlegal.com/">tldrlegal.com</a> lets you look up pretty much any popular software license and quickly understand what you can and can't do with that license, as well as what you must do if you make use of software under such a license. Without this website, <code>tldrlegal</code> could not exist.</p>

<h2 id="howtousetldrlegal">How to use tldrlegal</h2>

<p>Simply install <code>tldrlegal</code> globally via <code>npm</code> and run it in your project directory. The output will contain a summary and detailed information for each package with a licensing requirement, such as credit attribution, source disclosure, etc.</p>

<pre><code>npm install -g tldrlegal

cd my-js-project  
tldrlegal  
</code></pre>

<p>If any license restrictions are found, <code>tldrlegal</code> will output them to the console, along with a brief description:</p>

<p><img src="https://eladnava.com/content/images/2017/01/Screen-Shot-2017-01-28-at-8-18-39-PM.png" alt="Check Your JavaScript Dependencies' License Requirements with tldrlegal"></p>

<p>That's it, let me know what you think and if you have any ideas on how to improve <code>tldrlegal</code>!</p>

<h2 id="disclaimer">Disclaimer</h2>

<p>No legal-advising tool is ever complete without a proper disclaimer. </p>

<ol>
<li>This tool is not a replacement for proper legal consultation.  </li>
<li>Please be advised that the information provided by this tool may not be 100% accurate.</li>
</ol>]]></content:encoded></item><item><title><![CDATA[Publish a Universal Binary iOS Framework in Swift using CocoaPods]]></title><description><![CDATA[<p><a href="https://cocoapods.org/">CocoaPods</a> is the most popular dependency manager for Swift and Objective-C Cocoa projects, but chances are, you already knew that if you're here, wanting to publish your own CocoaPod to be used by others in their own projects.</p>

<p>Publishing a standard, open-source CocoaPod is relatively straightforward -- lots of tutorials</p>]]></description><link>https://eladnava.com/publish-a-universal-binary-ios-framework-in-swift-using-cocoapods/</link><guid isPermaLink="false">e5bba37a-a6f5-4ed8-938b-dfa55d45d7ee</guid><category><![CDATA[iOS]]></category><category><![CDATA[Xcode]]></category><category><![CDATA[Deployment]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Tue, 18 Oct 2016 16:58:00 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/10/cocoapods-1.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/10/cocoapods-1.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/10/cocoapods-1.jpg" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"><p><a href="https://cocoapods.org/">CocoaPods</a> is the most popular dependency manager for Swift and Objective-C Cocoa projects, but chances are, you already knew that if you're here, wanting to publish your own CocoaPod to be used by others in their own projects.</p>

<p>Publishing a standard, open-source CocoaPod is relatively straightforward -- lots of tutorials are widely available that outline the process rather efficiently -- but there is no definitive guide on how to publish a <strong>universal, binary CocoaPod</strong>: one that does not disclose its source files and supports both physical iOS device architectures (<code>armv7</code>, <code>arm64</code>) and virtual iOS simulator architectures (<code>i386</code>, <code>x86_64</code>).</p>

<p>Sometimes, you are simply not at liberty to disclose the source code of a CocoaPod. You might work for a company that develops an SDK which, for competitive reasons, must not be open source. That's when distributing a universal, binary framework is a must.</p>

<h2 id="installcocoapods">Install CocoaPods</h2>

<p>Obviously, to create a pod, you need to install CocoaPods. </p>

<p>For Xcode 8, you'll need CocoaPods version <code>1.1.0.rc.3</code> or newer. If you already have CocoaPods installed (check by running <code>pod --version</code>) and its version is older than the aforementioned version, first uninstall it by running <code>sudo gem uninstall cocoapods</code>.</p>

<p>To install the latest version of CocoaPods, execute the following command:</p>

<pre><code>sudo gem install cocoapods --pre  
</code></pre>

<h2 id="createaproject">Create a Project</h2>

<p>Create a new Xcode project for your framework and select the <strong>Cocoa Touch Framework</strong> template:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-11-24-07-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>Enter a <strong>Product Name</strong> and choose <strong>Swift</strong> as the project language. For the purpose of writing this guide, I've chosen to create a framework called <strong>MySDK</strong> that exposes a method which simply prints a <code>String</code> to the console.</p>

<p>Feel free to replace this dummy implementation with your own functionality, and replace <strong>MySDK</strong> with your own framework name whenever mentioned in the rest of this guide.</p>

<p>After Xcode finishes creating the project, feel free to delete the <code>MySDK.h</code> file included in the template, as you won't be needing it.</p>

<h2 id="developfunctionality">Develop Functionality</h2>

<p>Time to actually write and expose some APIs in your framework. </p>

<p>Create a file called <code>MySDK.swift</code> within the <code>MySDK</code> group, with the following contents:</p>

<pre><code>import Foundation

public class MySDK {  
    public class func logToConsole(msg: String) {
        print(msg)
    }
}
</code></pre>

<p>Note that you must explicitly label all classes and methods you wish to expose in your framework with the <code>public</code> keyword, otherwise, they won't be accessible in other projects.</p>

<h2 id="createademoproject">Create a Demo Project</h2>

<p>To test the framework in action and make sure it works as expected, we'll create a demo iOS app within the framework project that depends on the framework and invokes its method(s).</p>

<p>Open the project editor for the <strong>MySDK</strong> target and click on <strong>Editor -> Add Target</strong> in the menu bar.</p>

<p>Select <strong>Single View Application</strong> as the template:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-11-47-44-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>For the product name, enter <strong>Demo</strong>, and select <strong>Swift</strong> as the project language. </p>

<p>When the demo target is created, navigate to its project editor, scroll down to the <strong>Embedded Binaries</strong> section, click the <strong>+</strong> icon, and select <strong>MySDK.framework</strong>:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-8-43-16-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<h2 id="interfacewiththeframework">Interface with the Framework</h2>

<p>Open the demo project's <code>AppDelegate.swift</code> file and import your framework:</p>

<pre><code>import MySDK  
</code></pre>

<p>In the <code>applicationDidBecomeActive</code> method, add the following code to log that the app is now active:</p>

<pre><code>MySDK.logToConsole(msg: "Application active")  
</code></pre>

<p>And in the <code>applicationDidEnterBackground</code> method, add a different message:</p>

<pre><code>MySDK.logToConsole(msg: "Application inactive")  
</code></pre>

<p>Run the demo project on your iOS device or simulator. The Xcode console should print <code>Application active</code> once the app finishes launching. </p>

<p>Press the <strong>Home</strong> button (Cmd + Shift + H on the iOS Simulator) and the console should print <code>Application inactive</code>.</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-11-53-17-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>It appears that the demo project is able to successfully interface with the framework! </p>

<h2 id="enablearchiving">Enable Archiving</h2>

<p>Cocoa Touch Frameworks, by default, cannot be archived. </p>

<p>You can enable your framework to be archived by editing the framework target's <strong>Build Settings</strong> and setting <strong>Skip Install</strong> to <strong>No</strong>:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-8-57-18-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>However, if you attempt to archive now, Xcode will only build the <code>armv7</code> and <code>arm64</code> executables, which would make it impossible to run the framework on the iOS simulator. </p>

<h2 id="generateuniversalframework">Generate Universal Framework</h2>

<p>Due to a <a href="http://stackoverflow.com/questions/29634466/how-to-export-fat-cocoa-touch-framework-for-simulator-and-device">bug in Xcode</a>, it is impossible to archive universal frameworks without relying on external scripts. To include binaries for the iOS simulator, you can configure a post-archive script that will build your framework for the iOS simulator after you archive it, and merge both the simulator and iOS binaries into one fat universal framework.</p>
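<p>For reference, the core of what such a post-archive script does boils down to two steps. The fragment below is illustrative only, not a standalone script -- inside Xcode, the real paths come from build settings such as <code>${BUILD_DIR}</code>, and the paths shown here are placeholders:</p>

```sh
# 1. Build the framework again, this time against the iOS simulator SDK
xcodebuild -target MySDK -configuration Release -sdk iphonesimulator

# 2. Merge the device and simulator executables into a single fat binary
#    (paths are placeholders for the respective build products)
lipo -create \
    "device/MySDK.framework/MySDK" \
    "simulator/MySDK.framework/MySDK" \
    -output "universal/MySDK.framework/MySDK"
```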

<p>To have Xcode run the script after archiving automatically, select the <strong>MySDK</strong> target, click <strong>Product -> Scheme -> Edit Scheme</strong> (or Cmd + Shift + &lt;), and configure a <strong>Run Script</strong> post-action for the <strong>Archive</strong> command:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-9-17-26-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>Copy <a href="https://gist.github.com/eladnava/0824d08da8f99419ef2c7b7fb6d4cc78">this script</a> and paste it into the Run Script window (thanks to <a href="https://github.com/atsepkov/">@atsepkov</a> for the original script).</p>

<p>Be sure to select <strong>MySDK</strong> for the <strong>Provide build settings from</strong> setting and click <strong>Close</strong> to apply the changes.</p>

<h3 id="archiveframework">Archive Framework</h3>

<p>Finally, archive the framework by clicking <strong>Product -> Archive</strong> in the menu bar. If the option is greyed out, make sure to select a physical iOS device and not the iOS simulator.</p>

<p>Once the bundle is archived, the Xcode Organizer will pop up. Wait a few more seconds, and the Finder should also open up to your project directory with a universal <code>MySDK.framework</code> inside it.</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-9-20-00-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>You can verify that <code>MySDK.framework</code> is indeed a universal framework by running <code>file MySDK</code> within the <code>MySDK.framework</code> directory:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-18-at-9-08-48-PM.png" alt="Publish a Universal Binary iOS Framework in Swift using CocoaPods"></p>

<p>As the output suggests, the executable contains binaries for the <code>i386</code>, <code>x86_64</code>, <code>armv7</code> and <code>arm64</code> architectures, which makes it a universal, fat framework.</p>

<h2 id="cocoapodspecifications">CocoaPod Specifications</h2>

<p>Now that you have successfully exported the universal framework, let's distribute it as a CocoaPod! </p>

<p>Create a <code>MySDK.podspec</code> file in your project directory which will contain information about the CocoaPod you are publishing, such as its name, version, sources, and more. </p>

<p>Paste the following contents inside it:</p>

<pre><code>Pod::Spec.new do |s|  
    s.name              = 'MySDK'
    s.version           = '1.0.0'
    s.summary           = 'A really cool SDK that logs stuff.'
    s.homepage          = 'http://example.com/'

    s.author            = { 'Name' =&gt; 'sdk@example.com' }
    s.license           = { :type =&gt; 'Apache-2.0', :file =&gt; 'LICENSE' }

    s.platform          = :ios
    s.source            = { :http =&gt; 'http://example.com/sdk/1.0.0/MySDK.zip' }

    s.ios.deployment_target = '8.0'
    s.ios.vendored_frameworks = 'MySDK.framework'
end  
</code></pre>

<p>Feel free to customize the <code>.podspec</code> to your liking. Here is an explanation of the less obvious properties:</p>

<ul>
<li><code>s.license</code> - you must ship a license file with your CocoaPod, so go ahead and create a <code>LICENSE</code> file in your project directory with <a href="https://raw.githubusercontent.com/eladnava/mailgen/master/LICENSE">this as its content</a>.</li>
<li><code>s.source</code> - the hosted <code>.zip</code> location of your CocoaPod files (in your case, the <code>MySDK.framework</code> folder and the <code>LICENSE</code> file). More on this later.</li>
<li><code>s.ios.vendored_frameworks</code> - the path of the framework you are distributing within the <code>s.source</code> archive, after being decompressed.</li>
</ul>

<p>Did you know? When you publish a CocoaPod, the only file that actually gets pushed up to the <a href="https://github.com/CocoaPods/Specs">CocoaPods repository</a> is your <code>.podspec</code> file. Nothing else, which is why you must link to the hosted files via the <code>s.source</code> parameter.</p>

<h2 id="createaziparchive">Create a Zip Archive</h2>

<p>The easiest way to make your framework and license available with your CocoaPod is to archive them inside a <code>.zip</code>, upload it to your server, and link to it using the <code>s.source</code> parameter in the <code>.podspec</code> file. </p>

<p>Create the <code>MySDK.zip</code> file by running the following command in your project directory:</p>

<pre><code>zip -r MySDK.zip LICENSE MySDK.framework  
</code></pre>

<p>It's up to you to upload the <code>MySDK.zip</code> file to your server, in a path similar to the following:</p>

<pre><code>http://example.com/sdk/1.0.0/MySDK.zip  
</code></pre>

<p>You can also create a GitHub repository and push the <code>.zip</code> file to it.</p>

<p>Once you upload the <code>.zip</code>, link to it in the <code>s.source</code> parameter of the <code>.podspec</code> file.</p>

<h2 id="prepublishtesting">Pre-Publish Testing</h2>

<p>Before you publish, you will want to make sure that the CocoaPod <code>.podspec</code> is correctly configured to distribute your framework. </p>

<p>Create a brand new Xcode project and select <strong>Single View Application</strong> as its template.</p>

<p>Run the following command in the project directory:</p>

<pre><code>pod init  
</code></pre>

<p>Edit the <code>Podfile</code> and paste the following within the <code>target</code> declaration to reference a CocoaPod dependency from the local filesystem:</p>

<pre><code>pod 'MySDK', :podspec =&gt; '/Code/mysdk-pod/'  
</code></pre>

<p>Modify the <code>:podspec</code> to the local path of the framework project containing the <code>MySDK.podspec</code> file.</p>

<p>Save the file and run the following command to install the CocoaPod:</p>

<pre><code>pod install  
</code></pre>

<p>Reopen the test project via its newly-generated <code>.xcworkspace</code> file, and attempt to import your framework within the <code>AppDelegate.swift</code> file:</p>

<pre><code>import MySDK  
</code></pre>

<p>Also, attempt to access the various method(s), such as <code>MySDK.logToConsole</code>. Run the app on an iOS device. If everything worked as expected, you're ready to publish your framework!</p>

<h2 id="registeratrunkaccount">Register a Trunk Account</h2>

<p>To publish your <code>.podspec</code> file to the CocoaPods repository, you must first register an account with the CocoaPods Trunk. </p>

<blockquote>
  <p>The CocoaPods Trunk is an authentication and CocoaPods API service. To publish new or updated libraries to CocoaPods for public release, you will need to be registered with the Trunk and have a valid Trunk session on your current device.</p>
</blockquote>

<p>Register an account by running the following, entering your full name and e-mail address:</p>

<pre><code>pod trunk register you@email.com 'Full Name'  
</code></pre>

<p>Then, check your e-mail for a confirmation link and click it.</p>

<p>Your Trunk account is now activated and you can finally publish your CocoaPod!</p>

<h2 id="publishthepod">Publish the Pod</h2>

<p>Run the following command in the same directory as the <code>.podspec</code> to publish it to the CocoaPods repository:</p>

<pre><code>pod trunk push MySDK.podspec  
</code></pre>

<p>The CLI will validate your <code>.podspec</code> and attempt to install the CocoaPod by downloading the source <code>.zip</code> and validating its contents. If the command succeeds, you have just published your first universal binary CocoaPod!</p>

<h2 id="testagain">Test Again</h2>

<p>Create another test project, run <code>pod init</code>, and add the following to the <code>Podfile</code>:</p>

<pre><code>pod 'MySDK', '1.0.0'  
</code></pre>

<p>Then, run <code>pod install</code> and cross your fingers. </p>

<p>Reopen the test project's <code>.xcworkspace</code> file and once again, attempt to interface with your framework. Run the app on an iOS device. If everything worked as expected, you should be good to go!</p>

<p>Let me know if this guide helped you in the comments below!</p>]]></content:encoded></item><item><title><![CDATA[Send Push Notifications to iOS Devices using Xcode 8 and Swift 3]]></title><description><![CDATA[<p>Push notifications are a great way to ensure your users re-engage with your app every once in a while, but implementing them on iOS can be challenging, especially with all of the changes in Xcode and Swift, not to mention the various iOS versions which deprecate the notification classes we</p>]]></description><link>https://eladnava.com/send-push-notifications-to-ios-devices-using-xcode-8-and-swift-3/</link><guid isPermaLink="false">a8654008-68c0-4a0b-80a4-276d89b0df2f</guid><category><![CDATA[iOS]]></category><category><![CDATA[Push Notifications]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Thu, 06 Oct 2016 21:29:00 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/10/iphone2.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/10/iphone2.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/10/iphone2.jpg" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"><p>Push notifications are a great way to ensure your users re-engage with your app every once in a while, but implementing them on iOS can be challenging, especially with all of the changes in Xcode and Swift, not to mention the various iOS versions which deprecate the notification classes we grew accustomed to in the past.</p>

<p>The Internet is overflowing with guides on how to implement iOS push notifications -- however, many of these guides are cumbersome, complicated, not up-to-date with Swift 3 and Xcode 8, and/or don't provide backward-compatibility with all iOS versions that support Swift (iOS 7 - iOS 10). Also, they do not make use of the new APNs Auth Keys which greatly simplify the steps involved in sending push notifications.</p>

<p>By following this guide, you'll be able to implement push notifications in your iOS app and send notifications from Node.js, using the latest technologies and without much hassle!</p>

<p><img src="https://eladnava.com/content/images/2016/10/push.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<h2 id="preparations">Preparations</h2>

<p>First off, open your iOS project in Xcode 8. If you don't have Xcode 8 yet, be sure to update via the App Store. If you don't have an iOS project yet, simply create a new one. Make sure that your codebase has been updated to use Swift 3.</p>

<p>Second, make sure that you have an active <a href="https://developer.apple.com/programs/">Apple Developer Program Membership</a>, which costs <strong>$99/year</strong>. It is required in order to send push notifications to your iOS app. Also, make sure Xcode is configured to use the iCloud account which contains your active Apple Developer Program Membership.</p>

<p>Third, make sure that your app has a <strong>Bundle Identifier</strong> configured in the project editor:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-06-at-6-59-19-PM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<h2 id="enablingpushnotifications">Enabling Push Notifications</h2>

<p>The first step in setting up push notifications is enabling the feature within Xcode 8 for your app. Simply go to the project editor for your target and then click on the <strong>Capabilities</strong> tab. Look for <strong>Push Notifications</strong> and toggle its value to <strong>ON</strong>:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-06-at-6-57-50-PM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Xcode should display two checkmarks indicating that the capability was successfully enabled. Behind the scenes, Xcode creates an <a href="https://developer.apple.com/account/ios/identifier/bundle">App ID</a> in the Developer Center and enables the <strong>Push Notifications</strong> service for your app.</p>

<h2 id="registeringdevices">Registering Devices</h2>

<p>Devices need to be uniquely identified to receive push notifications.</p>

<p>Every device that installs your app is assigned a unique device token by APNs, which you can use to push notifications to it at any time. Once the device has been assigned a unique token, it should be persisted in your backend database.</p>

<p>A sample device token looks like this:</p>

<pre><code>5311839E985FA01B56E7AD74334C0137F7D6AF71A22745D0FB50DED665E0E882  
</code></pre>

<p>To request a device token for the current device, open <code>AppDelegate.swift</code> and add the following to the <code>didFinishLaunchingWithOptions</code> callback function, before the <code>return</code> statement:</p>

<pre><code>// iOS 10 support
if #available(iOS 10, *) {  
    UNUserNotificationCenter.current().requestAuthorization(options:[.badge, .alert, .sound]){ (granted, error) in }
    application.registerForRemoteNotifications()
}
// iOS 9 support
else if #available(iOS 9, *) {  
    UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.badge, .sound, .alert], categories: nil))
    UIApplication.shared.registerForRemoteNotifications()
}
// iOS 8 support
else if #available(iOS 8, *) {  
    UIApplication.shared.registerUserNotificationSettings(UIUserNotificationSettings(types: [.badge, .sound, .alert], categories: nil))
    UIApplication.shared.registerForRemoteNotifications()
}
// iOS 7 support
else {  
    application.registerForRemoteNotifications(matching: [.badge, .sound, .alert])
}
</code></pre>

<p>In iOS 10, a new framework called <code>UserNotifications</code> was introduced and must be imported in order to access the <code>UNUserNotificationCenter</code> class. </p>

<p>Add the following import statement to the top of <code>AppDelegate.swift</code>:</p>

<pre><code>import UserNotifications  
</code></pre>

<p>Next, go to the project editor for your target, and in the General tab, look for the <strong>Linked Frameworks and Libraries</strong> section. </p>

<p>Click <code>+</code> and select <code>UserNotifications.framework</code>:</p>

<p><img src="https://eladnava.com/content/images/2016/10/Screen-Shot-2016-10-06-at-10-07-01-PM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Next, add the following callbacks in <code>AppDelegate.swift</code> which will be invoked when APNs has either successfully registered or failed registering the device to receive notifications:</p>

<pre><code>// Called when APNs has assigned the device a unique token
func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {  
    // Convert token to string
    let deviceTokenString = deviceToken.reduce("", {$0 + String(format: "%02X", $1)})

    // Print it to console
    print("APNs device token: \(deviceTokenString)")

    // Persist it in your backend in case it's new
}

// Called when APNs failed to register the device for push notifications
func application(_ application: UIApplication, didFailToRegisterForRemoteNotificationsWithError error: Error) {  
    // Print the error to console (you should alert the user that registration failed)
    print("APNs registration failed: \(error)")
}
</code></pre>

<p>It's up to you to implement logic that will persist the token in your application backend. Later in this guide, your backend server will connect to APNs and send push notifications by providing this very same device token to indicate which device(s) should receive the notification.</p>

<p>Note that the device token may change in the future due to various reasons, so use <a href="https://developer.apple.com/reference/foundation/userdefaults">NSUserDefaults</a>, a local key-value store, to persist the token locally and only update your backend when the token has changed, to avoid unnecessary requests.</p>
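<p>On the backend side, if the raw token bytes ever reach your server rather than the hex string, the same conversion the Swift <code>reduce</code> performs is a one-liner in Node.js (a sketch; <code>tokenToHex</code> is a hypothetical helper name):</p>

```javascript
// Convert raw APNs device token bytes to the uppercase hex string
// format printed in the Xcode console (mirrors the Swift reduce above)
function tokenToHex(buffer) {
    return buffer.toString('hex').toUpperCase();
}

// Example: the first four bytes of a device token
console.log(tokenToHex(Buffer.from([0x53, 0x11, 0x83, 0x9e]))); // 5311839E
```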

<p>Run your app on a physical iOS device (the iOS simulator cannot receive notifications) after making the necessary modifications to <code>AppDelegate.swift</code>. Look for the following dialog, and press <strong>OK</strong> to permit your app to receive push notifications:</p>

<p><img src="https://eladnava.com/content/images/2016/10/permission-main.jpg" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Within a second or two, the Xcode console should display your device's unique token. Copy it and save it for later. </p>

<h2 id="preparetoreceivenotifications">Prepare to Receive Notifications</h2>

<p>Add the following callback in <code>AppDelegate.swift</code> which will be invoked when your app receives a push notification sent by your backend server:</p>

<pre><code>// Push notification received
func application(_ application: UIApplication, didReceiveRemoteNotification data: [AnyHashable : Any]) {  
    // Print notification payload data
    print("Push notification received: \(data)")
}
</code></pre>

<p>Note that this callback will only be invoked when the user taps or swipes to interact with your push notification from the Lock screen or Notification Center, or if your app was open when the push notification was received by the device.</p>

<p>It's up to you to develop the actual logic that gets executed when a notification is interacted with. For example, if you have a messenger app, a "new message" push notification should open the relevant chat page and cause the list of messages to be updated from the server. Make use of the <code>data</code> object which will contain any data that you send from your application backend, such as the chat ID, in the messenger app example.</p>

<p>It's important to note that in the event your app is open when a push notification is received, the user will not see the notification at all, and it is up to you to notify the user in some way. <a href="http://stackoverflow.com/questions/14872088/get-push-notification-while-app-in-foreground-ios">This StackOverflow question</a> lists some possible workarounds, such as displaying an in-app banner similar to the stock iOS notification banner.</p>

<h2 id="generateanapnsauthkey">Generate an APNs Auth Key</h2>

<p>The next step involves generating an authentication key that will allow your backend server to authenticate with APNs when it wants to send one or more of your devices a push notification. </p>

<p>Up until a few months ago, the only way to do this was a painful process that involved filling out a Certificate Signing Request in Keychain Access, uploading it to the Developer Center, downloading a signed certificate, and exporting its private key from Keychain Access (not to mention converting both certificates to <code>.pem</code> format). That certificate would then expire and need to be renewed every year, and was only valid for one deployment scheme: Development or Production. </p>

<p>Thankfully, Apple has greatly simplified the process of authenticating with APNs with the introduction of APNs Auth Keys, which never expire (unless revoked by you) and work for all deployment schemes. </p>

<p>Open the <a href="https://developer.apple.com/account/ios/authkey/">Keys -> All</a> page in your Developer Center and click the <code>+</code> button to create a new Auth Key.</p>

<p><img src="https://eladnava.com/content/images/2017/07/Screen-Shot-2017-07-02-at-9-58-19-AM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>On the next page, enter a name for your key, enable <strong>APNs</strong> and click <strong>Continue</strong> at the bottom of the page. </p>

<p><img src="https://eladnava.com/content/images/2017/07/Screen-Shot-2017-07-02-at-9-59-29-AM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Finally, click <strong>Confirm</strong> in the next page. Apple will then generate a <code>.p8</code> key file containing your APNs Auth Key.</p>

<p><img src="https://eladnava.com/content/images/2017/07/Screen-Shot-2017-07-02-at-10-00-25-AM.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Download the <code>.p8</code> key file to your computer and save it for later. Also, be sure to write down the <strong>Key ID</strong> somewhere, as you'll need it later when connecting to APNs.</p>

<h2 id="sendpushnotifications">Send Push Notifications</h2>

<p>Now it's time to set up your backend to connect to APNs to send notifications to devices! For the purpose of this guide and for simplicity, I'll choose to do this in Node.js. If you already have a backend implemented in another development language, look for another guide better-tailored for you, or simply follow along to send a test push notification to your device.</p>

<p>Make sure you have <a href="https://nodejs.org/en/download/">Node.js v4</a> or newer installed on your local machine and run the following in a directory of your choice:</p>

<pre><code>mkdir apns  
cd apns  
npm init --yes  
npm install apn --save  
</code></pre>

<p>These commands will initiate a new Node.js project and install the amazing <a href="https://github.com/node-apn/node-apn"><code>apn</code></a> package from npm, which authenticates with APNs and sends your push notifications.</p>

<p>Next, copy the <code>.p8</code> file you just downloaded into the <code>apns</code> folder we created. Name it <code>apns.p8</code> for simplicity.</p>

<p>Create a new file in the <code>apns</code> folder named <code>app.js</code> using your favorite editor, and paste the following code inside:</p>

<pre><code>var apn = require('apn');

// Set up apn with the APNs Auth Key
var apnProvider = new apn.Provider({  
     token: {
        key: 'apns.p8', // Path to the key p8 file
        keyId: 'ABCDE12345', // The Key ID of the p8 file (available at https://developer.apple.com/account/ios/certificate/key)
        teamId: 'ABCDE12345', // The Team ID of your Apple Developer Account (available at https://developer.apple.com/account/#/membership/)
    },
    production: false // Set to true if sending a notification to a production iOS app
});

// Enter the device token from the Xcode console
var deviceToken = '5311839E985FA01B56E7AD74444C0157F7F71A2745D0FB50DED665E0E882';

// Prepare a new notification
var notification = new apn.Notification();

// Specify your iOS app's Bundle ID (accessible within the project editor)
notification.topic = 'my.bundle.id';

// Set expiration to 1 hour from now (in case device is offline)
notification.expiry = Math.floor(Date.now() / 1000) + 3600;

// Set app badge indicator
notification.badge = 3;

// Play ping.aiff sound when the notification is received
notification.sound = 'ping.aiff';

// Display the following message (the actual notification text, supports emoji)
notification.alert = 'Hello World \u270C';

// Send any extra payload data with the notification which will be accessible to your app in didReceiveRemoteNotification
notification.payload = {id: 123};

// Actually send the notification
apnProvider.send(notification, deviceToken).then(function(result) {  
    // Check the result for any failed devices
    console.log(result);
});
</code></pre>

<p>There are several things to do before running this code:</p>

<ol>
<li>Configure the <code>keyId</code> property with the APNs Auth Key ID (available at <a href="https://developer.apple.com/account/ios/certificate/key">https://developer.apple.com/account/ios/certificate/key</a>)  </li>
<li>Configure the <code>teamId</code> property with your Apple Developer Account Team ID (available at <a href="https://developer.apple.com/account/#/membership/">https://developer.apple.com/account/#/membership/</a>)  </li>
<li>Configure <code>deviceToken</code> with the device token you generated after running your application and checking the console  </li>
<li>Configure <code>notification.topic</code> with your application's Bundle ID which is accessible in the project editor</li>
</ol>

<p>Now, lock your device, run <code>node app.js</code> and, lo and behold, provided you did everything right, your iOS device should receive the notification!</p>

<p><img src="https://eladnava.com/content/images/2016/10/done_iphone6_spacegrey_portrait.png" alt="Send Push Notifications to iOS Devices using Xcode 8 and Swift 3"></p>

<p>Interacting with the notification will print the following in your Xcode console since <code>didReceiveRemoteNotification</code> is invoked:</p>

<pre><code>[AnyHashable("id"): 123, AnyHashable("aps"): {
    alert = "Hello World \U270c";
    badge = 3;
    sound = "ping.aiff";
}]
</code></pre>
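<p>That dictionary is simply the JSON payload delivered by APNs: your custom <code>payload</code> keys sit alongside the reserved <code>aps</code> dictionary that carries the alert, badge and sound. A sketch of the equivalent wire-format payload the library assembles:</p>

```javascript
// Wire-format APNs payload corresponding to the notification built above:
// custom keys (id) live at the top level, next to the reserved `aps` key
const payload = {
    aps: {
        alert: 'Hello World \u270C',
        badge: 3,
        sound: 'ping.aiff'
    },
    id: 123
};

console.log(JSON.stringify(payload));
```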

<p>I hope you were able to get through this tutorial with ease. Let me know if this helped you in the comments below!</p>]]></content:encoded></item><item><title><![CDATA[Deploy a Highly-Available MongoDB Replica Set on AWS]]></title><description><![CDATA[<p>Ah, MongoDB. Arguably the leading NoSQL database available today, it makes it super easy to start hacking away on projects without having to worry about table schemas, while delivering extremely fast performance and lots of useful features.</p>

<p>However, running a scalable, highly-available MongoDB cluster is a whole 'nother story. You</p>]]></description><link>https://eladnava.com/deploy-a-highly-available-mongodb-replica-set-on-aws/</link><guid isPermaLink="false">53434094-f090-4430-aa98-2ac53c756368</guid><category><![CDATA[System Administration]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[High Availability]]></category><category><![CDATA[Amazon Web Services]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Sun, 24 Jul 2016 12:31:00 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/07/mongodb-1.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/07/mongodb-1.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/07/mongodb-1.jpg" alt="Deploy a Highly-Available MongoDB Replica Set on AWS"><p>Ah, MongoDB. Arguably the leading NoSQL database available today, it makes it super easy to start hacking away on projects without having to worry about table schemas, while delivering extremely fast performance and lots of useful features.</p>

<p>However, running a scalable, highly-available MongoDB cluster is a whole 'nother story. You need a good understanding of how replica sets work and familiarity with the inner workings of MongoDB. You also need to set up tooling to constantly monitor your cluster for replication lag, CPU usage, disk space utilization, and so on, as well as periodically back up your databases to prevent data loss.</p>

<p>While solutions such as <a href="https://www.compose.com/mongodb/">Compose</a> and <a href="https://www.mongodb.com/cloud">MongoDB Atlas</a> could save you the time and effort required in setting up and maintaining your own cluster, these solutions ultimately give you less control -- your data and uptime are in another company's hands, in addition to AWS's.</p>

<p>I have not had much luck with these kinds of solutions -- I experienced several unexpected instances of downtime that my services cannot tolerate. When things went wrong, all I could do was open a support ticket and wait (sometimes days!) until the engineers were able to resolve the issue.</p>

<p>If the database is the single most important part of your application, keep the controls in your own hands. Setting up your own cluster is actually not that hard -- let's get to it!</p>

<p><strong>Note:</strong> This is an extremely comprehensive guide -- make sure you have at least an hour to spare.</p>

<h2 id="replicasets">Replica Sets</h2>

<p>So, what is a replica set? Put simply, it is a group of MongoDB servers operating in a primary/secondary failover fashion. At any point there can only be one primary member within the replica set; however, you can have as many secondaries as you want. All secondaries actively replicate data off of the current primary member so that if it fails, one of them can take over quite seamlessly as the new primary. They do this by tailing the primary member's oplog, a capped collection that contains a log of every write operation performed against the server.</p>

<p>The more secondaries you have, and the more spread out they are over availability zones or regions, the less chance your cluster will ever experience downtime.</p>

<p>Your application will usually only run queries against the primary member in the replica set.</p>

<p><img src="https://eladnava.com/content/images/2016/07/replica-set-read-write-operations-primary.png" alt="Deploy a Highly-Available MongoDB Replica Set on AWS"></p>

<h2 id="replicasetmembers">Replica Set Members</h2>

<p>The most minimal replica set setup must have at least three healthy members to operate. One member will serve as the <strong>primary</strong>, another as the <strong>secondary</strong>, and the third as an <strong>arbiter</strong>. </p>

<p>Arbiters are members that participate in elections in order to break ties and do not actually replicate any data. If a replica set has an even number of members, we must add an arbiter member to act as a tie-breaker; otherwise, when the primary member fails or steps down, a new primary may not be elected!</p>

<p><img src="https://eladnava.com/content/images/2016/07/replica-set-primary-with-secondary-and-arbiter.png" alt="Deploy a Highly-Available MongoDB Replica Set on AWS"></p>

<p>This requirement was put in place to prevent <a href="https://en.wikipedia.org/wiki/Split-brain_(computing)">split-brain scenarios</a> where 2 secondary members in a cluster that can't communicate with each other each vote for themselves, causing both to become primaries, leading to data inconsistency and a plethora of other problems.</p>

<p>You can also avoid an arbiter by simply adding another secondary instead. All you really need is an odd number of members for elections to be held properly. However, an extra secondary will cost you more than an arbiter.</p>

<h2 id="instancetypes">Instance Types</h2>

<p>The data members in the replica set should be deployed to an instance type that suits your application's needs. Depending on your traffic, queries per second, and data size, you'll need to pick a matching instance type to accommodate that workload. The good news is that you can upgrade your instance type in the future in a matter of minutes and without downtime by utilizing replica set step downs, as we'll see later on.</p>

<p>I usually go with either <code>m3.medium</code> or <code>m4.large</code> for a production application with about 50 queries per second. If you're just starting to work on a new project, even <code>t2.nano</code> will do just fine. Note that <code>t2</code> instances have limited CPU credits and should not be used for high-throughput deployments, since their compute capacity is unpredictable.</p>

<p>It is absolutely fine to host the arbiter member on a weak instance type such as <code>t2.nano</code>, since all it will ever do is participate in elections. </p>

<h2 id="instancestorage">Instance Storage</h2>

<p>Always provision General Purpose (<code>gp2</code>) storage for MongoDB data members as the underlying disk is a network-attached SSD which will provide better read/write speeds than magnetic storage.</p>

<p>Note that if you select an <code>m4.large</code> instance type or larger, you'll also get <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html">EBS optimization</a> which will provide your instance with dedicated IO throughput to your EBS storage, increasing the number of queries per second your cluster will be able to handle, as well as preventing replication lag to your secondaries.</p>

<p>In addition, if you want to maximize your read/write IO throughput rate and your workload is big enough, consider using <a href="https://aws.amazon.com/ebs/details/">Provisioned IOPS</a> (<code>io1</code>) storage. This can be quite expensive though, depending on the number of IOPS you provision, so make sure you understand the <a href="https://aws.amazon.com/ebs/pricing/">pricing implications</a>.</p>

<h1 id="getstarted">Get Started</h1>

<p>The first step in setting up the replica set is to prepare the instances for running MongoDB and to make sure you have your own domain name.</p>

<h2 id="provisiontheinstances">Provision the Instances</h2>

<p>Spin up 3 brand-new <strong>Ubuntu 14.04 LTS</strong> instances in the <a href="https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:sort=tag:Name">EC2 console</a>, making sure to set up each one in a different availability zone, for increased availability in case of service outage in one AZ. Provision enough storage to fit your data size, and select the appropriate instance types for each replica set member. Also, create an EC2 key pair so that you can SSH into the instances. </p>

<p>Create a new security group, <code>mongodb-cluster</code>, and configure all three instances to use it. Allow SSH on port <code>22</code> from your IP only and port <code>27017</code> from the <code>mongodb-cluster</code> security group (<code>sg-65d4d11d</code> for example) as well as from your IP address, so that both you and the replica set members will be able to connect to each other's <code>mongod</code> process listening on port <code>27017</code>.</p>
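<p>If you prefer the command line, the same security group can be provisioned with the AWS CLI. This is only a sketch -- the VPC ID, group ID, and IP address below are placeholders you must replace with your own values:</p>

<pre><code data-language="shell"># Create the security group (note the group ID printed in the output)
aws ec2 create-security-group --group-name mongodb-cluster --description "MongoDB replica set" --vpc-id vpc-xxxxxxxx

# Allow SSH from your IP only (replace 203.0.113.5 with your address)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.5/32

# Allow MongoDB traffic from members of the same security group
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 27017 --source-group sg-xxxxxxxx</code></pre>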

<p>Next, request 3x <strong>Elastic IPs</strong> and attach them to each instance, so that your members will maintain the same public IP throughout their entire lifetime.</p>

<p>Finally, label each instance you created as follows, replacing <code>example.com</code> with your domain name:</p>

<ul>
<li><strong>Data</strong> - db1.example.com</li>
<li><strong>Data</strong> - db2.example.com</li>
<li><strong>Arbiter</strong> - arbiter1.example.com</li>
</ul>

<h2 id="setupdnsrecords">Setup DNS Records</h2>

<p>Head over to your domain's DNS management interface and add <code>CNAME</code> records for <strong>db1</strong>, <strong>db2</strong>, and <strong>arbiter1</strong>. For each record, enter each instance's <strong>Public DNS</strong> hostname, visible in the EC2 instances dashboard.</p>

<p><img src="https://eladnava.com/content/images/2016/07/Screen-Shot-2016-07-26-at-3-06-53-PM.png" alt="Deploy a Highly-Available MongoDB Replica Set on AWS"></p>

<blockquote>
  <p><strong>Pro tip:</strong> When your EC2 servers perform a DNS query to translate the Public DNS hostname to an IP, the EC2 DNS server will actually return the private IP address of the instance since it's in the same VPC as the instance performing the DNS query, thereby improving latency and bandwidth between the replica set members, and saving you from paying bandwidth costs.</p>
</blockquote>
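<p>You can verify this behavior yourself from one of your instances -- resolving a Public DNS hostname from within the VPC should yield a private address. The hostname below is hypothetical; use one of your own instances' Public DNS hostnames:</p>

<pre><code data-language="shell"># Hypothetical Public DNS hostname -- from inside the VPC this should
# resolve to a private IP (e.g. 172.31.x.x), not a public one
dig +short ec2-203-0-113-10.compute-1.amazonaws.com</code></pre>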

<h2 id="configuringtheservers">Configuring the Servers</h2>

<p>Before we can get the replica set up and running, we need to make a few modifications to the underlying OS so that it behaves nicely with MongoDB.</p>

<h3 id="setthehostname">Set the Hostname</h3>

<p>SSH into each server and set its hostname so that when we initialize the replica set, members will be able to understand how to reach one another:</p>

<pre><code data-language="shell">sudo bash -c 'echo db1.example.com > /etc/hostname && hostname -F /etc/hostname'</code></pre>

<p>Make sure to modify <code>db1.example.com</code> and set it to each server's DNS hostname. The first command will set the server's hostname in <code>/etc/hostname</code>, the second will apply it without having to reboot the machine.</p>

<p>Repeat this step on all replica set members. </p>

<h3 id="increaseoslimits">Increase OS Limits</h3>

<p>MongoDB needs to be able to create file descriptors when clients connect, and to spawn a large number of processes, in order to operate effectively. The default file and process limits shipped with Ubuntu are too low for MongoDB.</p>

<p>Modify them by editing the <code>limits.conf</code> file:</p>

<pre><code data-language="shell">sudo nano /etc/security/limits.conf</code></pre>

<p>Add the following lines to the end of the file:</p>

<pre><code>* soft nofile 64000
* hard nofile 64000
* soft nproc 32000
* hard nproc 32000
</code></pre>

<p>Next, create a file called <code>90-nproc.conf</code> in <code>/etc/security/limits.d/</code>:</p>

<pre><code data-language="shell">sudo nano /etc/security/limits.d/90-nproc.conf</code></pre>

<p>Paste the following lines into the file:</p>

<pre><code>* soft nproc 32000
* hard nproc 32000
</code></pre>

<p>Repeat this step on all replica set members. </p>

<h3 id="disabletransparenthugepages">Disable Transparent Huge Pages</h3>

<p>Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.</p>

<p>However, database workloads often perform poorly with THP, because they tend to have sparse rather than contiguous memory access patterns. You should disable THP to ensure best performance with MongoDB.</p>

<p>Run the following commands to create an init script that will automatically disable THP on system boot:</p>

<pre><code data-language="shell">sudo nano /etc/init.d/disable-transparent-hugepages</code></pre>

<p>Paste the following inside it:</p>

<pre><code data-language="shell">#!/bin/sh
### BEGIN INIT INFO
# Provides:          disable-transparent-hugepages
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    mongod mongodb-mms-automation-agent
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description:       Disable Linux transparent huge pages, to improve
#                    database performance.
### END INIT INFO

case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      return 0
    fi

    echo 'never' > ${thp_path}/enabled
    echo 'never' > ${thp_path}/defrag

    unset thp_path
    ;;
esac</code></pre>

<p>Make it executable:  </p>

<pre><code data-language="shell">sudo chmod 755 /etc/init.d/disable-transparent-hugepages</code></pre>

<p>Set it to start automatically on boot:  </p>

<pre><code data-language="shell">sudo update-rc.d disable-transparent-hugepages defaults</code></pre>

<p>Repeat this step on all replica set data members.</p>

<h3 id="turnoffcoredumps">Turn Off Core Dumps</h3>

<p>MongoDB generates core dumps on some <code>mongod</code> crashes. For production environments, you should turn off core dumps, since generating them can take minutes or even hours if your dataset is large.</p>

<pre><code data-language="shell">sudo nano /etc/default/apport</code></pre>

<p>Find:</p>

<pre><code data-language="shell">enabled=1</code></pre>

<p>Replace with:</p>

<pre><code data-language="shell">enabled=0</code></pre>

<h3 id="configurethefilesystem">Configure the Filesystem</h3>

<p>By default, Linux updates a file's last access time whenever the file is read. When MongoDB performs frequent operations against the filesystem, this creates unnecessary overhead and performance degradation. We can disable this feature by editing the <code>fstab</code> file:</p>

<pre><code data-language="shell">sudo nano /etc/fstab</code></pre>

<p>Add the <code>noatime</code> flag directly after <code>defaults</code>:</p>

<pre><code data-language="shell">LABEL=cloudimg-rootfs   /        ext4   defaults,noatime,discard        0 0</code></pre>

<h4 id="readaheadblocksize">Read Ahead Block Size</h4>

<p>In addition, the default disk read ahead settings on EC2 are not optimized for MongoDB. The number of blocks to read ahead should be adjusted to approximately 32 blocks (or 16 KB) of data. We can achieve this by adding a <code>crontab</code> entry that will execute when the system boots up:</p>

<pre><code data-language="shell">sudo crontab -e</code></pre>

<p>Choose <code>nano</code> by pressing <code>2</code> if this is your first time editing the crontab, and then append the following to the end of the file:</p>

<pre><code>@reboot /sbin/blockdev --setra 32 /dev/xvda1</code></pre>

<p>Make sure that your EBS volume is mounted on <code>/dev/xvda1</code>. Save the file and reboot the server:</p>

<pre><code data-language="shell">sudo reboot</code></pre>

<p>Repeat this step on all replica set data members.</p>

<h2 id="verification">Verification</h2>

<p>After rebooting, you can check whether the new hostname is in effect by running:</p>

<pre><code data-language="shell">hostname</code></pre>

<p>Check that the OS limits have been increased by running:</p>

<pre><code data-language="shell">ulimit -u # max number of processes
ulimit -n # max number of open file descriptors</code></pre>

<p>The first command should output <code>32000</code>, the second <code>64000</code>.</p>

<p>Check whether the Transparent Huge Pages feature was disabled successfully by issuing the following commands:</p>

<pre><code data-language="shell">cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag</code></pre>

<p>For both commands, the correct output resembles:</p>

<pre><code data-language="shell">always madvise [never]</code></pre>

<p>Check that <code>noatime</code> was successfully configured:</p>

<pre><code data-language="shell">cat /proc/mounts | grep noatime</code></pre>

<p>It should print a line similar to:  </p>

<pre><code data-language="shell">/dev/xvda1 / ext4 rw,noatime,discard,data=ordered 0 0</code></pre>

<p>In addition, verify that the disk read-ahead value is correct by running:</p>

<pre><code data-language="shell">sudo blockdev --getra /dev/xvda1</code></pre>

<p>It should print <code>32</code>.</p>

<h3 id="installmongodb">Install MongoDB</h3>

<p>Run the following commands to install the latest stable <code>3.4.x</code> version of MongoDB:</p>

<pre><code data-language="shell">sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
echo "deb [ arch=amd64 ] http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

sudo apt-get update
sudo apt-get install -y mongodb-org</code></pre>

<p>These commands will also auto-start <code>mongod</code>, the MongoDB daemon. Repeat this step on all replica set members.</p>

<h3 id="configuremongodb">Configure MongoDB</h3>

<p>Now it's time to configure MongoDB to operate in replica set mode, as well as allow remote access to the server.</p>

<pre><code data-language="shell">sudo nano /etc/mongod.conf</code></pre>

<p>Find and remove the following line entirely, or prefix it with a <code>#</code> to comment it out:</p>

<pre><code>bindIp: 127.0.0.1</code></pre>

<p>Next, find:</p>

<pre><code>#replication:</code></pre>

<p>Add the following below, replacing <code>example-replica-set</code> with a name for your replica set:</p>

<pre><code>replication:
 replSetName: "example-replica-set"</code></pre>

<p>Finally, restart MongoDB to apply the changes:</p>

<pre><code data-language="shell">sudo service mongod restart</code></pre>

<p>Make these modifications on all of your members, making sure to specify the same exact replica set name when configuring the other members.</p>

<h3 id="initializethereplicaset">Initialize the Replica Set</h3>

<p>Connect to one of the MongoDB instances (preferably <code>db1</code>) to initialize the replica set and declare its members. Note that you only have to run these commands on one of the members. MongoDB will synchronize the replica set configuration to all of the other members automatically.</p>

<p>Connect to MongoDB via the following command:</p>

<pre><code data-language="shell">mongo db1.example.com</code></pre>

<p>Initialize the replica set:</p>

<pre><code>rs.initiate()</code></pre>

<p>The command will automatically add the current member as the first member of the replica set.</p>

<p>Add the second data member to the replica set:</p>

<pre><code>rs.add("db2.example.com")</code></pre>

<p>And finally, add the arbiter, making sure to pass in <code>true</code> as the second argument (which denotes that the member is an arbiter and not a data member).</p>

<pre><code>rs.add("arbiter1.example.com", true)</code></pre>

<h3 id="verifyreplicasetstatus">Verify Replica Set Status</h3>

<p>Take a look at the replica set status by running:</p>

<pre><code>rs.status()</code></pre>

<p>Inspect the <code>members</code> array. Look for one <code>PRIMARY</code>, one <code>SECONDARY</code>, and one <code>ARBITER</code> member. All members should have a <code>health</code> value of <code>1</code>. If not, make sure the members can talk to each other on port <code>27017</code> by using <code>telnet</code>, for example.</p>
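<p>Such a connectivity check, using the hostnames from this guide, is as simple as running the following from any member's shell:</p>

<pre><code data-language="shell"># Should report a successful connection if port 27017 is reachable
telnet db2.example.com 27017</code></pre>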

<h2 id="setuplogrotation">Setup Log Rotation</h2>

<p>By default, MongoDB will fill up the <code>/var/log/mongodb/mongod.log</code> file with gigabytes of data. It will be very hard to work with this log file if we do not set up log rotation in advance.</p>

<p>Install <code>logrotate</code> as follows:</p>

<pre><code>sudo apt-get install logrotate</code></pre>

<p>Configure log rotation for MongoDB:</p>

<pre><code>sudo nano /etc/logrotate.d/mongod</code></pre>

<p>Paste the following contents:</p>

<pre><code>/var/log/mongodb/*.log {
    daily
    rotate 5
    compress
    dateext
    missingok
    notifempty
    sharedscripts
    copytruncate
    postrotate
        /bin/kill -SIGUSR1 `cat /var/lib/mongodb/mongod.lock 2> /dev/null` 2> /dev/null || true
    endscript
}</code></pre>

<p>This will set up daily log rotation for <code>mongod.log</code> as well as send the <code>SIGUSR1</code> signal to <code>mongod</code> when the log file is rotated so that it starts writing to the new log file.</p>

<h2 id="replicasetadministration">Replica Set Administration</h2>

<p>Now that your replica set is highly-available and healthy, let's go over how to manage it.</p>

<h3 id="connectingtoyourreplicaset">Connecting to Your Replica Set</h3>

<p>To connect to any member of the replica set, simply run:</p>

<pre><code>mongo db1.example.com</code></pre>

<p>Replace <code>db1.example.com</code> with any of the replica set member hostnames.</p>

<p>To send queries to your replica set from your application, simply use a MongoDB driver along with the following connection string:</p>

<pre><code>mongodb://db1.example.com,db2.example.com/db-name?replicaSet=example-replica-set</code></pre>

<p>Make sure to replace <code>example.com</code> with your domain, <code>db-name</code> with the database you want to run queries against, and <code>example-replica-set</code> with the replica set name you configured in <code>mongod.conf</code>.</p>
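<p>Before wiring the connection string into your application, you can test it directly from the <code>mongo</code> shell (the hostnames, database, and replica set name below follow the examples in this guide):</p>

<pre><code data-language="shell">mongo "mongodb://db1.example.com,db2.example.com/db-name?replicaSet=example-replica-set"</code></pre>

<p>The shell prompt should indicate that you are connected to the <code>PRIMARY</code> member of <code>example-replica-set</code>.</p>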

<h3 id="performingmaintenanceonthereplicaset">Performing Maintenance on the Replica Set</h3>

<p>If you want to perform some kind of maintenance on a member of the replica set, make sure it's a secondary member. You do not want to shut down the primary member without stepping down and letting another secondary become the primary first.</p>

<p>Run the following command on the secondary member's <code>mongo</code> shell:</p>

<pre><code>db.shutdownServer()</code></pre>

<p>Feel free to reboot the instance, modify its instance type, add more storage, provision more IOPS, etc. When you're done, simply start up the server and make sure <code>mongod</code> is running. The secondary member will catch up to the primary by examining its oplog and replicating anything it missed during its downtime window.</p>

<p>One thing to note, though, is that you should not shut down secondaries for too long; otherwise, the entries they missed will roll off the primary's capped oplog and they won't be able to catch up. Not the end of the world, but this will require you to perform a <a href="https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/">full resync of the secondary member(s)</a> which might take time.</p>

<p>When you're done performing maintenance on all of the secondaries in the replica set, make sure all members of the replica set are healthy and then issue the following command on the primary member's <code>mongo</code> shell to ask it to step down and let another secondary take its place:</p>

<pre><code>rs.stepDown()</code></pre>

<p>An election will then take place and the replica set members will vote for a new primary member. This can take anywhere from 10 to 50 seconds. During the election, the replica set will be unavailable for writes, since there is no primary member while voting takes place. Assuming you have an odd number of members, and there are healthy secondary members, a new primary will be elected and the replica set will be writable again.</p>

<h3 id="survivingstepdowns">Surviving Step Downs</h3>

<p>Your application must be prepared to deal with step downs by queueing up the writes and reattempting them when the new primary has been elected. </p>

<p>This can easily be achieved with a Node.js package I developed called <a href="https://www.npmjs.com/package/monkster">monkster</a> which abstracts this for you automatically by implementing a retry mechanism when the replica set is unavailable due to a missing primary or other temporary network error.</p>

<h3 id="automatedbackups">Automated Backups</h3>

<p>It's a good idea to set up a mechanism to automatically back up your database(s) every day to Amazon S3. If you accidentally delete an entire collection, secondaries will replicate that change and delete it locally as well, so backups will protect you from human error.</p>

<p>Check out <a href="https://gist.github.com/eladnava/96bd9771cd2e01fb4427230563991c8d">mongodb-s3-backup.sh</a>, a shell script I created that will automatically back up one of your databases to S3. You can configure it to run on an arbiter, for example, and have it read the data from a secondary, to avoid impacting the primary's performance. Read the gist for further instructions.</p>
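<p>The essence of such a backup script boils down to a <code>mongodump</code> followed by an S3 upload. Here is a minimal sketch (not the gist itself), assuming the AWS CLI is configured and the bucket name is replaced with your own:</p>

<pre><code data-language="shell">#!/bin/sh
# Dump a single database from a secondary and upload it to S3
BACKUP_NAME="db-name-$(date +%Y-%m-%d).tar.gz"

mongodump --host db2.example.com --db db-name --out /tmp/backup
tar -czf "/tmp/$BACKUP_NAME" -C /tmp backup
aws s3 cp "/tmp/$BACKUP_NAME" "s3://your-backup-bucket/$BACKUP_NAME"

# Clean up local files
rm -rf /tmp/backup "/tmp/$BACKUP_NAME"</code></pre>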

<h3 id="replicasetmonitoring">Replica Set Monitoring</h3>

<p>It's important to constantly monitor your replica set to avoid downtime or other problematic situations caused by network issues or insufficient resources.</p>

<p>The following should be monitored via a script:</p>

<ul>
<li>The health status of the replica set (available via <code>rs.status()</code>)</li>
<li>The health status of each replica set member, from the point of view of each member</li>
<li>The minimum number of replica set members (should be 3 or more)</li>
<li>The number of replica set members should be odd, not even</li>
<li>The existence of a primary replica set member (this may fail if an election is in progress)</li>
<li>The last heartbeat timestamp from one member to another being less than 3 minutes ago from the point of view of all members</li>
<li>The oplog date on secondary members, which indicates if they've fallen behind on replication (it should not exceed 15 minutes ago)</li>
<li>Disk space usage does not exceed 80% on each and every member</li>
<li>A recent S3 backup exists in case things go south</li>
</ul>
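<p>Several of these checks can be scripted with nothing more than <code>mongo --eval</code>. The following is a rough sketch of a health check that alerts when any member is unhealthy or no primary exists (hostname per this guide's examples):</p>

<pre><code data-language="shell">#!/bin/sh
# Count unhealthy members (health != 1) and primaries in the replica set
UNHEALTHY=$(mongo --quiet db1.example.com --eval 'rs.status().members.filter(function (m) { return m.health !== 1; }).length')
PRIMARIES=$(mongo --quiet db1.example.com --eval 'rs.status().members.filter(function (m) { return m.stateStr === "PRIMARY"; }).length')

if [ "$UNHEALTHY" != "0" ] || [ "$PRIMARIES" != "1" ]; then
    echo "Replica set is unhealthy!" # Alert via mail, Slack, etc.
fi</code></pre>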

<p>I developed a Node.js package that monitors most of these for you called <a href="https://github.com/eladnava/mongomonitor">mongomonitor</a>, be sure to check it out!</p>

<h2 id="thatsit">That's it!</h2>

<p>Well done! You've just finished deploying your very own highly-available MongoDB replica set on AWS! Let me know if this helped you!</p>]]></content:encoded></item><item><title><![CDATA[Save up to 90% on Your AWS Bill with Spot Instances]]></title><description><![CDATA[<p>Is the majority of your AWS monthly bill made up of thousands of on-demand EC2 instance hours, steadily increasing month by month? </p>

<p>If your organization knows it'll be around for years to come, you've probably already purchased Reserved Instances to attempt to lower the cost (by paying in advance for</p>]]></description><link>https://eladnava.com/save-up-to-90-percent-on-your-aws-bill-with-spot-instances/</link><guid isPermaLink="false">4ff128f8-75c2-4457-9c1a-c5cf2ee3ec74</guid><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[System Administration]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Fri, 01 Jul 2016 09:21:00 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/07/aws.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/07/aws.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/07/aws.jpg" alt="Save up to 90% on Your AWS Bill with Spot Instances"><p>Is the majority of your AWS monthly bill made up of thousands of on-demand EC2 instance hours, steadily increasing month by month? </p>

<p>If your organization knows it'll be around for years to come, you've probably already purchased Reserved Instances to attempt to lower the cost (by paying in advance for instances, which leads to up to a 75% cost reduction, depending on how far in advance you pay). But this pricing model isn't applicable to most startups and small businesses, as they may shut down abruptly at a moment's notice. Whether you utilize Reserved Instances or not, there's another nifty trick you can take advantage of to get up to a <strong>90% discount</strong> off the on-demand EC2 instance price, without paying anything in advance!</p>

<p>Even if your organization doesn't seem to care about optimizing the AWS monthly bill, I'm sure they'd appreciate the huge savings you could harness by utilizing Spot instances correctly! You might even be able to convince the management to organize a company-wide recreational day with the money you save on the monthly AWS bill.</p>

<h2 id="spotinstances">Spot Instances</h2>

<p>Their name is a bit misleading -- they are pretty much regular EC2 instances, simply priced much lower (usually up to 90% off the on-demand price) and with a different life expectancy. They are not a new type of offering, either -- they have actually existed for <a href="https://aws.amazon.com/about-aws/whats-new/2009/12/14/announcing-amazon-ec2-spot-instances/">quite some time now</a>, and yet most AWS customers still don't take full advantage of them.</p>

<p>Why does AWS offer the same underlying hardware for much less, without any upfront payment? Here's why.</p>

<p>AWS provisions <a href="http://www.bloomberg.com/news/2014-11-14/5-numbers-that-illustrate-the-mind-bending-size-of-amazon-s-cloud.html">millions of physical servers</a> in advance for their on-demand EC2 instance service. They need to be prepared for when their clients decide they want to spin up 1,000 <code>m4.large</code> instances within a few minutes' time, without any notice in advance.</p>

<p><img src="https://eladnava.com/content/images/2016/07/aws-1.jpg" alt="Save up to 90% on Your AWS Bill with Spot Instances"></p>

<p>Since AWS will probably always be ready to grant you your on-demand instances when you need them, they have to provision the hardware in advance and keep it running to support your request at a moment's notice. This creates a situation where tons of EC2 compute power is being actively wasted -- physical servers just sitting there doing absolutely nothing, wasting precious resources and money for Amazon (electricity, hardware, cooling, etc.), as they wait for clients to utilize them by starting up on-demand instances. There will always be spare compute power in the AWS cloud, waiting for clients to utilize it.</p>

<blockquote>
  <p>Imagine you had to file in a request for on-demand instances and wait a few hours until AWS manually plugged in and provisioned more physical servers in its data centers to fulfill your request! That wouldn't be much fun.</p>
</blockquote>

<h3 id="sparecomputepower">Spare Compute Power</h3>

<p>Spot instances are Amazon's solution to this problem -- essentially a stock market for spare compute power. Amazon sells any excess compute power for ridiculously low hourly rates -- usually 80% - 90% off the on-demand price for each supported instance type (not all instance types are available as Spot instances). The Spot price, or the hourly cost of running a Spot instance, is specific to each instance type and availability zone, and is decided by Amazon's secret algorithm, which factors in the supply and demand of this spare compute power.</p>

<blockquote>
  <p>Spot instances let you bid on spare Amazon EC2 instances to name your own price for compute capacity. The Spot price fluctuates based on the supply and demand of available EC2 capacity. Your Spot instance is launched when your bid exceeds the current Spot market price, and will continue to run until you choose to terminate it, or until the Spot market price exceeds your bid. </p>
</blockquote>

<p>Unfortunately, since this is a market, the price can fluctuate both ways. The Spot price will, 95% of the time, be substantially less than the on-demand price. However, when demand for compute power within the AWS cloud grows, or when some clients bid too high for Spot instances, the Spot price may spike and actually surpass the on-demand price for a short period of time -- and if the Spot price exceeds your bid price, your server will be shut down with just 2 minutes' notice.</p>

<h3 id="showmethemoney">Show Me the Money</h3>

<p>So what does the Spot price usually look like?</p>

<p>Here's a graph representing 3 months of Spot pricing history for the <code>m4.large</code> instance type in four different <code>us-east-1</code> AZs, generated on July 1st of 2016:</p>

<p><img src="https://eladnava.com/content/images/2016/07/history.jpg" alt="Save up to 90% on Your AWS Bill with Spot Instances"></p>

<p>For reference, the standard on-demand price for <code>m4.large</code> is <code>$0.12</code> per instance hour, approximately <code>$86.40</code> per month. </p>

<p>As you can see from the graph, most of the time, the Spot price fluctuates between <code>$0.0185</code> (85% less than on-demand, <code>$13.32</code> / month) and <code>$0.0276</code> (77% less than on-demand, <code>$19.87</code> / month), except for when it peaks ridiculously, for relatively short periods of time, most likely due to high demand in a certain AZ. Paying around <code>$15</code> per month instead of <code>$86.40</code> for an <code>m4.large</code> instance sounds like a hell of a deal to me!</p>
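<p>The monthly figures above are simple arithmetic -- the hourly price multiplied by roughly 720 hours in a month. You can sanity-check any Spot price the same way:</p>

<pre><code data-language="shell"># Monthly cost of an instance at a given hourly price (720 hours/month)
awk 'BEGIN { printf "%.2f\n", 0.12 * 720 }'   # on-demand m4.large: 86.40
awk 'BEGIN { printf "%.2f\n", 0.0185 * 720 }' # typical Spot price: 13.32</code></pre>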

<p>It's also worth noting that most of the time, the Spot price peaks in only one AZ at a time. So if you spread out your Spot instances in multiple AZs (you should already be doing this for high availability anyway), there is much less chance that all of your Spot instances will terminate at once.</p>

<p>You can view an up-to-date pricing history graph by checking out the <a href="https://console.aws.amazon.com/ec2sp/v1/spot/launch-wizard">Spot instance launch wizard</a> in the AWS Console.</p>

<h2 id="fearofspotinstances">Fear of Spot Instances</h2>

<p>I speculate that AWS customers are afraid of using Spot for the following reasons:</p>

<ul>
<li>The workflow involved in provisioning them is cluttered and messy</li>
<li>People are scared off by the fact that Spot instances may shut down abruptly as the Spot price increases past their max bid price</li>
<li>Spot instances are not supported in all AWS services (for example, in Elastic Beanstalk, however, there are <a href="https://forums.aws.amazon.com/message.jspa?messageID=707034">hacks to make them work in EB as well</a>)</li>
</ul>

<p>Furthermore, most people don't really understand how to utilize Spot instances safely. They tend to go with a radical approach to using Spot -- all or nothing. You see, you shouldn't be relying on Spot instances for 100% of your workload. What you should be doing is <strong>supplementing</strong> your on-demand instances with Spot instances, where appropriate.</p>

<p>I'd say it's pretty much a safe bet to utilize Spot for 70% of your stateless workload, provided you have at least 10 total instances. Since the Spot price usually won't spike in all AZs at once, you should be good to go. However, the percentage of Spot instances you employ in your workload is definitely application and task-specific. Make your own decision on how much Spot to supplement your scalable environment with.</p>

<p>There is one exception though -- feel free to use 100% Spot instances for any stateless app that is not meant for production -- development and test servers are absolutely fine for Spot, as long as you're OK with suffering a bit of downtime every once in a while if the Spot price peaks.</p>

<h3 id="bidding">Bidding</h3>

<p>How do you know how much to bid for each Spot instance? A good practice is to simply bid 100% of the on-demand price. That way, you'll never pay more than if the instance were a standard on-demand instance, and you won't be charged excessively when the Spot price spikes uncontrollably. The best part is that you don't pay your max bid price; instead, you pay the current Spot price. If you set your max bid price to 100% of the on-demand price, you have absolutely nothing to lose when using Spot!</p>
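<p>Put another way, with a bid equal to the on-demand price, your hourly charge behaves like this (a toy model of the billing rule, not an AWS API):</p>

```javascript
// Toy model of Spot billing under a "bid the on-demand price" strategy:
// you pay the current market price while it stays at or below your bid,
// and the instance terminates (no further charge) once the price exceeds it.
function hourlyCharge(spotPrice, bidPrice) {
    if (spotPrice > bidPrice) {
        return null; // instance terminated by AWS
    }
    return spotPrice; // billed the market price, never your bid
}

console.log(hourlyCharge(0.0185, 0.12)); // 0.0185 -- the market price
console.log(hourlyCharge(0.35, 0.12));   // null -- terminated during a spike
```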

<p>You can then simply replace terminating Spot instances with on-demands and suffer no blow to your wallet, and hopefully with no noticeable service degradation, provided you supplemented correctly.</p>

<h3 id="supplementationexample">Supplementation Example</h3>

<p>A backend API service running on 20 on-demand <code>m4.large</code> instances could instead be running on 10 on-demand and 10 Spot instances, setting the maximum bid price at 100% of the on-demand price. You could then set up a monitoring service to automatically increase the number of on-demand instances when Spot instances get terminated, and terminate the extra on-demands when the Spots are back up.</p>
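<p>The replacement logic of such a monitoring service boils down to a few lines. A sketch -- the function name and the 10/10 split are illustrative assumptions, not an AWS feature:</p>

```javascript
// Keep total capacity at a target by backfilling terminated Spot
// instances with on-demand instances, never dropping below the
// baseline on-demand fleet.
function desiredOnDemandCount(targetTotal, baseOnDemand, healthySpot) {
    return Math.max(baseOnDemand, targetTotal - healthySpot);
}

console.log(desiredOnDemandCount(20, 10, 10)); // 10 -- all Spots healthy
console.log(desiredOnDemandCount(20, 10, 4));  // 16 -- 6 Spots terminated
console.log(desiredOnDemandCount(20, 10, 0));  // 20 -- Spot price spiked in all AZs
```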

<h3 id="spotisntforeverything">Spot Isn't for Everything</h3>

<p>Note that you shouldn't be using Spot for running any sensitive workloads. If it isn't clear enough already, you should <strong>NOT</strong> run your databases on Spot. Spot instances are meant for workloads which do not persist any sensitive data to a local disk, since they may shut down abruptly as the Spot price increases past your bid price. They're perfect as web servers, API backends, Hadoop, etc. Any kind of workload that can be interrupted and replaced by an on-demand instance without any need for backup and restore. Basically, any stateless application or task that can be interrupted safely.</p>

<h3 id="spotblocks">Spot Blocks</h3>

<p>AWS also provides an option to guarantee uninterrupted uptime for your Spot instance for a fixed duration of up to 6 hours, in exchange for a higher Spot price. You won't be affected by a price spike, but you'll pay a bit more for each instance hour, and your instance will always be terminated once the requested duration ends. </p>

<p>You may prefer this type of Spot instance if you have some one-time task that you need less than 6 hours to finish and don't want it interrupted.</p>

<h2 id="usingspot">Using Spot</h2>

<p>There are several ways to utilize Spot instances:</p>

<ul>
<li>Manually request them using the <a href="https://console.aws.amazon.com/ec2sp/v1/spot/dashboard">Spot Requests console</a> -- not recommended, as there is no resiliency here: once the Spot price exceeds your bid price, your instances will shut down and AWS will not restore them after the price goes back down</li>
<li>Request a "Spot Fleet" in the <a href="https://console.aws.amazon.com/ec2sp/v1/spot/launch-wizard?region=us-east-1">Spot Requests console</a> to attempt to maintain your target capacity by relaunching Spot instances after the price goes down again</li>
<li>Configure an <a href="https://console.aws.amazon.com/ec2/autoscaling/home?region=us-east-1#LaunchConfigurations:">EC2 Launch Configuration</a> to use Spot instances instead of on-demand and hook up an <a href="https://console.aws.amazon.com/ec2/autoscaling/home?region=us-east-1#AutoScalingGroups:">Auto Scaling Group</a> to use your launch configuration (recommended)</li>
<li>Configure an Elastic Beanstalk environment to utilize 100% Spot instances (not recommended except for non-production workloads)</li>
</ul>

<p>I'd recommend going with the Launch Configuration method, as it's the least cluttered. I tend to find the Spot requests console a bit messy to work with. </p>

<p>If you wish to supplement an existing Elastic Load Balancer with Spot instances, simply create another Auto Scaling Group, duplicate its Launch Configuration, modify it to use Spot instances, and finally, attach it to your existing ELB. Your ELB will now forward traffic to both Auto Scaling Groups, and you'll be able to easily increase or decrease the number of desired instances in either Auto Scaling Group to maintain a good on-demand-to-Spot-instances ratio.</p>

<p>That's it! Let me know how you take advantage of Spot instances to save tons of money on your AWS bill in the comments below!</p>]]></content:encoded></item><item><title><![CDATA[Set Up a Service Status Page for Free with Cachet]]></title><description><![CDATA[<p>Any tech company that provides its goods and services over the Internet, whether it be in the form of a dashboard interface or feature-rich API, needs to prepare for the inevitable and unexpected hiccups that plague service uptime.</p>

<h2 id="servicedegradation">Service Degradation</h2>

<p>It manifests itself in various forms, ranging from increased API</p>]]></description><link>https://eladnava.com/set-up-a-service-status-page-for-free-with-cachet/</link><guid isPermaLink="false">8fa070a9-68c9-435a-ba76-21a8678fcff4</guid><category><![CDATA[System Administration]]></category><category><![CDATA[Reliability]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Sun, 05 Jun 2016 20:45:00 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/06/cachet.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/06/cachet.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/06/cachet.jpg" alt="Set Up a Service Status Page for Free with Cachet"><p>Any tech company that provides its goods and services over the Internet, whether it be in the form of a dashboard interface or feature-rich API, needs to prepare for the inevitable and unexpected hiccups that plague service uptime.</p>

<h2 id="servicedegradation">Service Degradation</h2>

<p>It manifests itself in various forms: increased API response times, elevated API error rates, DNS provider outages, datacenter blackouts, database lockups, DDoS attacks, and so on.</p>

<p>An organization can only do so much to avoid service degradation, but sometimes, life just gets in the way. Whether it's a new employee who somehow managed to push untested code to the main production boxes or, even worse, someone forgetting to renew HTTPS certificates, these things just happen.</p>

<h3 id="transparency">Transparency</h3>

<p>The best way for organizations to handle these kinds of situations is with complete transparency. This can easily be achieved by setting up a status page that reflects your service's operational status. You can set up this status page to display uptime metric graphs, any recent incidents, what is being done to resolve them, and an incident history.</p>

<p>There are several paid status page solutions available, but if your budget is tight, luckily, there's <a href="https://cachethq.io/">Cachet</a>.</p>

<blockquote>
  <p>Cachet is a beautiful and powerful open source status page system, a free replacement to services such as StatusPage.io, Status.io and others.</p>
</blockquote>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-12-01-26-AM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>Cachet is indeed free and open-source, and one of the greatest things about it is that it's easy to customize its theme to fit your branding.</p>

<p>Here's a <a href="https://demo.cachethq.io/">demo of Cachet</a>.</p>

<p>The only downside of Cachet is that its installation docs are very lacking. I had to waste nearly 4 hours trying to get it to install successfully. But no matter, I've managed to nail down the perfect installation steps necessary to get it up and running in no time!</p>

<h2 id="settingthingsup">Setting Things Up</h2>

<p>Let's begin by spinning up a new <strong>Ubuntu 14.04 LTS</strong> instance. Choose whichever cloud provider you like most, but do consider its reliability history as this status page should be an always-up, go-to place for when your service goes down. </p>

<p>A good idea is to host this instance in a completely different region than the one with all of your production instances, so that in case an entire availability zone or region goes down, the status page shall prevail.</p>

<h3 id="instanceconfiguration">Instance Configuration</h3>

<p>As for the instance compute power, you can definitely cut costs here -- I'd even suggest going with a <code>t2.nano</code> if you're using AWS, it costs about $4.50 a month.</p>

<p>Make sure to assign a public IP to the server, as well as allow access on port <code>80</code> to all IPs, and on <code>22</code> to your IP address.</p>

<p>Finally, set up an <code>A Record</code> in your domain's DNS record management interface so that <code>status.you.com</code> will point to the server you just created.</p>

<h3 id="installdependencies">Install Dependencies</h3>

<p>First things first, update the package cache:</p>

<pre><code data-language="shell">sudo apt-get update</code></pre>

<p>Let's install some basic requirements for Cachet, including Apache2 and PHP:</p>

<pre><code data-language="shell">sudo apt-get install git curl apache2 php5 libapache2-mod-php5 php5-gd php5-apcu php5-mcrypt php5-sqlite php5-cli</code></pre>

<p>Next, install <a href="https://getcomposer.org/">Composer</a>, a dependency manager for PHP:</p>

<pre><code data-language="shell">curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
</code></pre>

<p>Finally, enable Apache's <code>mod_rewrite</code> as it's required for Cachet to work:</p>

<pre><code data-language="shell">sudo a2enmod rewrite</code></pre>

<h2 id="installcachet">Install Cachet</h2>

<p>Head over to the <code>ubuntu</code> user's home directory:</p>

<pre><code data-language="shell">cd ~</code></pre>

<p>Let's clone Cachet to the server. At the time of writing, the latest stable version of Cachet is <code>v2.2.2</code>.</p>

<p>Check for the latest stable version of Cachet <a href="https://github.com/CachetHQ/Cachet/releases">in the official releases page</a> and in case there's a newer version number, plug it into the following command (instead of <code>v2.2.2</code>):</p>

<pre><code data-language="shell">git clone https://github.com/cachethq/Cachet.git
cd Cachet
git checkout v2.2.2</code></pre>

<p>There is actually a dependency reference error in <code>v2.2.2</code> that Cachet project maintainers have <a href="https://github.com/CachetHQ/Cachet/issues/1749#issuecomment-216905458">not yet fixed</a> in the stable branch. So let's manually fix it by running the following command:</p>

<pre><code data-language="shell">sed -i 's/use Illuminate\\Support\\Facades\\Str;/use Illuminate\\Support\\Str;/g' app/Http/Controllers/Dashboard/SettingsController.php</code></pre>

<h3 id="installphpdependencies">Install PHP Dependencies</h3>

<p>Install all of the Composer dependencies:</p>

<pre><code>composer install --no-dev -o</code></pre>

<h3 id="configurecachet">Configure Cachet</h3>

<p>Create an <code>.env</code> file with the following contents to have Cachet write to a local SQLite database, which is fast and the easiest option to configure:</p>

<pre><code data-language="shell">nano .env</code></pre>

<p>Paste the following contents inside:</p>

<pre><code>APP_ENV=production
APP_DEBUG=false
APP_URL=http://status.you.com
APP_KEY=

DB_DRIVER=sqlite

CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
CACHET_EMOJI=false

MAIL_DRIVER=smtp
MAIL_HOST=null
MAIL_PORT=null
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ADDRESS=null
MAIL_NAME=null
MAIL_ENCRYPTION=tls

REDIS_HOST=null
REDIS_DATABASE=null
REDIS_PORT=null

GITHUB_TOKEN=null</code></pre>

<p><strong>Note:</strong> Make sure to replace <code>status.you.com</code> with the DNS hostname you set up earlier.</p>

<p>Run the following commands to prepare for the installation:</p>

<pre><code data-language="shell">php artisan key:generate
php artisan config:clear
touch ./database/database.sqlite
chmod -R 777 .env storage database bootstrap/cache</code></pre>

<p>Finally, run the following command to install Cachet:</p>

<pre><code data-language="shell">php artisan app:install</code></pre>

<p>If all goes well, great! </p>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-12-31-43-PM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>If not, check <code>Cachet/storage/logs/laravel-YYYY-MM-DD.log</code> for the stack trace. It's most likely a permission issue.</p>

<p>Now all that's left is to set up Apache to serve Cachet's <code>public/</code> folder.</p>

<h3 id="configureapache">Configure Apache</h3>

<p>Run the following commands to link <code>/var/www/html</code> to Cachet's <code>public/</code> directory:</p>

<pre><code data-language="shell">sudo mv /var/www/html /var/www/html-old
sudo ln -s /home/ubuntu/Cachet/public /var/www/html</code></pre>

<p>Let's permit <code>.htaccess</code> directives by editing the Apache2 configuration:</p>

<pre><code data-language="shell">sudo nano /etc/apache2/sites-available/000-default.conf</code></pre>

<p>Find:</p>

<pre><code>DocumentRoot /var/www/html</code></pre>

<p>Add below:</p>

<pre><code>&#x3C;Directory /var/www/html&#x3E;
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
&#x3C;/Directory&#x3E;</code></pre>

<p>Finally, restart Apache2 for changes to take effect:</p>

<pre><code data-language="shell">sudo service apache2 restart</code></pre>

<h3 id="setupwizard">Setup Wizard</h3>

<p>Open a web browser and head over to your status page hostname. You should see the following setup screen:</p>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-2-09-30-AM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>Leave the default cache/session drivers as-is and click <strong>Next</strong>. </p>

<p>On the next page, you'll be asked to enter some details about the status page, such as its name, URL, time zone, language, etc. </p>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-12-34-21-PM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>On the third setup page, you'll be asked to set up an administrator account for managing the status page.</p>

<p>When you're done filling everything in, click <strong>Go to dashboard</strong>. You'll then be presented with this marvelous login screen:</p>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-2-14-01-AM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>Within the dashboard, you'll be able to add service components (e.g. API / DB / Website / etc), incidents, metrics (graphs), and much more. </p>

<p><img src="https://eladnava.com/content/images/2016/06/Screen-Shot-2016-06-06-at-2-20-03-AM.png" alt="Set Up a Service Status Page for Free with Cachet"></p>

<p>To set up HTTPS for your status page, follow <a href="https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04">this awesome tutorial by DigitalOcean</a> to set up Apache with a free Let's Encrypt certificate.</p>

<p>If you want your customers to be able to subscribe to e-mail alerts, check out the <a href="https://docs.cachethq.io/docs/configuring-mail">Cachet e-mail setup docs</a>.</p>

<p>Well done, you've successfully set up a status page for your service! Now, it's up to you to set up mechanisms to automatically update this status page with <a href="https://docs.cachethq.io/docs/incidents">new incidents</a> or <a href="https://docs.cachethq.io/docs/post-metric-points">updated metric data</a>, using the Cachet API. </p>]]></content:encoded></item><item><title><![CDATA[Generate Responsive Transactional E-mail with Mailgen]]></title><description><![CDATA[<p>There's just no way around it. Whether it's sending your users welcome e-mails, password reset requests, purchase receipts, or billing reminders, almost any web-based service eventually needs to start sending transactional e-mail to its users.</p>

<p>Usually, this is a daunting task, mainly because sending responsive transactional e-mail is actually not</p>]]></description><link>https://eladnava.com/generate-responsive-transactional-e-mail-with-mailgen/</link><guid isPermaLink="false">46e05677-7584-4434-80a2-eade38f1336b</guid><category><![CDATA[Node.js]]></category><category><![CDATA[User Experience]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Thu, 26 May 2016 22:36:38 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/05/Inbox-by-Gmail.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/05/Inbox-by-Gmail.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/05/Inbox-by-Gmail.jpg" alt="Generate Responsive Transactional E-mail with Mailgen"><p>There's just no way around it. Whether it's sending your users welcome e-mails, password reset requests, purchase receipts, or billing reminders, almost any web-based service eventually needs to start sending transactional e-mail to its users.</p>

<p>Usually, this is a daunting task, mainly because sending responsive transactional e-mail is actually not so easy to pull off:</p>

<ul>
<li>You need to build or import an e-mail template and inject your text into it</li>
<li>You need to prepare a <a href="https://litmus.com/blog/best-practices-for-plain-text-emails-a-look-at-why-theyre-important">plaintext version</a> of the e-mail to send along with the HTML e-mail</li>
<li>You need to inline the CSS in the template (can't use the <code>&lt;style&gt;</code> tag when sending e-mails)</li>
<li>You need to make sure the template is responsive (as most of the time people will actually be <a href="http://www.emailmonday.com/mobile-email-usage-statistics">reading it on their mobile device</a>)</li>
</ul>

<p>And usually this will end up cluttering your JavaScript code with HTML and <code>&lt;br /&gt;</code> statements all over the place, not to mention wasting time that could have been spent building your actual product.</p>
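<p>To get a feel for just one of those chores, here's a deliberately naive plaintext-fallback sketch -- real HTML-to-text conversion also has to handle links, lists, and entities, which is exactly the busywork you want to avoid:</p>

```javascript
// Naive HTML-to-plaintext conversion, for illustration only
function toPlaintext(html) {
    return html
        .replace(/<br\s*\/?>/gi, '\n') // line breaks become newlines
        .replace(/<[^>]+>/g, '')       // strip all other tags
        .replace(/\n{3,}/g, '\n\n')    // collapse runs of blank lines
        .trim();
}

console.log(toPlaintext('<p>Hi John,<br />Welcome aboard!</p>'));
// Hi John,
// Welcome aboard!
```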

<p>But not anymore. </p>

<h1 id="mailgen">Mailgen</h1>

<p><a href="https://github.com/eladnava/mailgen">Mailgen</a> is a Node.js package I built that generates clean, responsive HTML e-mails for you, without having to get down and dirty with the actual templating, inlining, responsiveness, etc.</p>

<blockquote>
  <p>Programmatically create beautiful e-mails using plain old JavaScript.</p>
</blockquote>

<p>Simply pass in your text, and <code>mailgen</code> will do the rest.</p>

<p>This simple code:</p>

<pre><code data-language="javascript">var Mailgen = require('mailgen');

// Configure mailgen by setting a theme and your product info
var mailGenerator = new Mailgen({
    theme: 'default',
    product: {
        // Appears in header & footer of e-mails
        name: 'Mailgen',
        link: 'https://mailgen.js/'
        // Optional product logo
        // logo: 'https://mailgen.js/img/logo.png'
    }
});

// Prepare email contents
var email = {
    body: {
        name: 'John Appleseed',
        intro: 'Welcome to Mailgen! We’re very excited to have you on board.',
        action: {
            instructions: 'To get started with Mailgen, please click here:',
            button: {
                color: 'green',
                text: 'Confirm Your Account',
                link: 'https://mailgen.js/confirm?s=d9729feb74992cc3482b350163a1a010'
            }
        },
        outro: 'Need help, or have questions? Just reply to this email, we\'d love to help.'
    }
};

// Generate an HTML email using mailgen
var emailBody = mailGenerator.generate(email);</code></pre>

<p>Generates this awesome e-mail:</p>

<p><img src="https://eladnava.com/content/images/2016/05/68747470733a2f2f7261772e6769746875622e636f6d2f656c61646e6176612f6d61696c67656e2f6d61737465722f73637265656e73686f74732f64656661756c742f77656c636f6d652e706e67.png" alt="Generate Responsive Transactional E-mail with Mailgen"></p>

<p>You can customize the generated e-mails by entering your product name and logo, as well as providing a custom theme file to be used instead of the <code>default</code> theme. I intend to add at least 5 more built-in themes to <code>mailgen</code> to make things awesome right out of the box.</p>

<p>Interestingly enough, two days after I published <code>mailgen</code>, it had already been <strong>downloaded 1,000+ times on npm</strong> and <strong>starred 200+ times on GitHub</strong>. It exploded so fast I was caught off guard and had to stop everything I was doing to take care of the feature requests piling up. But hey, who am I to complain -- finally, one of my open-source projects gets a crazy amount of attention!</p>

<p>What do you think about <code>mailgen</code>? Have any ideas on how to improve it? Let me know in the comments below or by <a href="https://github.com/eladnava/mailgen/issues/new">opening a GitHub issue</a>!</p>]]></content:encoded></item><item><title><![CDATA[Publishing Your First Package to npm]]></title><description><![CDATA[<p><a href="https://www.npmjs.com">npm</a>, a.k.a. the Node Package Manager, is a developer-friendly command-line package manager included with Node.js. It makes it super-easy to install other people's JavaScript packages to extend your projects as well as publish your own JavaScript code with the world.</p>

<p>That exciting feeling you get when you</p>]]></description><link>https://eladnava.com/publishing-your-first-package-to-npm/</link><guid isPermaLink="false">17892a90-cb7a-4d8b-98da-5f4b9c6dd60e</guid><category><![CDATA[Node.js]]></category><category><![CDATA[npm]]></category><dc:creator><![CDATA[Elad Nava]]></dc:creator><pubDate>Thu, 19 May 2016 21:50:39 GMT</pubDate><media:content url="https://eladnava.com/content/images/2016/05/npm-1.jpg" medium="image"/><enclosure length="0" url="https://eladnava.com/content/images/2016/05/npm-1.jpg" type="image/jpeg"/><content:encoded><![CDATA[<img src="https://eladnava.com/content/images/2016/05/npm-1.jpg" alt="Publishing Your First Package to npm"><p><a href="https://www.npmjs.com">npm</a>, a.k.a. the Node Package Manager, is a developer-friendly command-line package manager included with Node.js. It makes it super-easy to install other people's JavaScript packages to extend your projects, as well as share your own JavaScript code with the world.</p>

<p>That exciting feeling you get when you publish an open-source project on GitHub? This one's even better, because npm makes it dead-simple for people to use your package. It's only an <code>npm install</code> away. And if your package does something useful, people will find it without you having to spread the word about it -- developers actively search <a href="https://npmjs.com/">npmjs.com</a> for packages all the time to instantly add worlds of functionality to their apps. </p>

<p><img src="https://eladnava.com/content/images/2016/05/Screen-Shot-2016-05-20-at-12-32-20-AM.png" alt="Publishing Your First Package to npm"></p>

<p>Due to the magnitude of npm, some critics even go so far as to claim that Node.js developers simply <code>npm install</code> whatever they need their app to do and never write a single line of code themselves. That's actually not as far-fetched as it sounds, nor is it a bad thing. Why not build upon the experience of other developers who were faced with the same exact task at hand?</p>

<h3 id="anintroductiontonpm">An Introduction to npm</h3>

<p>Skip this if you're familiar with how npm works.</p>

<p>The npm CLI works by reading and writing to a file called <code>package.json</code> within your project's root which looks similar to this:</p>

<pre><code data-language="javascript">{
  "name": "my-cool-package",
  "version": "1.0.0",
  "description": "A cool package for demonstration purposes",
  "main": "index.js",
  "author": "Elad Nava &lt;eladnava@gmail.com&gt;",
  "license": "Apache-2.0",
  "dependencies": {
    "express": "^4.13.4",
    "mongoose": "^4.4.17"
  }
}</code></pre>

<p>Install an npm package by running the following command within your project's root directory:</p>

<pre><code data-language="shell">npm install express --save</code></pre>

<p>npm will then fetch the popular <a href="https://www.npmjs.com/package/express">Express</a> package and extract it into the <code>node_modules/</code> folder within your project's root directory.</p>

<blockquote>
  <p>The <code>--save</code> argument instructs npm to add the package and its installed version to your project's <code>package.json</code>.</p>
</blockquote>

<p>When other people want to work on your project, they simply clone it to their workstation and run:</p>

<pre><code data-language="shell">npm install</code></pre>

<p>All of the dependencies listed within your <code>package.json</code> will then be automatically installed to their local machine.</p>

<h3 id="apackageforeverything">A Package for Everything</h3>

<p>The coolest thing about npm is that there's a package for almost everything. It is, by far, the <a href="https://nodesource.com/blog/npm-is-massive/">largest package manager</a> for any development language. Simply <a href="https://www.npmjs.com/search?q=example">search npm</a> for the functionality you're looking to add to your project, and there's a pretty good chance you'll find a package that does exactly what you need!</p>

<p>But sometimes, you'll be working on something so unique that you won't be able to find anything like it on npm. And since you love open source and want to give back to this amazing community, you'll want to publish your awesome package and give it a catchy name that will be easy to remember and to <code>npm install</code>. But how do you go about doing that?</p>

<h2 id="publishingyourfirstpackage">Publishing Your First Package</h2>

<p>It's actually easier than it sounds. Let's get to it!</p>

<h3 id="startingout">Starting Out</h3>

<p>Let's begin by naming your package. You'll want to pick a name that is both self-explanatory and easy to remember. Some of the <a href="https://www.npmjs.com/browse/star">most popular npm packages</a> tend to go for English words: </p>

<ul>
<li>express</li>
<li>request</li>
<li>async</li>
<li>forever</li>
<li>underscore</li>
</ul>

<p>And the list goes on... But it's definitely not mandatory to pick an English word. Just make sure that the name corresponds to the functionality of your package, as that will help people find it with ease.</p>

<p>One thing to note here: npm doesn't scope your packages to your account like GitHub scopes your repositories. This means that you should double-check that someone didn't already publish a package with the desired name by plugging it into this URL: <br>
<a href="https://www.npmjs.com/package/package-name-goes-here">https://www.npmjs.com/package/package-name-goes-here</a></p>

<p><img src="https://eladnava.com/content/images/2016/05/Screen-Shot-2016-05-19-at-11-05-51-PM.png" alt="Publishing Your First Package to npm"></p>

<p>You gotta love npm for that 404 page. =) If you are presented with a different page, it probably means that the package name is already taken up by someone else.</p>

<h3 id="functionality">Functionality</h3>

<p>It's time to think about the core functionality of your package. For the sake of this example, let's build a simple package called <strong>is-null-or-empty</strong>, which will receive a string and return <code>true</code> if it's null, undefined, or empty, or <code>false</code> in all other cases. Feel free to substitute the package name and logic with your own idea.</p>

<p>This is how the package would be used:</p>

<pre><code data-language="javascript">var isNullOrEmpty = require('is-null-or-empty');

console.log(isNullOrEmpty("")); // true
console.log(isNullOrEmpty(null)); // true
console.log(isNullOrEmpty(undefined)); // true

console.log(isNullOrEmpty("Hello World")); // false</code></pre>

<p>Pretty simple package, right? That's perfectly fine. Writing tiny packages that each do just one thing well is a practice known as <a href="https://medium.freecodecamp.com/in-defense-of-hyper-modular-javascript-33934c79e113">Hyper Modular JavaScript</a>, which encourages composing more complex goals out of such packages, as well as <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRYing up our code</a>.</p>

<p>The name I chose for the package is a little long, but you can definitely understand what it's supposed to do when it's named so verbosely. Sometimes our packages do something so specific that it would make sense to name them this way, so that people will find them easily and understand their purpose instantly.</p>

<h3 id="sourcecontrol">Source Control</h3>

<p>Alright, now that we've got both a name and an idea for your package, let's begin to write the actual code.</p>

<p>First things first -- we should set up source control. <a href="https://github.com/">GitHub</a> is recommended for this as npm integrates nicely with it, as we'll see later. </p>

<p>Create a new repository called <code>is-null-or-empty</code> under your GitHub account by visiting: <br>
<a href="https://github.com/new">https://github.com/new</a></p>

<p>Next, open up a terminal and run the following to create a directory for your package:</p>

<pre><code>mkdir is-null-or-empty  
cd is-null-or-empty  
</code></pre>

<p>Once inside, let's initialize a local git repository for your package, as well as hook up a remote origin to it (the GitHub repo you just created).</p>

<pre><code>git init .  
git remote add origin https://github.com/{your-username}/is-null-or-empty.git  
</code></pre>

<p>Make sure to replace <code>{your-username}</code> with your GitHub username.</p>

<h3 id="packagemetadata">Package Metadata</h3>

<p>Next, let's create a basic <code>package.json</code> file by running the following:</p>

<pre><code>npm init  
</code></pre>

<p>npm will now ask us to input some details about the project. You can pretty much press <code>Enter</code> all the way through, but make sure to fill in the following fields:</p>

<ul>
<li><strong>Author</strong>: First Last &lt;email@provider.com&gt;</li>
<li><strong>Description</strong>: Checks whether a given string is null or empty.</li>
<li><strong>License</strong>: <code>MIT</code> / <code>GPL-3.0</code> / <code>Apache-2.0</code> (check out <a href="http://choosealicense.com">http://choosealicense.com</a> for help with this)</li>
</ul>

<p>For the rest of the options, you can leave the default values as they are. </p>

<p>Notice how npm automatically detects the GitHub repository you configured in the previous step. It will link to your repository on the npm package listing page so people can view the package source and contribute to it.</p>

<p>Finally, enter <code>yes</code> to write the <code>package.json</code> file to the disk.</p>

<p><img src="https://eladnava.com/content/images/2016/05/Screen-Shot-2016-05-20-at-12-34-27-AM.png" alt="Publishing Your First Package to npm"></p>
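<p>For reference, the generated <code>package.json</code> should look roughly like the sketch below. The values here are placeholders; yours will reflect whatever you entered during <code>npm init</code>:</p>

<pre><code data-language="json">{
  "name": "is-null-or-empty",
  "version": "1.0.0",
  "description": "Checks whether a given string is null or empty.",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/{your-username}/is-null-or-empty.git"
  },
  "author": "First Last &lt;email@provider.com&gt;",
  "license": "MIT"
}</code></pre>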

<h3 id="writethecode">Write the Code</h3>

<p>Finally, the part you've been waiting for! Writing your package's oh-so-complicated logic.</p>

<p>We need to create a file called <code>index.js</code>, as that is your package's configured entry point (specified by the <code>main</code> property in your <code>package.json</code>). When others require your package, this is the file Node.js will load first.</p>

<p>Open up your favorite JavaScript IDE (give <a href="https://www.visualstudio.com/en-us/products/code-vs.aspx">VS Code</a> a try!) and create a new file called <code>index.js</code>, pasting the following inside it:</p>

<pre><code data-language="javascript">// Main package function
function isNullOrEmpty(input) {
    // Returns true if the input is either undefined, null, or empty, false otherwise
    return (input === undefined || input === null || input === '');
}

// Make the main function available to other packages that require us
module.exports = isNullOrEmpty;</code></pre>

<p>Notice the <code>module.exports</code> assignment. We must explicitly tell Node.js which functions we want to make accessible to other code that requires our package. Without it, no one would be able to access the <code>isNullOrEmpty()</code> function we defined inside.</p>
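<p>As an optional side note (not something this tutorial requires), the two strict comparisons against <code>undefined</code> and <code>null</code> can be condensed into one: loose equality with <code>null</code> (<code>==</code>) matches both of those values, and nothing else:</p>

<pre><code data-language="javascript">// Equivalent shorthand: input == null is true for both null and undefined
function isNullOrEmpty(input) {
    return (input == null || input === '');
}

console.log(isNullOrEmpty(null)); // true
console.log(isNullOrEmpty(undefined)); // true
console.log(isNullOrEmpty('')); // true
console.log(isNullOrEmpty('Hello World')); // false</code></pre>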

<h3 id="writeanexamplefile">Write an Example File</h3>

<p>The best way to demonstrate how to use your package is to write an example script that makes use of it, documenting your package's return values. Add an <code>example.js</code> file to your project with the following content:</p>

<pre><code data-language="javascript">// Change './index' to 'is-null-or-empty' if you use this code outside of this package
var isNullOrEmpty = require('./index');

console.log(isNullOrEmpty("")); // true
console.log(isNullOrEmpty(null)); // true
console.log(isNullOrEmpty(undefined)); // true

console.log(isNullOrEmpty("Hello World")); // false</code></pre>

<p>Notice how the require statement references <code>./index</code> instead of <code>is-null-or-empty</code>. To require your own package from within its own code, you have to reference its entry point, <code>index.js</code>, directly by writing <code>require('./index')</code>. Anyone else who installs your package will be able to <code>require('is-null-or-empty')</code> as expected.</p>

<h3 id="testitout">Test It Out</h3>

<p>Now it's time to make sure your package actually works! Run the <code>example.js</code> file with an IDE of your choice or simply by invoking <code>node</code>:</p>

<pre><code data-language="shell">node example.js</code></pre>

<p>As expected, here's the output:</p>

<p><img src="https://eladnava.com/content/images/2016/05/Screen-Shot-2016-05-19-at-11-36-40-PM.png" alt="Publishing Your First Package to npm"></p>
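<p>As an optional extra step (not part of the original tutorial), you could also add a small self-checking script using Node's built-in <code>assert</code> module, so the expected return values are verified automatically instead of read off the console. The inline function below is a stand-in so the sketch is self-contained; inside the package itself you would use <code>require('./index')</code> instead:</p>

<pre><code data-language="javascript">// Hypothetical test.js -- inside the package, replace the inline
// function below with: var isNullOrEmpty = require('./index');
var assert = require('assert');

function isNullOrEmpty(input) {
    // Returns true if the input is either undefined, null, or empty
    return (input === undefined || input === null || input === '');
}

assert.strictEqual(isNullOrEmpty(''), true);
assert.strictEqual(isNullOrEmpty(null), true);
assert.strictEqual(isNullOrEmpty(undefined), true);
assert.strictEqual(isNullOrEmpty('Hello World'), false);

console.log('All assertions passed!');</code></pre>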

<h3 id="documentation">Documentation</h3>

<p>This one's actually easier than you might think. Simply create a <code>README.md</code> file within your project with the following <a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet">Markdown-styled</a> content for basic Node.js package documentation:</p>

# is-null-or-empty>
<pre><code data-language="markdown"># is-null-or-empty

A Node.js package that checks whether a given string is null or empty. A basic package for an npm publish tutorial.

## Usage

First, install the package using npm:

    npm install is-null-or-empty --save

Then, require the package and use it like so:

    var isNullOrEmpty = require('is-null-or-empty');

    console.log(isNullOrEmpty("")); // true
    console.log(isNullOrEmpty(null)); // true
    console.log(isNullOrEmpty(undefined)); // true

    console.log(isNullOrEmpty("Hello World")); // false

## License

Apache 2.0</code></pre>

<p>The <code>README.md</code> is crucial -- almost no one will bother sifting through your package's code to figure out how to use it, especially if it's a large and complicated package.</p>

<p>Commit everything into your git repo and push it up to GitHub:</p>

<pre><code data-language="shell">git add index.js package.json example.js README.md
git commit -m "Initial commit"
git push origin master</code></pre>

<h3 id="publishthepackage">Publish the Package!</h3>

<p>And now, the moment you've been waiting for -- publishing your package for the whole world to see!</p>

<p>If this is indeed your first package, you'll need to register on npm by running <code>npm adduser</code>. If you're already registered on npm, use <code>npm login</code> instead.</p>

<p>Here comes the big one:</p>

<pre><code data-language="shell">npm publish</code></pre>

<p>This one might take a few seconds, but once it's done -- so are you!</p>

<p>Congratulations! You just published your first npm package like a boss! Go ahead and run <code>npm install is-null-or-empty</code> and attempt to use your package in another project! =)</p>]]></content:encoded></item></channel></rss>