Author: Michael

Set up a network share via Samba on your Raspberry Pi

Over this past weekend, I finally set up a network share via Samba on my Raspberry Pi with an old external USB hard drive I had lying around. My RetroPie installation already serves up a Samba share – so my goal was to throw an additional folder in there that mounts to an external drive. After a bit of trial and error, here’s how I pulled it off.

Step one was to format my drive to the ext4 filesystem. I read varying opinions on which filesystem is recommended for this procedure, and ext4 seemed to be a good choice in the end. While there are ways to format your drive directly via the CLI – I decided to use a trial of ExtFS for Mac and it was very easy.

Next, I plugged my external drive into the RPi and connected over SSH. Once you’re connected, run the following command:

sudo fdisk -l

Now, look towards the bottom and assuming this is the only additional drive you have plugged in, you should see something like this:

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1  *        2 975400959 975400958 465.1G 83 Linux

Take note of the device name (here, /dev/sda1). This is the name of the partition on our external drive.

Next we’re going to create a directory to mount our drive into, and also another directory within that one. The reason for the second is that I want to avoid seeing the lost+found folder on the ext4 partition we created. You can change the directory names below to whatever you’d like.

sudo mkdir /media/USBHDD
sudo mkdir /media/USBHDD/share

After that, we want to ensure that we have the proper access to write to the directory.

sudo chmod -R 777 /media/USBHDD/share

Next, we want to mount our external drive into that new directory.

sudo mount -t auto /dev/sda1 /media/USBHDD

Now we’ll need to update our Samba config. If you’re already running RetroPie, you’ve already got Samba installed. If not, you may need to run the following command.

sudo apt-get install samba samba-common-bin

Before you edit your Samba config, make a quick backup copy of the current file.

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.old

Now, jump into the config file.

sudo nano /etc/samba/smb.conf

We’re going to want to jump straight to the bottom of this file – so if you’re on a Mac just hit fn and the down arrow a few times.

Once you get to the bottom, you should see a list of familiar folders that RetroPie already shares (roms, bios, configs, and splashscreens). Create another section just below the last that looks like this:

[share]
comment = Share
path = "/media/USBHDD/share"
writeable = yes
guest ok = yes
create mask = 0660
directory mask = 0771
force user = pi

Feel free to customize the share name and comment, and make sure your path matches the one you created earlier.

And finally, you’ll want to restart your Samba daemons.

sudo /etc/init.d/samba restart

At this point you should be able to read and write to your Samba share via Finder by clicking on retropie under the Shared heading and then accessing your new folder called share.

The final step we’ll want to do is edit our fstab configuration so that our drive will properly mount whenever our Raspberry Pi reboots.

sudo nano /etc/fstab

Add the following line to the bottom of the config file (making sure to match the values you’ve used previously).

/dev/sda1 /media/USBHDD auto noatime 0 0

And now we’re done. Enjoy your new network share drive. Personally I’ve hooked mine into every device on my network that can run Kodi for a personal media library accessible throughout the home.

Using Reduce – Going beyond addition

Today I had the opportunity to do a lightning talk of sorts on a development topic of my choosing. I decided to cover Javascript’s reduce method – as I know many who only know reduce as a way to add, subtract, multiply, or divide an array. To make it a bit more interesting, I tied it off with a sample of using NASA’s API to calculate how many Near Earth Objects missed our planet today. Here’s a dump of my slides, and some notes for each.

To follow along with this section of slides, you can point your browser to

We can call this function whatever we would like, and we can name its parameters whatever we would like also. But, to stick to the original example and help show what is going on, I’m calling the parameters accumulator and currentValue again.

So, every time this callback is run, we are going to return the value of the accumulator (essentially the running total) plus the value of the current item in our array.

Now that we have our callback function built, reducing that array of numbers is as simple as calling the reduce method on it, and passing in our callback. In this example, since we haven’t passed in an optional initial value, our addNumbers function will only be run four times – one less than the total length of our array.
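Putting that together as runnable code (the array [1, 2, 3, 4, 5] is inferred from the running totals walked through below):

```javascript
// addNumbers is our callback: it receives the running total and the current item.
function addNumbers(accumulator, currentValue) {
  return accumulator + currentValue;
}

const numbers = [1, 2, 3, 4, 5];
const total = numbers.reduce(addNumbers);
console.log(total); // 15
```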

Why is that?

Well, the first time the function is run, it sets the accumulator to the first value in our array. Since we didn’t provide an initial value, the reduce method on our very first run sets the current running total (or, the accumulator) to the first item, and then runs the function with the second number as our current value.

So here, we return 3 as our new running total – or our current accumulator.

Now we are on run two out of four. We’re taking the third item in our array and adding it to our current running total again. After this run, we’ll be passing 6 back as the current running total.

Now we are on run three out of four. We’re taking the fourth item in our array and adding it to our current running total again. After this run, we’ll be passing 10 back as the current running total.

Now we are on run four out of four. We’re taking the fifth item in our array and adding it to our current running total again. After this run, we’ll be passing 15 back as the current running total.

Now we have gone through every number in the array and successfully reduced it down.

What would this look like if we passed a starting value as the second argument into our reduce call? Let’s hop back over to our REPL and see the difference.

To follow along, head over to

Here, we actually run the .reduce() method for the full length of our array. On the first run, our accumulator (or, running total) is set as the starting value – and this time the number 1 begins as the first currentValue instead.
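As a sketch of the same reduction with a starting value (the 10 here is just an illustrative choice, not from the original slides):

```javascript
const numbers = [1, 2, 3, 4, 5];
// With a starting value, the callback runs once for every element – five times here.
const total = numbers.reduce((accumulator, currentValue) => accumulator + currentValue, 10);
console.log(total); // 25
```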

How can we use reduce to accomplish more complex tasks? The trick is the use of the starting value argument.

You can follow along with this one at

So here we have an array filled with students’ votes for their favorite colors. Without manually going through each item and tallying the votes, how can we use reduce to simplify this task?

Here on run #1, we can see the accumulator is an empty object, because that’s what we passed in as our startingValue.

The currentValue is the first item in the array, so for our first run, it will be the color red.

Inside of our combineFavorites function, we can see the if/else statement checking whether or not a key of ‘red’ exists on the accumulator object. Since the object is empty at this point, it definitely does not exist, so it will result in undefined. Therefore, the if part of the if/else statement will run, and we will create a key of ‘red’ in our empty object and set the value to 1.

Now comes the next important part – just like in our example with addition, we return the accumulator, or the running total. But this time, we now have returned an object that no longer is empty.

Here on run #2, we can see the accumulator is no longer an empty object.

Our currentValue is now orange, so our if/else check will also result in undefined – resulting in the ‘orange’ key being added to the object with a value of 1.

Here on run #3, we can see our if/else statement no longer returns undefined, but rather the value of the key ‘red’ on the accumulator.

So, we increment the value by one, and then continue on…

With this reduce function, we were able to create an object with the tally of votes. Now, this data is much easier to use than it was previously as an array. We reduced the array into a more usable piece of data.
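A runnable sketch of that tally – the vote array here is hypothetical, but the order (red, then orange, then red again) matches the runs described above:

```javascript
const votes = ['red', 'orange', 'red', 'blue', 'red'];

function combineFavorites(accumulator, currentValue) {
  if (accumulator[currentValue] === undefined) {
    // First vote for this color: create the key with a value of 1.
    accumulator[currentValue] = 1;
  } else {
    // Color already tallied: increment it.
    accumulator[currentValue] += 1;
  }
  // Always return the accumulator so the next run receives the running tally.
  return accumulator;
}

const tally = votes.reduce(combineFavorites, {});
console.log(tally); // { red: 3, orange: 1, blue: 1 }
```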

How can I use the .reduce() method in web development?

There are many ways, but one important example is when connecting to APIs. APIs will often give you much more data than you may really need. For instance, this is a call to NASA’s API to get information about current Near Earth Objects, or NEOs.

What if we wanted to build a web application that utilized only a small bit of this data? Reduce to the rescue!

Head over to to see this in action.

Build an Image Search Microservice with the Imgur API

It’s been a while since I’ve had time to touch Free Code Camp backend projects, but during my week off I decided to build out the next API project I had yet to attempt, the Image Search Abstraction Layer microservice. I love building and working with APIs, so this was a fun project to build. Let’s get started!

First, let’s initialize our git repository and jump into it.
git init image-search-microservice
cd image-search-microservice

Now, we’ll want to set up our project’s package.json. If you haven’t tested out Yarn yet, I highly recommend giving it a try.
yarn init

Speaking of which, make sure you’ve updated your copy of Node for this tutorial, as I’ll be using some ES6.

Whether you’re using npm or yarn, you can skip through most of the default settings. Personally though, I’m changing my entry point to app.js.

Next, let’s set up our .gitignore to ignore the node_modules folder and our .env file, which we’ll introduce a little later. Drop these two lines in your terminal to do this quickly.
echo node_modules >> .gitignore
echo .env >> .gitignore

Now let’s install the node modules we’ll need for this project. The list is pretty short for this one:
yarn add express mongoose request
yarn add dotenv --dev

Go ahead and create your app.js and set up the basic scaffolding for an express app.

const express = require('express');

const app = express();

const port = process.env.PORT || 3000;
const server = app.listen(port, function() {
    console.log(`Server listening on port ${port}`);
});

At this point, you should be able to run nodemon (assuming you have it installed globally) in your terminal and see your server running on port 3000. If not, go ahead and compare your project files.

I’m all about writing modular code, so we’re going to set up some folders and files like so:

First we’ll set up the routing. Add const routes = require('./routes/index'); and app.use('/', routes); in our app.js to be used as middleware. Then, set up your routes/index.js like so:

const express = require('express');
const router = express.Router();

router.get('/', (req, res) => {
  // landing page can go here later
});

router.get('/latest', (req, res) => {
  // will return the ten most recent searches
});

router.get('/search/:q', (req, res) => {
  // will query the Imgur API
});

module.exports = router;

Now, we’ve moved the job of routing out of our app.js. The advantage to doing this is that if our app were to grow, we could add additional route files and keep our project modular and clean. I’ve also set up the basic routing our application will need: a root endpoint where we can eventually display a landing page if we so choose, a /latest endpoint where our app will display the ten latest searches, and a /search/:q endpoint where we can pass a string in to search.

Next, head over to the services/imgur.js file and drop this in:

const request = require('request');

exports.getImage = function(search, page = 1) {
  return new Promise((resolve, reject) => {
    // options object and request call will go here
  });
};

We’ll be making a request to the Imgur API through our app, and the request module makes that really simple. Also, we’re going to be calling this function back in our route file, so I’ve set it up as an exported function that takes two parameters. The first is the search term, the second is the pagination option which we can default to ‘1’ if nothing is passed. And finally, I’m a big fan of promises over callbacks, so I’ve set this function to return a promise.

Now, per the documentation, the request module can be invoked with two parameters – an options object and a callback. In our options object, we’ll need to pass in our unique client ID from Imgur. Swing over to if you haven’t done so already and register your application. You can choose ‘Anonymous usage without user authorization’ when prompted.

Now that we’ve got our code, we can build out an options object within our promise statement like so:

let options = {
  url: `${page}?q=${search}`,
  headers: { Authorization: 'Client-ID kfbr392kfbr392' },
  json: true,
};

In here, we’re setting up the URL that we’ll be connecting to Imgur with, using ES6 template strings to cleanly drop in the page and search parameters from our getImage function. Next, we set the headers as requested by Imgur to allow proper authorization of our API call (but make sure to replace my Client-ID with your own). And finally, we’ve specified that we want our response to be in JSON format.

Next, we’ll build out our callback. First, we can build a function called getPics that will take three parameters: error, response, and body. I’ll then make a quick error check inside the body of the function that ensures no errors occurred and the response received from Imgur has a status code of 200. Then, we want to take the body that was returned, and filter out items in the array that are albums. We’ll do this to best mirror the demo app shared by Free Code Camp, as we’ll want to provide a response with both a direct image link and a link to the image’s context – and unfortunately this wouldn’t work well if we included albums.

Then, after we’ve filtered out albums, we then want to map over the response and cut out all the extra information we’re not using. All we want to return is a url, a snippet, and the context. Finally, we’re going to resolve our promise with our newly transformed data, jump out of the callback and then setup our request function with its two newly created parameters. In the end, it will look like this:

function getPics(err, response, body) {
  if (!err && response.statusCode == 200) {
    body =
      .filter(image => !image.is_album)
      .map(image => {
        return {
          url:, // direct image link
          snippet: image.title,
          context: `${}`
        };
      });
    resolve(body);
  }
}

request(options, getPics);

We’re almost ready to test it out, but first we have to jump back into our routes file, require our new imgur service file with const imgur = require('../services/imgur');, and call it in our /search/:q endpoint.

Here, we’ll pass the ‘q’ parameter as the search string of the function, and then we’ll let our second argument, req.query.offset, handle the optional ?offset=42 flag. Then, once that promise resolves, we’re going to output the response to our browser as JSON.

imgur.getImage(req.params.q, req.query.offset).then(ans => {
  res.json(ans);
});

Now fire up nodemon and give it a test. If all is working so far, you should be able to search Imgur by pointing your browser to localhost:3000/search/puppies. If not, do some debugging or a quick code comparison at this point.

Now that this piece is working, all that we have left to add is the history component. I personally enjoy working with mLab in development just as much as in production, so I’m going to proceed with this tutorial as such. If you’d prefer to run MongoDB on your local machine prior to deployment, or you want to learn more about how to get set up on mLab – check out my previous tutorial on building a URL shortener for more information.

The first thing we’re going to want to do is set up our environment variables. This way, we’ll be able to connect to our mLab database in either production or development, without worrying about sharing our credentials on GitHub. In your app.js file, drop in the following at the very top:

if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config();
}

Here, we’re setting up our server to require the dotenv package if we’re not running in a production environment. Later when we push this up to Heroku, their platform will automatically set an environment variable of NODE_ENV as ‘production’, but on our development machines – we won’t worry about that.

Next you’ll need to create a .env file in the root of your project. This file will then assign your environment variables every time you spin up your server. In that, drop in the following as it relates to the mLab database you set up:
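The exact contents will depend on your database, but based on the connection string used in config/db.js, the file needs at least these two variables (placeholder values shown):

```
DB_HOST=<your-mlab-host-and-port>
DB_NAME=<your-database-name>
```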

Now that we have our environment variables set up, we can dive into setting up our connection to the database. Back in your app.js, you’ll want to require your config/db.js file with const db = require('./config/db');

Then, jump into your config/db.js file and drop in the following:

const connection = `mongodb://${process.env.DB_HOST}/${process.env.DB_NAME}`;
const mongoose = require('mongoose');
mongoose.Promise = global.Promise;

exports.db = mongoose.connect(connection);

Here, we are setting up a connection to our mongo database and then using mongoose as an ORM for interacting with it. Also, I’ll be using some promises, and mongoose’s built-in promise library was recently deprecated, so I’ve set the native promise library to be used instead. And finally, we export the connection for use elsewhere in our application.

Next, in our models/history.js file, we’ll set up a schema for our database. First, we require an instance of mongoose, then define the schema itself. This way, when we pass data into the model later in our routes, we’ll be able to let MongoDB know what we want each new document to look like – specifically, in this case, to always attach a timestamp to each new entry. After that, we export our model for use.

const mongoose = require('mongoose');

const historySchema = new mongoose.Schema({
  term: String,
  when: { type: Date, default: }
});

const History = mongoose.model('History', historySchema);
module.exports = History;

At this point, our database is setup and we’re ready to start building out the queries we’ll need.

First, let’s make sure to require our new model in our router with const History = require('../models/history');

Now, we want to add a new entry into our database every time a search is made. Right before we send back a response with the results, drop in the following to add that search query into the database: new History({ term: req.params.q }).save();

And finally, we need our /latest route to return the most recent 10 entries. For that, we’re going to query the database like so:

History.find({}, 'term when -_id').sort('-when').limit(10).then(results => {
  res.json(results);
});

Here, we first pass in an empty object, which will return all documents. Then, we specify that in those documents, we only are interested in the search term and the date, and we’d like to specifically exclude the unique _id field from the results. Afterwards, we instruct the query to sort the results in descending order and to only return the ten most recent documents. Finally, we pass those results through a promise and return them as JSON.

Go ahead and run your app with Nodemon and verify that everything is working. If so, you’re ready for deployment. Since we’re using ES6, you’ll need to define your node engine in your package.json, otherwise Heroku will, at the time of this writing, default to version 5.1.1 which won’t support ES6 unless you use strict mode. Also, you’ll need to make sure to add a start script to your package.json. To see what those look like, check out the file on my GitHub. And finally, make sure to drop your environment variables into your Heroku app, otherwise you definitely won’t be able to connect to your database.

That’s all there is to it. Here’s my final code for the project. Personally I went back and threw in pug as a template engine and a favicon server. The final deployed version can be seen here. Feel free to leave any questions or comments below.

PGP Encryption

A few weeks ago I gave a lightning talk on a piece of technology that I personally am fascinated with, PGP encryption. To anyone who is new to this type of encryption, or to cryptography as a whole, here are my slides and talking points.


So what is cryptography? Cryptography is the science of using mathematics to encrypt and decrypt data. This is especially useful when that data needs to be transmitted over insecure networks, such as the internet.

In order to encrypt data, you must first apply a cryptographic algorithm (mathematical function) to it, also known as a cipher. This cipher typically will have one input (a key) which is used in both the encryption and decryption process.

In the end, the security of your data is entirely dependent on two things: the strength of your cipher and the secrecy of your key.


In what is referred to as “conventional cryptography” one key is used for both the encryption and decryption process. This type of cryptography is also known as secret-key or symmetric-key encryption. In Hollywood, you’ve probably seen a spy movie or two where the secret agent has a briefcase locked to their wrists. That briefcase likely contains a secret key that the agent will die trying to protect. Again, the strength of this encryption almost entirely depends on the ability for the agent to keep that briefcase secured from his/her enemies. That is why this type of encryption is losing popularity in modern times – especially if you consider that Julius Caesar himself used it over 2,000 years ago.


If you’re a software engineer, you’ve probably encountered a Caesar’s Cipher coding challenge before. For those that haven’t, let me explain the concept. When Caesar wanted to send messages to his generals, he didn’t have the luxury of WhatsApp and their built-in Curve25519 encryption. No, he had to write his messages down and ask a messenger to deliver them.

Unfortunately, power such as his earned him an equal number of enemies. As such, Caesar devised a plan so that his messages would mean nothing if they fell into the wrong hands. That plan was to shift all letters in the alphabet up by three letters, better known as “shift by 3”. This way, only those who knew the “shift by 3” rule could decode his messages.
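As a quick sketch, a “shift by 3” encoder for uppercase messages might look like this (a minimal illustration, not era-accurate Latin):

```javascript
// Shift each letter A–Z forward by three, wrapping Z around to C.
function caesarEncode(message, shift = 3) {
  return message.replace(/[A-Z]/g, ch =>
    String.fromCharCode(((ch.charCodeAt(0) - 65 + shift) % 26) + 65)
  );
}

console.log(caesarEncode('ATTACK AT DAWN')); // DWWDFN DW GDZQ
```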


How can modern technology solve the inherent security holes of conventional cryptography? One way is with public key cryptography. This type of cryptography uses a pair of keys for encryption. One key is public and is in charge of the encryption process. The other is a private key which corresponds to the public key and handles the decryption process. In this form of encryption, one publishes their public key for anyone and everyone to see. But, any message encrypted with that public key can only be decrypted by the holder of the corresponding private key. This enables secure communication between parties without any need to previously disclose a secret key with one another.

A popular example of this encryption technique that you may have used without even realizing it is RSA.


And this finally brings us to PGP encryption. PGP, short for Pretty Good Privacy, combines some of the best features of both conventional and public key cryptography – it is lovingly known as a hybrid cryptosystem.

Just like in public key cryptography, users share their public keys and keep their private keys private. But, PGP has a few advantages over traditional public key encryption.

First, the data is compressed. Compression will not only save on disk space and reduce transfer times, but can also strengthen the encryption. When a hacker tries to crack an encrypted message, they’ll often utilize exploits that search for patterns in the raw data. Compression makes these patterns even harder for a computer to identify.

Next, PGP will create a session key, which is effectively a temporary secret key that will only be used once. This session key is created from the movements of the encryptor’s mouse and their keystrokes. At this point, the data is encrypted with that secret key using conventional techniques – benefitting from a fast and secure algorithm.

Once the data has been encrypted, the session key is then also encrypted to the recipient’s public key. The data and the session key are then wrapped up together and transmitted to the recipient for decoding, which essentially reverses this process.

If you want to give PGP encryption a try, check out these easy-to-use online encryption and decryption tools. But once you’re ready to actively use PGP encryption for your communication needs, you should not use an online service, as you don’t have full control and ownership of your session keys and never really know what data is being transmitted behind the scenes. For those who want to dive deeper into secure PGP technologies, check out GPGTools.


Want to learn more about encryption? Khan Academy has a great introductory series called Journey into Cryptography. There, you’ll learn about a wide variety of encryption techniques and technologies, including the one-time pad. I personally became interested in one-time pad encryption after hearing a fascinating story about encryption techniques used during World War II. To hear more about that story, check out the Vox Ex Machina episode from one of my personal favorite podcasts, 99% Invisible.

Promises – What are they and how do I use them?

While Javascript is single threaded, we as humans are not. We have the ability to process multiple requests at the same time. This enables us to efficiently go about our days without letting minor obstacles stop our progress. If we see a pothole in the road, we shift to the side to avoid it. So how can we take that same aspect and apply it to our projects? Therein lies the beauty of asynchronous code.

In the past, Javascript developers had to rely primarily on callbacks or additional libraries to ease the pain of “callback hell” when making external API requests. And, while callbacks may still have a place in your codebase when making local requests or awaiting a function to finish executing – wouldn’t it be great if we had a more structured and reliable way of retrieving and processing data asynchronously from an external source? We do, and they’re called promises.

To keep things neat, I’ll show some examples of how to utilize promises using the jQuery.ajax() method. In recent versions of jQuery, the objects returned by this method have an implementation of the promise interface and will make these methods a bit easier to show off. If desired though, a native asynchronous request could be wrapped in a new Promise() function with similar results.

Let’s dive right in and make an AJAX GET request. For illustration purposes, I’ll use the GitHub API results for my account.

let alertMessage;
let getGithub = $.ajax({
  method: 'GET',
  url: '',
  success: function (response) {
    alertMessage = response.location;
  }
});

console.log('Location: ' + alertMessage);

Now, we’ve already made our first asynchronous request and we have our getGithub variable referencing that response. But, before we made that call, we created a variable called alertMessage, and then after we made the request, we asked for the console to log what should be my current location of ‘Austin, TX’. So is that what’s going to log to the console? Not without a promise it won’t.

Once again, our AJAX call is an asynchronous request. So although the console.log function is called below it in our code, that function isn’t going to wait to execute while our app is making a request to the GitHub API. So it will run before our alertMessage variable has been defined, and we’ll see Location: undefined logged to the console.

So how can we wait until our location has been passed into the alertMessage variable before we call an alert? With a promise. Our getGithub variable has a promise method attached to it called .then(). This method is going to accept two arguments, one callback function which will resolve on a successful call, and an (optional) one for a rejected call. The beauty here is that whether your call succeeds or fails – we’re waiting for the response before we continue.

Speaking of calls, a phone call is a great way to visualize this. Imagine a phone call between yourself and a friend, Roger, where important information is discussed. You want to call another friend, Shirley, but you can’t until you’ve received all of the pertinent information from Roger and you’ve disconnected with him. After this occurs, you are then able to call Shirley – but not a moment sooner. Promises work similarly – they will wait until the data has fully resolved (or failed) before they proceed.

So if we were to use a .then() method on our previous example, it would look like this:

getGithub.then(() => {
  console.log('Location: ' + alertMessage);
});

Now, we would successfully get the console to display the location we were hoping for. If we wanted, we could pass in a second parameter to our .then() in the form of another callback function to catch any errors thrown by our original getGithub promise function.

Where promises really shine though is in their ability to chain promises. This way, you can ensure that asynchronous calls have the needed info before they proceed to the next call, which then will ensure it has the needed info, and so on…

Let’s see an example of this in action:

var githubAPI = '';

function getGithubLocation(githubProfile) {
  return $.ajax({
    method: 'GET',
    url: githubProfile
  });
}

function getCoordinates(location) {
  let coordinatesAPI = `${location.location}&format=JSON&from=1&to=10&indent=false`;
  return $.ajax({
    method: 'GET',
    url: coordinatesAPI,
    dataType: 'jsonp'
  });
}

function getSunrise(coordinates) {
  let sunriseAPI = `${coordinates.result[0].lat}&lng=${coordinates.result[0].lng}`;
  return $.ajax({
    method: 'GET',
    url: sunriseAPI
  });
}

function consoleSunrise(data) {
  // log the sunrise time from the API response
  console.log(data.results.sunrise);
}

getGithubLocation(githubAPI)
  .then(getCoordinates)
  .then(getSunrise)
  .then(consoleSunrise);


In this example, we’re making three asynchronous AJAX requests. First, we’re creating a function to get my GitHub data. Next, we create a function that passes my location of ‘Austin, TX’ to an API which will convert that string to GPS coordinates. After that, we create a function that will take those coordinates, and pass them to an API which will return the sunrise information for that location. And finally, we pass that string into a function that will print the time to the console. Thanks to promise chaining, we’re then able to link all of those functions together in a neatly structured call.

Goodbye Callback Hell.

Time Complexity and Logarithmic Bar Tricks

In the past couple of days, we have dived deep into data structures and time complexity. Learning the intricacies of computer science has been a rewarding challenge – and has resulted in quite a few “aha” moments. My personal favorite thus far has been binary search trees – especially realizing just how quickly a computer can find a value using an algorithm based on logarithmic complexity. But, even more interesting was realizing that a human can pull that off too. Let’s call it a logarithmic bar trick.

  1. Tell someone you can guess a number between 1 and 1,000,000 in twenty or fewer guesses, provided they let you know whether your guess is high or low
  2. Use an algorithm of logarithmic complexity O(log n) to guess
  3. ?????
  4. Profit

How does that work exactly? Every time you guess, you’re breaking down the original sample size by 50%. So your first guess is going to be 500,000. They tell you it’s higher. Your second guess is 750,000. They tell you it’s higher. At this point you’ve only made two guesses and you’ve already eliminated 75% of the possibilities. Keep up with this method, and you’ll have their number in twenty guesses tops.

Let’s see how that would look in Javascript.

var min = 1;
var max = 1000000;
var guess;
var number = Math.floor(Math.random() * max) + min;
var counter = 0;

while (guess !== number) {
  counter += 1;
  guess = Math.floor((min + max) / 2);
  console.log('Guess #' + counter + ':', guess);
  if (number > guess) {
    min = guess + 1;
    console.log('Number is higher');
  } else if (number < guess) {
    max = guess - 1;
    console.log('Number is lower');
  }
}

First, we set our min and max variables. Then, we'll create a variable to hold our guesses called guess. Next, we'll set a number variable to a random integer between our min and max. And lastly, I want to keep track of how many guesses it's going to take to find the number, so let's create a counter variable and set it to 0.

Then, we build a while loop which will continue to run until our guess variable is equal to our number variable.

Inside of our loop, we first increment our counter variable. Next, we set the guess variable equal to the average of the min and the max.

Next, we set up an if statement that's going to check whether our guess variable is higher or lower than the number. If our guess is too high, we're going to set our max variable to one less than the current guess. If it's too low, we'll set the min variable to one more than the current guess. That's all there is to it.

For fun, I went ahead and wrapped this in a function, dropped the function call in a for loop that ran 10,000 times, and pushed the counter results to an object after each run. When only running it once, I rarely would see the algorithm find the number in less than 16 guesses. But, when running it 10,000 times, I was able to catch the program guessing correctly on the first try at least once.
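A sketch of that experiment might look like this, wrapping the guessing loop above in a function that returns the guess count and tallying the results:

```javascript
// Wrap the guessing loop in a function that returns how many guesses it took
function guessingGame(max) {
  var min = 1;
  var guess;
  var number = Math.floor(Math.random() * max) + min;
  var counter = 0;
  while (guess !== number) {
    counter += 1;
    guess = Math.floor((min + max) / 2);
    if (number > guess) {
      min = guess + 1;
    } else if (number < guess) {
      max = guess - 1;
    }
  }
  return counter;
}

// Run it 10,000 times and tally the results in an object keyed by guess count
var results = {};
for (var i = 0; i < 10000; i++) {
  var count = guessingGame(1000000);
  results[count] = (results[count] || 0) + 1;
}
console.log(results); // most runs land at 19-20 guesses; 20 is the ceiling
```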

Want to up the ante? Tell your friend you can guess a number between 1 and 1,000,000,000 in thirty or fewer guesses. Thirty? Yup. Just take your maximum number and continually divide by 2 until you hit one or less. The number of times it takes you to get there is equal to the maximum number of guesses needed.

1,000,000,000 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 / 2 = 0.93
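That repeated halving is just a base-2 logarithm, so the maximum number of guesses can be computed directly:

```javascript
// The number of halvings needed to get from max down to 1 is ceil(log2(max))
function maxGuesses(max) {
  return Math.ceil(Math.log2(max));
}

console.log(maxGuesses(1000000));    // 20
console.log(maxGuesses(1000000000)); // 30
```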


Moving the Needle

While my experience with Free Code Camp, Treehouse, and countless other virtual learning platforms has provided me with a great foundation in web development, I decided that I wanted to stop permitting this passion to thrive only in my spare time. Thus, I have chosen a path that allows me to live, eat, and breathe software engineering all day, every day, for the rest of the year. I have recently joined cohort #47 at Hack Reactor ATX (formerly MakerSquare). Currently, I am only a few hours into day three and here’s what we’ve already covered:

  • Scopes
  • Closures
  • “This”
  • Value vs. Reference
  • Debugging
  • Data Structures (Stack & Queue)
  • Prototype Chains
  • Complexity Analysis

Some pieces have already been a real challenge, but the opportunities to learn from those bumps in the road have been tremendous.

Building a URL Shortener with MongoDB, Express, and Node.js

Personally, it wasn’t until I had done a few laps around Node.js that I finally felt like I had gotten my head around what it was capable of. So when I started the URL Shortener Microservice project, I should have expected a similar struggle for my first MongoDB application. But I didn’t.

When I first tested the waters of learnyoumongo, I immediately felt I was in over my head. I had very little prior experience working with databases, so I took a step back and began to research the fundamentals a bit further. Once I felt comfortable, I took a few SQL courses to familiarize myself with relational databases. Next, I dived back into MongoDB with some great tutorials via Treehouse and Code School. At that point, I felt comfortable with the basics of querying and inserting with MongoDB, but incorporating the M into the MEAN stack was my next challenge. After a failed attempt to translate coligo’s tutorial into an API, I started from scratch.

That was the best decision I had made yet. Tackling this project step-by-step reminded me that this was the best way to learn, one small step at a time. And now that I’ve finished this project, I’m going to walk through the build process one more time. I hope this tutorial is helpful, and if anyone has any questions, comments, or critiques, please feel free to share them.

First, go ahead and initialize a new git repo (I’m going to call mine url-shortener-microservice), jump into it, and set up a directory to test out our database –
git init url-shortener-microservice
cd url-shortener-microservice

Next, set up your project’s package.json via npm init and fill in any information you find necessary. I’m just going to roll through and leave all the options as their default.
npm init

Now, let’s use express-generator to scaffold our app quickly. If you get a warning about the destination not being empty, just type y to continue. If you haven’t used express-generator before, you’ll need to first install it globally.
express
npm install

Also, let’s create a .gitignore file to stop tracking the node_modules folder
echo node_modules > .gitignore

And for the final piece of the setup, let’s install all of the additional npm packages we’re going to need for this project
npm install mongodb --save
npm install shortid --save
npm install valid-url --save

Go ahead and run your app and connect to localhost:3000 in your browser. You should see “Welcome to Express”. If you don’t have nodemon installed, you’ll want to install that globally too.
nodemon app


Now, start up your MongoDB daemon in a second tab on Terminal:
mongod

Let’s set up our database and collection. Open a third tab on Terminal and run mongo. Then, we’ll create a database called url-shortener-microservice.
use url-shortener-microservice

Okay this time, we’re actually done with the setup.

Open up your routes/index.js file. I’m not going to touch on building out your / route, that’ll be up to you. So, let’s do some importing of modules. First and foremost, you’re going to need to import MongoDB. Then, import the shortid module. This will help us generate unique links for each url we pass through. And finally, import the valid-url module. As the name implies, we’re going to use that to verify that our urls are formatted properly.

var mongodb = require('mongodb');
var shortid = require('shortid');
var validUrl = require('valid-url');

Alright let’s finally get to the code here. First, we’re going to work on the creation of new links. Create a new GET route that looks like this:

router.get('/new/:url(*)', function (req, res, next) {
  // route logic will go here
});

The (*) piece in our :url(*) parameter will allow us to pass in properly formatted links. Without it, Express will get confused with the forward slashes in URLs and think they’re additional parts of the route. You can also use regular expressions to accomplish this.

Let’s pause here and make sure everything is still working properly. Drop a console.log into your new route that utilizes the request parameter and test it out. My code looks like this so far:

router.get('/new/:url(*)', function (req, res, next) {
  console.log(req.params.url);
  res.send(req.params.url);
});

And when I try to access http://localhost:3000/new/ followed by a link in my browser, I see that link echoed back in the browser.

Alright. So far so good.

Now, let’s get our connection to our local MongoDB database up and running. We’ll create a variable underneath where we imported our MongoDB module at the top in order to store our connection information. I’m going to call my variable mLab, as the database will eventually need to move to the cloud and I’m going to use mLab’s free offering to host it. Also, create a variable called MongoClient to host MongoDB’s connect command.
var mLab = "mongodb://localhost:27017/url-shortener-microservice";
var MongoClient = mongodb.MongoClient;

Then, back in our recently created route, replace the console.log we created with the following:

MongoClient.connect(mLab, function (err, db) {
  if (err) {
    console.log("Unable to connect to server", err);
  } else {
    console.log("Connected to server");
    // the rest of our route logic will go here
  }
});

Now, whenever this route is accessed, MongoDB will connect to our local database and print a message to the console.

Alright, jump below the successful connection console.log in your else statement and create two more variables. The first will set up our collection and make it a bit easier to access, and the second will be set to our url parameter.

var collection = db.collection('links');
var params = req.params.url;

Now, we’re going to create the function that imports a link to the database and returns a short link. We’ll call it newLink and it will accept a callback that will close the database connection once it’s run.

var newLink = function (db, callback) {
  // link creation logic will go here
};

newLink(db, function () {
  db.close();
});

Okay it’s been a while since we’ve run any tests, so let’s see if we’re on track so far by importing some documents into our links collection. Inside of our newLink function, insert the following:

var insertLink = { url: params, short: "test" };
collection.insert(insertLink);
res.send(params);

This will create a new object with our passed-through parameter set to the url key, and “test” set to the short key. Then, it will push that object into a document in our database. And finally, we’re going to send our URL parameter to output again, just like in our last test.

Now, fire up your browser and point it to http://localhost:3000/new/

Once you see output onto the page, open up your mongo tab in Terminal and type db.links.find(). You should see something like this:

{
  "_id": ObjectId("572a780bcf012a51ee123b3b"),
  "url": "",
  "short": "test"
}
Fetched 1 record(s) in 7ms

Tip – To make working in MongoDB a bit easier in Terminal, do a global install of mongo-hacker

Alright, let’s keep moving. We’re going to want to do three things when a URL is passed through as a parameter:

  1. Check if the URL is valid
  2. If it is, assign a random set of characters to it
  3. Pass the URL and the random characters into our collection

So first, create an if/else statement utilizing our valid-url module in the newLink function. Replace the three lines we dropped in there during our last test.

if (validUrl.isUri(params)) {
  // if URL is valid, do this
} else {
  // if URL is invalid, do this
}

If a URL is valid, generate a short code. Then, create a new object. Insert that object into the collection as a new document like we did in our test. And finally, let’s push some JSON to our browser.

var shortCode = shortid.generate();
var newUrl = { url: params, short: shortCode };
collection.insert(newUrl);
res.json({ original_url: params, short_url: "localhost:3000/" + shortCode });

If the URL isn’t valid, make sure to output an error.
res.json({ error: "Wrong url format, make sure you have a valid protocol and real site." });

Currently, this is what our index.js file should look like. Hopefully yours looks the same and you’re successfully pushing new links into your database. We’re halfway through the meat of this project!

Now, let’s look into redirection. We’re going to set up another route that once again connects to our database, runs a function, and closes the database once that function has run. The bulk of it will look similar to our last route:

router.get('/:short', function (req, res, next) {

  MongoClient.connect(mLab, function (err, db) {
    if (err) {
      console.log("Unable to connect to server", err);
    } else {
      console.log("Connected to server");

      var collection = db.collection('links');
      var params = req.params.short;

      var findLink = function (db, callback) {
        // lookup logic will go here
      };

      findLink(db, function () {
        db.close();
      });
    }
  });
});


Now that we have that set up, we’re going to want to take the parameter that has been passed through and find it in our collection. I’m going to use the .findOne query since the short codes are unique values; we don’t want to waste resources looking for additional matches. We’re also going to limit the query to only return the url field, as all other fields are unnecessary for our needs. Our query is going to look like this:
collection.findOne({ "short": params }, { url: 1, _id: 0 })

After the query is run, we’re going to pass in a function. If a document is found, the function will return it. Once it’s returned, we’re going to use a res.redirect() to redirect the browser to the value of the returned key/value pair. If the document is not found, we’ll output another JSON error.

collection.findOne({ "short": params }, { url: 1, _id: 0 }, function (err, doc) {
  if (doc != null) {
    res.redirect(doc.url);
  } else {
    res.json({ error: "No corresponding shortlink found in the database." });
  }
});

And that’s it. We now have a functioning URL shortener! Now let’s make some tweaks to improve it. In our first route, let’s build an if/else statement that queries our database before we drop a link into it, to check if that link already exists so that we can keep the size of our database down. It will look similar to the query we just built, except this time we’re looking for the url and only returning the short code. This will run at the top of the newLink function, followed by an else statement that runs the original code to create a new link if the query is unsuccessful.

collection.findOne({ "url": params }, { short: 1, _id: 0 }, function (err, doc) {
  if (doc != null) {
    res.json({ original_url: params, short_url: "localhost:3000/" + doc.short });
  } else {
    // otherwise, run the original link-creation code from above
  }
});

At this point, our index.js file should look like this. We’ve got a few more minor tweaks to do, and then we can move our database to mLab. First, I think having underscores and dashes as options in our short codes is a bit confusing. Underneath where we imported our shortid module, let’s set a new list of characters that replaces _ and - with $ and @.
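Assuming shortid’s characters() setter (it expects an alphabet of exactly 64 unique characters), the swap might look something like this; the actual call is left commented out here since it needs the module loaded:

```javascript
// A 64-character alphabet with _ and - swapped for $ and @
var alphabet = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ$@';
// shortid.characters(alphabet); // call once at startup, after require('shortid')
console.log(alphabet.length); // 64
```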

Next, let’s make our JSON output a bit more dynamic so we don’t have to update the string when we push it to deployment. In the /new/:url(*) route, add var local = req.get('host'); above the newLink function. Then, replace the localhost:3000 string in both JSON responses with your new local variable.

Alright, let’s move our database onto a remote server. I’m using mLab, but if you’re more comfortable with an alternative, go right ahead and use that. In mLab, hit the “Create new” button, choose “Single-node”, and then select “Sandbox”. For consistency, let’s use “url-short” as our database name. Once you’ve created your database, it’s going to ask you to create a user in order to access your database. For this example, I’ll use “user” and “pass” – but you’re probably going to want to use something a bit more secure in your app. Alright, now grab the link that’s presented and replace your mLab variable with it. Make sure to drop in your username and password in the corresponding placeholder fields.
var mLab = "mongodb://";

Now, back to the browser to see if it’s working. I’m going to attempt to access http://localhost:3000/new/ again. I’m seeing a JSON response, so that’s a good sign. Now let’s hit the “Collections” tab in mLab, refresh the page, and see if our collection is there.

Yup, we have a links collection with one document in it. Success!

Before we push this to Heroku, you’re going to want to exclude your database username/password information from your Github repo. Create a file called config.js in your root directory, and paste the following into it:

var config = {};

config.db = {}; = ''; = 'url-shortener-microservice';

module.exports = config;

Then, in your index.js file, drop the following in between your mongodb and MongoClient variables:

var config = require('../config');
var mLab = 'mongodb://' + + '/' +;

Now, in your .gitignore file, add config.js. Push your code to GitHub, and then create a new branch called heroku and remove config.js from your .gitignore file. Push your app to Heroku, grab a beer, and pat yourself on the back.

Here’s the final code.

That’s it. My personal next steps for this project are to DRY it out a bit, create an option for custom short codes, and then dive into a dashboard with some analytics. Again, if you have any questions, comments, or critiques, please feel free to share them.

Node.js Resources for Beginners

My main focus recently has been on wrapping my head around Node.js. My first introduction was via a Treehouse course a few months back. I breezed through the course, but finished it not understanding what the hell I just did, nor what type of practical applications Node.js could help me with. I then attempted learnyounode on Free Code Camp and think I confused myself even more. So, I went back to Treehouse, took the Node.js courses again, and felt a bit more comfortable. Unfortunately though, that course is a bit outdated. Hopefully Treehouse does an update soon.

Now I’ve once again returned to learnyounode and it’s starting to click, a little. I’ve heard from other Free Code Camp students that are struggling with similar questions, so I’m going to compile a list of resources in this post that can hopefully help others.

First, what is Node.js?

Node.js is V8 (the JavaScript engine running inside Google Chrome) bundled together with a couple of libraries, mainly to do I/O – i.e. writing files and handling network connections.

It’s important to note that Node.js isn’t any special dialect of JavaScript – it is just normal, modern JavaScript, running everywhere instead of just the browser.

Node.js allows developers to use JavaScript everywhere instead of just in browsers – the two big mainstream uses as of writing are web/app servers (Node.js is very well-suited for messaging-like applications like chat servers, for example) and Internet of Things (running inside Arduino-like devices).

Mattias Petter Johansson (this guy makes some great videos on YouTube)

Okay, but why the hell would I use Node.js?

In one sentence: Node.js shines in real-time web applications employing push technology over websockets. What is so revolutionary about that? Well, after over 20 years of stateless-web based on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections, where both the client and server can initiate communication, allowing them to exchange data freely. This is in stark contrast to the typical web response paradigm, where the client always initiates communication. Additionally, it’s all based on the open web stack (HTML, CSS and JS) running over the standard port 80.

Tomislav Capan (read the entire post here)

Still confused? Go here and watch every preview through lesson @9. For me, that was the “aha!” moment.

How can I learn Node.js then?

  1. Work through how-to-npm, learnyounode, and learnyouexpress. Repeat each 2-3 times until you are comfortable
  2. Watch this playlist from thenewboston
  3. Watch this playlist from Derek Banas
  4. Do this tutorial by Chris Sevilleja
  5. Check out more tutorials


Final Update of 2015

Finishing Free Code Camp has become my white rabbit, or is it white whale? For the past few months, the closer I would get to finishing my full stack certification, the more I would see new lessons appear and the overall program structure change. Now, as 2015 comes to a close, Free Code Camp is making their largest program changes to date, with the implementation of two new certifications – data visualization and back end development. Do these two new certifications make my whalerabbit swimhop a bit further away? Sure. But as far as I’m concerned, this change just adds a whole new set of skills to be learned, and I’m eager to get on them.

Right now I’m in the midst of the NPM/Node.js/Express.js/MongoDB section, and I’m not terribly impressed with the content. These command-line tutorials have been a bit hard to follow, which has brought me back to Treehouse a lot recently. As I continue to progress, I’m starting to find new holes in my knowledge that Treehouse has been able to fill quite well, such as their console foundations course. I’m starting to get the hang of Git, but I think I’m going to take another introductory course to really hammer it down.

Most of all, I’m excited for the new API Basejump projects that are coming soon to Free Code Camp. Going to take a little time off from coding for the holidays to fish and spend time with friends and family, so hopefully those are ready for me to dive into come 2016.


Copyright © 2017 Thoughts
