Word Ownership

Today’s Internet is nothing like the one I first experienced as a teenager and college student. I know nostalgia can be strong (and biased), but I have memories of so many friends and acquaintances running their own small websites and forums to share and talk about hobbies, interests, and ideas.

Then came the rise of publishing platforms and social media. Over time, most voices shifted to sharing information via these new lower-friction, higher-audience venues. As a result, many of us have chosen to share our ideas within the walled gardens of giant corporations. And, while we still own our words there, they are hard to edit, curate, organize, and share.

More recently, I have been thinking about this due to the increasing turmoil of modern social media platforms, whether from drastic changes under new owners or business pivots in the continual chase for attention and advertising revenue. The most profound reminder came from a re-post by Scott Hanselman about the power of owning your own words.

I don’t want ideas and topics that I write about to be stuck (and buried) inside the data center of a company that could change its rules at any time. This doesn’t mean I will stop using social media, but I’m going to focus more time and energy writing for my own space on the Internet again.

This blog has been mostly reserved for the occasional post about how I came to understand and solve some software development problems I have had in the past. I hope to continue those posts, in addition to more general thoughts and experiences around my technical career.

Use the HTML <details> tag to hide/display content

Documentation is an important part of software development, but sometimes large blocks of code or procedures with many lengthy steps can overtake the main points being made to the reader.

In these situations, try using the HTML <details> tag to allow content to be hidden initially and expanded if the reader wants to explore it in depth.

The <details> tag provides a summary line with an expandable detailed view when clicked. See the example below:


<details>
  <summary>Click to expand</summary>
  The code that just made this work looks like this:
</details>
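
The tag also supports the standard open attribute if you want the content expanded by default:

<details open>
  <summary>Click to collapse</summary>
  This content starts out visible.
</details>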


As of this post, this tag is supported in all major browsers.

See the article from the Mozilla Developer Network (MDN) for more details.

Converting a CSV file to Markdown using SQLite

Have you ever needed to convert the contents of a CSV file to Markdown table format? SQLite comes to the rescue with just a handful of commands.

From time to time I have a need to create a table in Markdown format. Usually the original source is some kind of data sitting in a CSV file. There are quite a few converters out there, and of course, I could write my own. However, with my deepening interest in SQLite, I wanted to write a utility that used its power to do the work for me.

In 2020, SQLite added four new output modes to the CLI tool. One of great interest to me was ‘markdown’:

.mode markdown

Knowing how easy it is to import a CSV file into SQLite, I felt that a conversion from CSV to Markdown was probably just a few lines away.

First Try

$ sqlite3
sqlite> .headers on
sqlite> .mode csv
sqlite> drop table if exists temp_csvFile;
sqlite> .import input_file.csv temp_csvFile
sqlite> .mode markdown
sqlite> .headers on
sqlite> .output output_file.md
sqlite> select * from temp_csvFile;

Let’s go through each line to understand what is happening.


Once the SQLite CLI is launched, we turn headers on. Note that this setting affects query output rather than the import itself; when importing into a table that does not yet exist, SQLite automatically uses the first row of the CSV as the column names.

 .headers on

Next, we ensure that the mode is set to ‘csv’ as that will be the format of the file being imported.

.mode csv

Third, as a safety measure, drop the table we want to import to if it exists.

drop table if exists temp_csvFile;

Now perform the CSV file import.

.import input_file.csv temp_csvFile

At this point the entire contents of the CSV should be sitting in the ‘temp_csvFile’ table.

We want to have our data in a nicely formatted Markdown table once this is all done. To do this, we change the SQLite mode to ‘markdown’.

 .mode markdown

As a safety check, we make sure that column headers are turned on. We want them included in our output!

 .headers on

By default, the output from a SQL statement comes to the command line. In our case we want to write the output to a file. Let’s specify that now.

.output output_file.md

The last step is to run our select statement and grab everything from the table that holds the contents from the CSV file.

select * from temp_csvFile;

Now we should have a file in Markdown table format that represents the contents of the CSV file!


This works great, but it’s quite a bit of typing. Fortunately, the SQLite CLI tool has a command that will take in a file with multiple commands and execute them for you. It will even accept dot (.) commands!

Second Try

Let’s create a file with all of the above commands and then let the SQLite CLI tool read it in and execute it for us.

csv2markdown.sql

.headers on
.mode csv
drop table if exists temp_csvFile;
.import input_file.csv temp_csvFile
.mode markdown
.headers on
.output output_file.md
select * from temp_csvFile;

Now we can use this SQL file and let SQLite read and execute the contents.

sqlite3 '' '.read csv2markdown.sql'

In the above line we are opening a temporary database, signified by the empty string ''. The second parameter is the most interesting:

'.read csv2markdown.sql'

This tells the SQLite CLI to read and execute all commands in the file named ‘csv2markdown.sql’.
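
As a side note, the SQLite CLI also reads commands from standard input (the script later in this post relies on exactly that), so an equivalent invocation is:

sqlite3 '' < csv2markdown.sql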

Third Try

At this point, we have a handy SQL file we can pass to SQLite and make a CSV file to Markdown table conversion.

One downside to the SQL file though is that the name of our input file is hard coded. Let’s fix that.

csv2md.sh

#!/bin/bash

outputFile="$1.md"
rm -f "$outputFile"

cmd=$({ grep -v '^#' <<EOF
.headers on
.mode csv
drop table if exists temp_csvFile;
.import $1 temp_csvFile
.mode markdown
.headers on
.output $outputFile
select * from temp_csvFile;
EOF
})

echo -e "$cmd" | sqlite3 ''

This is a bash shell script that accepts a file name as an input parameter, creates the list of commands needed by SQLite, and then executes them.

Let’s walk through the code.


First, we take the name of the input file (our CSV file) and create the name of the output file by adding the .md extension. We also delete the output file if it already exists.

So, if our input file is myFile.csv, the output file will be myFile.csv.md

outputFile="$1.md"
rm -f "$outputFile"

The next section of the script creates the same contents as originally existed in our csv2markdown.sql file, except now the input and output file names are based on the file that is passed to our script.

(Hint: this took me a bit of time to get right. Read up on bash heredocs for a better understanding.)

cmd=$({ grep -v '^#' <<EOF
.headers on
.mode csv
drop table if exists temp_csvFile;
.import $1 temp_csvFile
.mode markdown
.headers on
.output $outputFile
select * from temp_csvFile;
EOF
})

Now that we have dynamically created the commands, pass them to SQLite for execution.

echo -e "$cmd" | sqlite3 ''

After changing the execution permissions on our script file (chmod +x csv2md.sh), the entire conversion is now just a single command.

./csv2md.sh myFile.csv

Wrap up

So there it is, a fun little script to convert the contents of a CSV file to Markdown table format using the power of SQLite.

Here is the full script if you want to modify it for your own use: https://gist.github.com/log4code/079d1d1f02ea2c766951b220ce5b23f5

Some great additions to the script could include:

  • Accepting an output file parameter (see the sketch below)
  • Verifying parameters
  • Being able to specify an alternative SELECT statement beyond SELECT * to provide on-the-fly filtering while converting
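
For instance, a minimal sketch of the first idea, using an optional second argument as the output file name (illustrative and untested):

# use the second argument if given, otherwise default to <input>.md
outputFile="${2:-$1.md}"
rm -f "$outputFile"

Usage would then look like ./csv2md.sh myFile.csv myTable.md.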

Main Photo by Mika Baumeister @Unsplash

SQLite dot-commands: output formatting using ‘.mode list’

The default output mode for SQLite is the `list` mode. Let’s look at ways to customize how it outputs data from queries and gotchas to look out for.

Only have a minute? Skip down to the tl;dr;.

The command line shell for SQLite provides a variety of ways to output query results to fit the needs of your project. In all, the latest version of the sqlite3 command shell provides 10 different ways to output query results. In this post we will cover the default output mode, list, and see the different options that are available.

All the examples below can be run using the Repl.it link here: https://repl.it/@log4code/SQLiteOutputFormats.

For this example we have a SQLite database with just one table. This table is named companies. It contains a small list of company records which we will query in different ways to show a variety of output formatting available via the .mode list command for the SQLite shell.

As previously mentioned, list mode is the SQLite shell’s default output mode; no activation is required. To show this, our first query will select all the rows and columns from the companies table.

select * from companies;

Here we can see the output with the default list mode:

1|Southern Tool Company|AL
2|Ohio Valley Tooling|OH
3|Midwest Machining, LLC|IN
4|Pacific Parts|CA
5|ABC Manufacturing|ME
6|Taylor & Sons Manufacturing|FL
7|Backlot Machining, Inc.|AZ

Each row in the result set is on a single line. Each column is separated by the pipe (|) character. For list mode, the pipe character is the default column separator.

Adding Column Headers

Notice that the output does not include column headers. This may or may not be what you want based on your needs. Adding column headers is as simple as performing an additional ‘dot command’ in the shell:

.headers on

The .headers command takes an argument (on or off) to allow for changing the output back and forth. To see the previous output with column headers, we will adjust our statements:

.headers on
select * from companies;

Now we have successfully added column headers to the output. Each column is separated by the same delimiter, in this case the | character.

company_id|company_name|company_state
1|Southern Tool Company|AL
2|Ohio Valley Tooling|OH
3|Midwest Machining, LLC|IN
4|Pacific Parts|CA
5|ABC Manufacturing|ME
6|Taylor & Sons Manufacturing|FL
7|Backlot Machining, Inc.|AZ

Changing the delimiter

While the | character is the default delimiter, it can be changed. What if one of the values returned contained a | character?

For example, what if instead of ‘Pacific Parts’ it was ‘Pacific|Parts’? The output for this row would now look like this:

company_id|company_name|company_state
4|Pacific|Parts|CA

Oops! Now we have 4 columns for that row instead of 3. Pacific and Parts now look like values in two separate columns instead of one. This is a good reminder of the importance of knowing about the data you are working with and checking the output for any issues you may need to be aware of.

We can solve this problem multiple ways. One way is to specify an output column delimiter different from the default |. We do this with the .separator command:

.headers on
.separator '|-|'
select * from companies;

Now we have specified the column separator to be what we believe should be unique: |-|. Let’s check out the output:

company_id|-|company_name|-|company_state
1|-|Southern Tool Company|-|AL
2|-|Ohio Valley Tooling|-|OH
3|-|Midwest Machining, LLC|-|IN
4|-|Pacific|Parts|-|CA
5|-|ABC Manufacturing|-|ME
6|-|Taylor & Sons Manufacturing|-|FL
7|-|Backlot Machining, Inc.|-|AZ

The column delimiter has indeed been changed from | to |-|. Now the Pacific|Parts value will work just fine with the output.
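
Another way to handle an embedded delimiter is to switch to an output mode that quotes string values, such as quote mode. As a quick sketch (the output formatting here is from memory, so verify it against your own shell):

.mode quote
select * from companies where company_id = 4;

4,'Pacific|Parts','CA'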

tl;dr;

  • The default output for the SQLite shell is .mode list.
  • This will output each row on a line with the default column delimiter of the | character.
  • To change the default delimiter character, use the .separator command.
  • By default, the .mode list command does not output column headers.
  • To add column headers to the output, use the .headers on command.

F# infinite sequence to read console input

F# infinite sequences can be used to continuously prompt for user input from the console. This is a good alternative to recursive functions.

Objective

As of the time of this post, I have only been learning F# for about 2 weeks. Having previously completed a series of F# koans by Chris Marinos, I was eager to branch out and start building something workable in the language. I remembered that in a Haskell class in college, one assignment was to create a Mastermind-like game that required the player to try to guess a 5-letter word from the English language. We were supplied with a flat file of all 5-letter English words and tasked with creating the game using Haskell. I thought that trying to do the same would be a great way to learn different aspects and modules within F#.

Game loop

The first task I wanted to complete was to figure out how to create a game loop within F#. For the game loop to work properly, it should be able to repeat the following actions indefinitely:

  1. Request the guess from the user for the correct word
  2. Evaluate the guess
  3. Return to step 1 if the guess is not correct

For this series of tutorials, I will be hosting the live code on repl.it for anyone that wants to progress through the samples and see a working version.

First attempt

My first attempt at the game loop was to write a function with tail recursion (although nothing to calculate yet) to eliminate stack overflow exceptions and keep things nice and simple.

This is a pretty straightforward approach with a game loop that gets the user input from the Console, outputs the input to the console, and then repeats the game loop.

open System

let getInput () =
    printf "guess:>"
    Console.ReadLine ()

let output (s:string) =
    printfn "You typed: %s" s

let rec gameLoop() =
    let input = getInput ()
    output input
    gameLoop() 

[<EntryPoint>]
let main argv =
    gameLoop()
    0 // return an integer exit code

[Screenshot: call stack after several iterations, no overflow!]

[Screenshot: game play]

Second attempt

Knowing that recursion can sometimes be difficult to wrap my brain around, and not wanting to constantly worry about stack overflow bugs, I decided to see if there was another approach utilizing the power of existing F# modules. Having been through the ‘koans’ course listed above, I decided to experiment with using F# sequences to replicate the game loop instead of using a recursive function.

The F# Seq module comes with many handy functions for creating and interacting with sequences. Of particular interest is the ability to create an F# infinite sequence that is lazily evaluated as needed.

For my game loop, I made a few changes in order to take advantage of using an infinite sequence. Each iteration in the sequence prompts for input and then outputs that input back to the Console.

open System

let printPrompt () = 
    Console.Write "guess:>"

let getLine = fun _ -> 
    printPrompt ()
    System.Console.ReadLine ()

let writeResponse (s:string) =
    printfn "You typed: %s" s

let progLoop () =
    let lines = Seq.initInfinite getLine
    Seq.iter writeResponse lines

[<EntryPoint>]
let main argv = 
    progLoop () |> ignore
    0

In order to allow the function ‘getLine’ to be used as the infinite sequence generator, the declaration was changed slightly to take a wildcard parameter (denoted by the underscore character). This is needed because the function passed to Seq.initInfinite receives the int index of the element to generate. Since the index is not needed for this sequence, it is declared as a wildcard instead of a named int.

The printPrompt function was also created to separate out the Console prompt writing as a separate function.

let getLine = fun _ -> 
    printPrompt ()
    System.Console.ReadLine ()

The progLoop function has now also been changed to use sequences instead of a recursive function. Once the sequence is defined using Seq.initInfinite, it is iterated over using Seq.iter. The iterator applies the function writeResponse to each item in the sequence.

let progLoop () =
    let lines = Seq.initInfinite getLine
    Seq.iter writeResponse lines

Since each item in the sequence pauses to ask for user input, the game loop of prompting for input and performing an action with that input is complete.

Conclusion

So there it is, a simple game loop that continuously prompts for a single line of user input and repeats it back to the Console. The next post in this series will cover processing the input from the user to be able to gracefully exit the game. Stay tuned!

VSCode: debugging code located in Python virtual environment

In trying to set up a virtual environment(virtualenv) for Python to be used in VSCode, I encountered a situation where I was unable to debug and hit the breakpoints in my code.

Problem

My first steps were to verify that I was following the documentation for selecting the correct Python interpreter. When this did not solve my problem, I dug deeper into VSCode debugging and virtual environments. Below is what I learned.

Setup

It had been a while since I had used VSCode, so I wanted to use a current Python project to refamiliarize myself with the IDE. I have also had limited exposure to virtual environments in Python, and I wanted to make sure that I at least attempted to set up my project properly.
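
For reference, a minimal sketch of the environment setup itself, using the built-in venv module (virtualenv works similarly; the environment name here is illustrative):

python -m venv myproject-env
source myproject-env/bin/activate      # on Windows: myproject-env\Scripts\activate
pip install -r requirements.txt        # if the project has dependencies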

Once my virtual environment was set up, I wanted to double check that I could debug my code in VSCode using the interpreter located in the virtual environment.

Simple Python script

a = 1
b = 2   # put breakpoint here

print(a+b)

Verify correct interpreter is selected

Breakpoints skipped
I set a breakpoint as indicated in the code above and started debugging. I expected the breakpoint to be recognized and execution to pause for inspection, but the script continued to run to completion without ever breaking. I double-checked my interpreter selection and tried again. Same behavior.

On a whim I decided to select a different interpreter (the base Anaconda interpreter). The breakpoint was recognized and execution paused for inspection! What? Now I was intrigued.

I tried selecting different interpreters on my machine, located in various other, older virtual environment setups. Each time, the breakpoint was recognized and execution paused. I then realized that my Python script file was located inside my virtual environment directory. It was becoming clear that this could be some kind of path or file location issue.

To test my location hypothesis, I moved my Python script outside of my virtual environment directory, selected the desired interpreter, and tried debugging again. Success! My breakpoint was recognized and the debugger in VSCode paused for inspection.

Research

While I had solved my immediate problem with debugging, I was not satisfied. I had stumbled upon a viable solution, but wanted to understand the root cause. It took a bit of searching and reading through GitHub VSCode issues and discussions before finding two separate threads that alluded to a proper explanation and solution.

I also spent some time reading up on how to use Python virtual environments, including advice against putting your application code inside the virtual environment directory. Keeping the two separate allows for clean separation and proper use of the requirements.txt file when recreating Python environments.

I probably spent 2 hours researching the background to my problem, and while frustrating at times, it was time well spent. Hopefully this post will surface for others searching for a solution to the same problem.

Solution

Best Solution
The better solution is to make sure your project code is NOT located inside of the virtual environment directory structure. This is desired anyway in order to separate your code from the environment setup.


Source: https://github.com/Microsoft/vscode-python/issues/2993

Alternative Solution
There is, however, an alternative solution. I would encourage you to test this out even just to learn how the debug configuration setting works in VSCode.

In this solution, we need to add the configuration "debugStdLib": true to our launch.json file. This will enable the debugging of standard library files located within the virtual environment directory. See Debugging Configuration documentation.


Source: https://github.com/Microsoft/vscode-python/issues/2520

launch.json

{
    "name": "Python: Current File (Integrated Terminal)",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "console": "integratedTerminal",
    "debugStdLib": true
},

Having already stumbled across the better solution, I also tested the alternative and added the debugStdLib setting to my launch.json file. Using this approach I was able to set breakpoints for a script located inside of the Python virtual environment directory and pause execution for inspection. I would recommend this approach any time you want to step through standard libraries to learn how they work.

Wrap up

Configuration is everything!

Investigating the solution to my VSCode debug problem for Python virtual environments was definitely tedious, but in the end my original problem makes sense. Having learned more about Python virtual environments, I discovered how to better structure my code and separate it from the environment. In addition, I also learned how to debug third-party code located inside the virtual environment. This knowledge will definitely be used for future problems I encounter.


Featured image by Lux Interaction @ unsplash.com

Install mssql-tools (sqlcmd) on Amazon Linux AMI

The steps below were the result of figuring out how to install mssql-tools on a Linux instance. Along the way I learned a little about `yum` and repository priorities.

Problem

During the past couple of weeks I have set out to learn more about Docker and how I can incorporate it more into development processes to automate and standardize workflows. I have been using a small AWS EC2 instance based on an Amazon Linux AMI. I wanted to use mssql-tools to connect to a SQL Server instance running inside a container. It did not appear that mssql-tools was installed on the machine at the time (in retrospect, it may just have been that I did not have the bin directory set up properly in my PATH).

Let’s dig in…

sqlcmd Version

My first step when trying to use sqlcmd was to check the version:

sqlcmd | grep Version

It appeared not to be installed, which is what led me to figure out how to install it.

As I mentioned before, I should have verified whether it was actually not installed, or simply not in my PATH yet. Lesson learned!

Uninstall previous version

Although I didn’t think there was a previous version to uninstall, I decided to follow the full installation instructions from Microsoft, which included uninstalling mssql-tools and unixODBC-utf16-devel.

sudo yum remove mssql-tools unixODBC-utf16-devel

Install current version

The next step for me was to install mssql-tools and unixODBC-devel.

Download the repository configuration file from Microsoft

sudo su
curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/msprod.repo
exit

Install tools

sudo yum install mssql-tools unixODBC-devel

First snag!

Most interesting was the message:

2 packages excluded due to repository priority protections

Time to do some digging.

yum repository priorities

Given my very limited experience with yum, I did a quick search and found the documentation for yum priorities. The usage section stated, “Packages from repositories with a lower priority will never be used to upgrade packages that were installed from a repository with a higher priority.” This seemed to be the first indication I was on the right path.

Given that the plugin appeared to be installed and active on this install of Linux, I found the configuration file (/etc/yum/pluginconf.d/priorities.conf) and updated the following line from

[main]
enabled=1

to

[main]
enabled=0

Side note: While writing this post I came across some documentation from Amazon that explains why this is on by default for Amazon Linux AMIs and suggests an alternate way around this problem to allow for other repositories. See AWS documentation.

Side note #2: There is an old Centos thread related to yum-priorities that is also worth reading.

With the plugin now disabled, I was able to successfully install mssql-tools and unixODBC-devel and accept the license terms.

Setup PATH

Lastly, I added the bin directory for sqlcmd to my bash PATH in the ~/.bash_profile file and confirmed the install.
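
A sketch of that addition, assuming the default mssql-tools install location of /opt/mssql-tools/bin:

# added to ~/.bash_profile
export PATH="$PATH:/opt/mssql-tools/bin"

# reload the profile and confirm
source ~/.bash_profile
sqlcmd | grep Version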

Automation

During my search, I came across a post by Kagarlickij Dmitriy. He provides a nice shell script to check your version, turn off yum priorities, and install mssql-tools. See his GitHub repository.


Featured image by Cesar Carlevarino Aragon @ unsplash.com

First look at SQL Server on Docker

After finishing my previous quick experiment with getting Docker up and running, I immediately wanted to experiment with SQL Server portability using Docker. The overall long-term goal is to have standard Docker images that all developers on the team could use for development and to assist with onboarding new developers to our team.

Goal

By the end of this article, I will show how I used resources provided by Microsoft to get SQL Server running in a Docker container and connect to the instance from outside the container, and from a different machine. Here we go!

Microsoft container images

Thankfully, Microsoft has provided a big helping hand to get started with SQL Server on Docker. For Linux, they provide a standard SQL Server container image and instructions on how to get started. I followed this quick start guide as closely as possible for my setup.

It is so easy to get started on a new project or technology and skip right past the ‘prerequisites’ section of a tutorial or documentation. I do it all the time, but I am trying to get into a better habit of taking the time to verify everything is in order before beginning. At the very least, I learn a new command or the location where something is stored.

I initially used the same Amazon Linux AMI from my first exposure to Docker, and while I knew that the version of Docker was compatible with the container image from Microsoft, it never hurts to get in the habit of double-checking yourself.

$ docker version

I also wanted to check my Docker storage driver to make sure it is compatible (overlay2) with the image.

$ docker info

Next I checked for total RAM, which Microsoft states should be at least 2GB.

$ free -m

It turns out this was still a t2.micro instance, so I had to resize my EC2 instance to get more RAM. I chose a t2.medium to get a bit more RAM to work with than just the minimum and to pick up an additional core.

Better! Back on track.

I also checked my available storage (8GB, based on the t2.micro instance I had before).

$ df

I decided to bump that up to 30GB using the instructions from AWS for extending a volume size for Linux.

And a quick check that the OS is 64-bit:

$ uname -m

Pull and run the image

Finally, it was time to pull the container image from Microsoft and run it. It took a bit longer than I wanted to get to this point, but I had successfully resized an EC2 instance and extended the size of my EBS volume along the way!

$ sudo docker pull mcr.microsoft.com/mssql/server:2017-latest

$ sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD={YourNewStrong!Passw0rd}' \
   -p 1433:1433 --name sql1 \
   -d mcr.microsoft.com/mssql/server:2017-latest

Then I verified the container was OK:

$ docker ps

All looks good.

Connecting to the database

In the interest of learning to use the interactive shell inside of a Docker container, I used the example from Microsoft to run SQL Server commands and make sure everything was running OK inside the container.

$ sudo docker exec -it sql1 "bash"
root@<container id>:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '{YourNewStrong!Passw0rd}'

This successfully produced a sqlcmd prompt:
1>

From this sqlcmd prompt I quickly ran through the steps from the Microsoft Quick Start Guide to create a database, table, and 2 records within that table.
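
From memory, those steps look roughly like the following (the database and table names come from that guide, so treat this as a sketch rather than an exact transcript):

1> CREATE DATABASE TestDB;
2> GO
1> USE TestDB;
2> CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT);
3> INSERT INTO Inventory VALUES (1, 'banana', 150);
4> INSERT INTO Inventory VALUES (2, 'orange', 154);
5> GO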

Connect outside the container

I had finally arrived at the point I was most excited about: connecting to the SQL Server instance in the container from outside of the container.

sqlcmd -S {your server ip},1433 -U SA -P '{YourNewStrong!Passw0rd}'

In my case I wanted to test connecting with SSMS on my local machine to the EC2 instance on AWS running the container (making sure the security group allowed inbound connections on the correct port from my machine). Success!

Conclusion

I’m really happy I was able to get this to work. It was amazing how straightforward it ended up being since the container image was provided by Microsoft. Some day I will do a full SQL Server install on Linux to take my journey a little deeper.

My next project with Docker will be backing up an existing database to a Docker container to provide a prepackaged image of a database that all developers can pull and test against.


Featured image by Tobias Fischer @ unsplash.com

First hands on with Docker

For me, learning about containers (and Docker in particular) is just part of my journey to keep up to date on the tools of modern software development. I don’t anticipate becoming a container expert or managing large Docker swarms (although you never know!), but having a decent understanding of the tools and technologies around containers fits into my process of using the best tools available for a given problem.

Why learn Docker?

I am fascinated to learn new ways to accomplish a task, not to mention that using containers for software development seems to be an idea that is sticking around. With that in mind, let the learning begin!

What exactly is Docker?

I won’t even pretend to give a better executive summary of containers and Docker than Docker themselves do, so I will spare you the attempt.

For a good overall summary of what Docker (and containers and images) is, I headed to the official Docker documentation, and was quickly up to speed on the overall benefits and high level idea behind containers.

Install Docker

While I primarily develop on Windows, for this project I wanted to utilize Linux as a forced way to brush up on other skills that often do not get enough attention due to time constraints. I happened to have a Linux instance available from a previous project, so I fired it back up and connected to it via Windows Subsystem for Linux and SSH.

I installed Docker CE (Community Edition).

My Linux install was an Amazon Linux AMI, so make sure to use the proper instructions for your version of Linux.

Instructions I followed for Docker

Uninstall old versions of Docker

Don’t skip this step! I had a previous version of Docker installed (without knowing what I was doing), so I wanted to make sure that was cleaned up first. If nothing was previously installed, nothing bad will happen.

Install and Confirm

Once I cleaned up my old install of Docker, I was able to successfully reinstall the current version of Docker CE, add my user to the docker group, and confirm the running status of Docker. Success!!
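
As a rough sketch, the overall shape on an Amazon Linux AMI looked something like this (package names and service commands vary by distribution, so treat these as assumptions and defer to the official instructions for yours):

sudo yum remove docker            # clean up any old install
sudo yum install -y docker        # install the current version
sudo usermod -aG docker ec2-user  # add my user to the docker group
sudo service docker start        # start the Docker daemon
docker info                       # confirm it is running (after logging out and back in)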

Create a Docker image

I ended up following the AWS introduction guide to Docker and created a ‘Hello World’ image that served a static web page from an instance of Apache. The tutorial was pretty straightforward; I only needed to tweak my AWS security group for the EC2 instance to allow inbound connections on port 80 from my machine.

Dockerfile

FROM ubuntu:16.04

# Install dependencies
RUN apt-get update
RUN apt-get -y install apache2

# Write the hello world page
RUN echo 'Hello World!' > /var/www/html/index.html

# Configure apache
RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh
RUN echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh
RUN echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh
RUN echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh
RUN chmod 755 /root/run_apache.sh

EXPOSE 80

CMD /root/run_apache.sh

Build docker image

docker build -t hello-world .

Verify image

docker images --filter reference=hello-world

Run image

docker run -t -i -p 80:80 hello-world

I added the -t -i flags to allow Ctrl+C to be used to interrupt docker run. See this GitHub issue for a more complete discussion.

[Screenshot: testing the ‘Hello World!’ page in a browser]

Summary

I obviously still have lots to learn, but this first look at Docker was relatively painless and has already given me a whole set of ideas to try out and incorporate into my development processes.


Featured image by Tim Easley @ unsplash.com

Opening Windows Command Prompt and PowerShell to current Explorer location

I recently came across an article (wish I still had the link) that mentioned how to quickly open a command prompt with the same working directory as the current folder in Windows Explorer.

All these years using Windows and it is still so easy to learn something new!


For so long I have used the painful process of copying the current location from the Explorer address bar, opening a command prompt and typing in:

cd /d {folder path}

Solution

The solution is so obvious that it is a little painful to just be learning it now. 🙂

From the explorer window, just type cmd into the address bar and hit enter.

Bingo!

PowerShell

Also works with PowerShell!

Nice!

Going in reverse

And of course, to go from a command window to Explorer to browse the current working directory, just type explorer . into the command window and hit Enter.

Browse away!
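
Both tricks together in one quick sketch (the folder path is hypothetical):

:: In Explorer at C:\Projects\demo, type cmd in the address bar and press Enter
C:\Projects\demo> explorer .
:: ...and a new Explorer window opens right back at C:\Projects\demo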

While I wish I had thought to try this years ago, I’m at least glad I know it now. It’s never too late to learn a new trick.