Tutorial: Full-stack JavaScript for the Enterprise. Getting started with: Ext JS, Node.js, Express, MongoDB and Docker. (8)


This is the last part of this tutorial series; it covers GitHub and Docker Hub.

GitHub

Navigate to GitHub to add a new repository:
https://github.com/new

Create two git repositories:

  • docker-ext-client
  • docker-node-server
Add a .gitignore file to the following folders:

  • dockerextnode/client/
  • dockerextnode/server/

It should contain the following ignore rules:
https://gist.github.com/savelee/970c0d72195ed5b9ca7c5ca533d0a4de
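
In case the gist is unavailable, a minimal sketch of typical ignore rules for a Sencha/Node.js project could look like this (an assumption on my part; the gist is authoritative):

# dependencies installed by npm (rebuilt from package.json)
node_modules/
# log files
*.log
# OS cruft
.DS_Store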

For both folders, type the following commands on the command line:

$ git init
$ git status
$ git add .
$ git commit -m "First commit"
$ git remote add origin https://github.com/<username>/<repository>.git
(for example: git remote add origin https://github.com/savelee/docker-ext-client.git)
$ git push -u origin master --force


Docker Hub: Distribution of containers

Now that you’re reading this guide, you might be interested in trying it yourself, or maybe you just want to see these examples working live. Well, with Docker you can easily run these container images. In case you have the Docker Toolbox installed, all you need is access to my containers. Enter Docker Hub! Docker Hub is like GitHub, but for Docker images.

The Docker Hub is a public registry maintained by Docker, Inc. It contains images you can download and use to build containers. It also provides authentication, work group structure, workflow tools like webhooks and build triggers, and privacy tools like private repositories for storing images you don’t want to share publicly.

Let me first show you how to add your images to Docker Hub; afterwards, I will show you how to check out these images.
First, we are going to add an Automated Build repository on Docker Hub. For that, we first need to push the code to GitHub. If you followed this guide, you have done this by now.


Adding images to Docker Hub

We will need working images, which you will have if you completed the previous chapters.

Next, we will link our GitHub account with Docker Hub to add an automated build repository. You will need a Docker Hub account: https://hub.docker.com/login/

We will automate the Docker builds by linking GitHub to Docker Hub, so that everything I push to GitHub is automatically built on Docker Hub as well. We can achieve this with webhooks.
Go to: https://hub.docker.com/account/authorized-services/

You can choose to link to GitHub or Bitbucket. See: https://docs.docker.com/docker-hub/github/
I’m using GitHub for this tutorial.

Choose between “Public and Private” or “Limited Access”. The “Public and Private” option is the easiest to use, as it grants Docker Hub full access to all of your repositories. GitHub also allows you to grant access to repositories belonging to your GitHub organizations. If you choose “Limited Access”, Docker Hub only gets permission to access your public data and public repositories.

I choose “Public and Private”, and once that is done, it forwards me to a GitHub page (I’m logged in on GitHub), which asks me to grant permission so Docker Hub can access the GitHub repositories:


Once you click Authorize application, you will see the Docker Hub application in the GitHub overview: https://github.com/settings/applications

Now go back to your Docker Hub dashboard, and click Create > Create Automated Build in the dropdown next to your account name, in the top right:


Select Create Auto-Build: GitHub, select your GitHub account, and then select the repository docker-ext-client. Enter a description of at most 100 characters and save. Repeat these steps for docker-node-server.


Once the Automated Build is configured, it will automatically trigger a build and, in a few minutes, you should see your new Automated Build on the Docker Hub registry (https://hub.docker.com/). It will stay in sync with your GitHub or Bitbucket repository until you deactivate the Automated Build.

Now go to Build Settings. You should see this screen:


You can click the Trigger button to trigger a new build.

Automated Builds can also be triggered via a URL on Docker Hub. This allows you to rebuild an Automated Build image on demand. Click the Active Triggers button.
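
Triggering such a build could look like this with cURL (a sketch; the exact trigger URL and token are shown on that screen, the token below is a placeholder):

$ curl -H "Content-Type: application/json" --data '{"build": true}' -X POST https://registry.hub.docker.com/u/savelee/docker-ext-client/trigger/<your-trigger-token>/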

Creating an automated build repository means that every time you push to your GitHub repository, a build will be triggered on Docker Hub to build your new image.

Make sure, when committing the docker-ext-client app to Git, that you check in the production build folder (build/production/Client), as this folder will be used by the Docker image, not the folder with your local Sencha (class) files.

Running images from Docker Hub

Now that we know how to add Docker images to Docker Hub, let's check out some images.

First download the image from the Docker Hub:

$ docker pull savelee/docker-ext-client

Then run the new Docker image. The flags mean the following:

--name = give your container a name
-p = map a host port to the container port that is exposed in the Dockerfile
-d = run the container detached, in the background; the final argument is the name of the image you want to run

For example:

$ docker run --name extjsapp -p 80:80 -d savelee/docker-ext-client

Here are the commands for pulling and running the Node.js server container:

$ docker pull savelee/docker-node-server
$ docker run --name nodeapp -p 9000:9000 -d savelee/docker-node-server

Conclusion

The last part of the tutorial focused on publishing Docker images to Docker Hub. If you followed all eight parts of this series, you've learned the following:

  • Full-stack JavaScript for the enterprise, with JavaScript on the front-end (Ext JS 6).
  • Node.js on the back-end
  • A NoSQL database with MongoDB and Mongoose
  • About Docker, and how to create containers
  • How to link Docker containers with Docker Compose
  • How to publish Docker images with Github and Docker Hub

The best part of all this is that you can easily swap one technology for another. For example, I could link in new Docker images: Ext JS 6 on a Python/Django environment with MySQL, or an Angular 2 app on Node.js with CouchDB...

I hope you liked it, and that it comes in handy.

Cheers!

Tutorial: Full-stack JavaScript for the Enterprise. Getting started with: Ext JS, Node.js, Express, MongoDB and Docker. (7)


This is part VII of the tutorial, and covers Docker Compose.

Docker Compose: Linking containers

Docker Compose is a tool for defining and running multi-container Docker applications.

Docker is a great tool, but to really take full advantage of its potential it's best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become confusing.

The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team eventually decided to make their own version based on the Fig source. They called it: Docker Compose.
In short, it makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.

So, with Docker Compose you can spin up various Docker containers and link them to each other.
That’s great, because if you ever decide to get rid of the Node.js back-end and would rather use something else, let’s say Python with Django, you would just link to another image.

(For example: here's the same API back-end service, but built in Python with Django/Django REST Framework:
https://github.com/savelee/docker-django-server)

You will use a Compose file (docker-compose.yml) to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
For more information, see: https://docs.docker.com/compose/overview/


Remember how we wrote URLs to the Node.js back-end in our client Sencha app? We hardcoded them to the localhost URL. Now this won’t work: when the container is running, it won’t know localhost, only its own IP address.

Let’s figure out what the Docker machine IP address is. While you are still in the Docker terminal, enter the following command:

$ docker-machine ip

We will now need to change the Sencha URLs. You could hardcode them to the Docker machine IP, or you could let JavaScript detect the hostname you are currently using. (Remember, our Node server is on the same host as our Sencha app, it just has a different port.)

The live URL in the client/util/Constants.js needs to be changed to:

'LIVE_URL': window.location.protocol + "//" + window.location.host + ':9000',
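
To give an idea of how this plays out, a hypothetical usage of that constant in a store proxy could look like the sketch below (the store and the way Constants is exposed are my assumptions, not code from the tutorial app):

// assumes Client.util.Constants exposes LIVE_URL, e.g. via statics
Ext.define('Client.store.Users', {
    extend: 'Ext.data.Store',

    // LIVE_URL resolves to e.g. http://192.168.99.100:9000,
    // whichever host the page itself was served from
    proxy: {
        type: 'rest',
        url: Client.util.Constants.LIVE_URL + '/users'
    }
});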

You will need to build the Sencha app before moving on with Docker. We will copy the Sencha build directory over to our container, and this build needs to be finalized, concatenated and minified, to get the best performance when serving the page.
(Manually copying builds over to folders can be automated too, by the way. Take a look at one of my previous posts: https://www.leeboonstra.com/developer/how-to-modify-sencha-builds/)

Navigate to the dockerextnode/client folder:

$ sencha app build classic
$ sencha app build modern

We’re going to run our MongoDB database and our Node.js back-end in separate containers as well. We can use official images for this. Node.js has an official Docker image: https://hub.docker.com/_/node/
And also MongoDB has its own Docker image: https://hub.docker.com/_/mongo/

We will need to configure the Node.js image, because we need to copy over our own back-end JavaScript code. Therefore, create one extra Dockerfile in the server folder.
The contents will look like this:

server/Dockerfile:
https://github.com/savelee/docker-node-server/blob/master/Dockerfile
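
In case you just want the gist of it, a minimal sketch of such a Dockerfile could look like this (an assumption on my part; the linked file on GitHub is authoritative):

# base the image on the official Node.js image
FROM node:5.8

# create and use the app directory inside the image
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# install dependencies first, so Docker can cache this layer
COPY package.json /usr/src/app/
RUN npm install

# copy the back-end source code into the image
COPY . /usr/src/app

# the Express server listens on port 9000
EXPOSE 9000
CMD ["npm", "start"]

Copying package.json and running npm install before copying the rest of the source means dependency installation gets its own cached layer, so rebuilds stay fast when only application code changes.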

Once we are done with that, we can create our Docker composition, in the root of our dockerextnode folder:
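
A minimal sketch of what this docker-compose.yml could look like, assuming one container each for the client, the server and MongoDB (the service names and exact contents are my assumptions):

client:
  build: ./client
  ports:
    - "80:80"
server:
  build: ./server
  ports:
    - "9000:9000"
  links:
    - mongo
mongo:
  image: mongo

The links entry is what makes Docker inject the MONGO_* environment variables into the server container; we will rely on those further down.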

Build with:

$ docker-compose up --build

After building the composition, you can quickly boot up all the containers at once with:

$ docker-compose up

Note:
By the way, to build and run this image on its own, use these commands:

$ docker build -t nodeserver .
$ docker run -d --name dockerextnodeserver -p 9000:9000 nodeserver

You can test it in your browser by entering the ip address plus /users:
http://192.168.99.100:9000/users

Now you can visit the application in your browser. You will need to figure out what the IP address is. Remember:

$ docker-machine ip

For me it gives back this ip address: http://192.168.99.100/

You will need to create the first login credentials. Open Postman or use CURL:

$ curl -H "Content-Type: application/json" -X POST  -d '{ "username": "myusername", "password": "mypassword"  }' http://192.168.99.100:9000/register

For Postman:
- Choose the method: POST
- With the URL: http://192.168.99.100:9000/register
- Select the body tab
- Create 2 x-www-form-urlencoded fields: username & password, and specify the values that belong to these fields.

Now you can test your application!

Whoops. There’s a problem with this code. The Node.js server can’t connect to my MongoDB!
This is because it’s trying to connect to the Mongo database on localhost, but our Mongo database isn’t on the local machine; it runs in its own container. You could of course hardcode the container IP in your Node.js script, or you can use the environment variables which are automatically added by Docker when it links the containers. (For a link alias mongo exposing port 27017, Docker creates variables such as MONGO_PORT_27017_TCP_ADDR and MONGO_PORT_27017_TCP_PORT.)

In server/libs/users/index.js, change the mongoose.connect line to:

mongoose.connect('mongodb://'+settings.mongoAddress+':'+settings.mongoPort+'/'+settings.dbName);

Open server/config/local_settings.js and change it to the code below, so it contains the environment variables:

module.exports = {
  "secret": "mysecret",
  "mongoAddress": process.env.MONGO_PORT_27017_TCP_ADDR || 'localhost',
  "mongoPort": process.env.MONGO_PORT_27017_TCP_PORT || 27017,
  "dbName": 'dockerextnode'
}
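
If you want to check that Docker actually injected these variables, you could list the environment of the running server container (a sketch; the exact container name depends on your compose project, Compose typically names it <folder>_<service>_1):

$ docker exec dockerextnode_server_1 env | grep MONGO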


That's awesome: you've now learned how to set up multiple Docker containers and link them together.
In our next tutorial, we will look into the distribution of containers.

READ THE NEXT PART

https://www.leeboonstra.com/developer/tutorial-full-stack-javascript-for-the-enterprise-getting-started-with-ext-js-node-js-express-mongodb-and-docker-8/

Tutorial: Full-stack JavaScript for the Enterprise. Getting started with: Ext JS, Node.js, Express, MongoDB and Docker. (6)


This is part VI of the tutorial, and covers Docker.

Docker: Containerize your apps

A Docker container is similar to a virtual machine. It basically allows you to run a pre-packaged "Linux box" inside a container. The main difference between a Docker container and a typical virtual machine is that Docker is not quite as isolated from the surrounding environment as a normal virtual machine would be. A Docker container shares the Linux kernel with the host operating system, which means it doesn't need to "boot" the way a virtual machine would.

You can think of a Docker image as a complete Linux installation. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it's perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa). Docker containers are isolated from the host machine by default, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network.


Docker containers run ephemerally by default, which means that every time the container is shut down or restarted it doesn't save its data — it essentially reverts to the state it was in when the container started.

First make sure you have Docker properly installed on your machine.
Mac OS X users can follow this guide: https://docs.docker.com/engine/installation/mac/
Windows users this one: https://docs.docker.com/engine/installation/windows/
There are also various guides available for installing Docker on Linux or in cloud environments, by the way.

You will need to install the Docker Toolbox. It includes the Docker terminal, Docker Machine, Docker Compose, etc.

You can test if Docker is installed by running the following commands:

$ docker -v
$ docker-machine version

After installing, start the Docker Quickstart Terminal application. It will take a while, but afterwards it opens another terminal window, with a message like this:


[ASCII-art Docker whale]

docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com

In this case, it will configure Docker on my workstation on this local IP address: 192.168.99.100.

Now, let’s create a Docker file named Dockerfile (note: it does not have an extension) and save it in the dockerextnode/client folder.

We will create a new Docker image and base it on another Docker image: the official Nginx image. https://hub.docker.com/_/nginx/

Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The Nginx image will serve all our static content.

Here are the contents of the client/Dockerfile. See the comments for explanation:
https://github.com/savelee/docker-ext-client/blob/master/Dockerfile
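
In case you can't follow the link, a minimal sketch of such a Dockerfile could look like this (my assumption; the linked file on GitHub is authoritative):

# base the image on the official Nginx image
FROM nginx

# copy the production build of the Ext JS app into Nginx's web root
COPY build/production/Client/ /usr/share/nginx/html/

# Nginx serves the static content on port 80
EXPOSE 80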

To finally create the image, we need to run the following command from the dockerextnode/client folder, in the Docker terminal window:

$ docker build -t extclient .

Note: Because I migrated from the Boot2Docker command to Docker Machine, I wasn’t able to build here. Instead I received the following error: “Cannot connect to the Docker daemon.” I had to run the following line on my CLI first, before building, which regenerates the TLS certificates:
$ docker-machine regenerate-certs default

To test if it worked, run:

$ docker-machine env default

To see your newly created image, run the following Docker command:

$ docker images

You will see the images that are currently installed on your workstation. It could look like this:

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
extclient               latest              4ad898544bec        4 minutes ago  

The name extclient is our Ext JS Docker image; we specified this name in the build command (-t extclient).

To remove all images use:

$ docker rmi $(docker images -q)

To remove all containers including the running ones use:

$ docker rm --force `docker ps -qa`

To run a container using the image we just created, run:

$ docker run -d --name dockerextnodeclient -p 80:80 extclient

You can test it in your browser by entering the ip address in the browser:
http://192.168.99.100

In case a Docker container automatically exits because of an error, you might want to look into the logs:

$ docker logs <container-id>

For example:

$ docker logs 2f9236343def

We are running in the background a new container called “dockerextnodeclient”, which maps port 80 to the port that the Dockerfile exposes from the image named “extclient”.

Now the container is running. To see our app inside the container we need to know the ip of the Docker Machine:

$ docker-machine ip

To see running containers use:

$ docker ps -a

This works, but only for the front-end, not for our Node.js back-end and Mongo database. Of course, you could edit the Dockerfile and add RUN commands to install Node.js and Mongo on this image. However, that would be a bit silly, and it would take the magic powers of Docker away.
A much better approach would be to create separate images for Sencha, Node.js and MongoDB. That's where Docker Compose comes into play... We will look into that in the next part of the tutorial.

READ THE NEXT PART

https://www.leeboonstra.com/developer/tutorial-full-stack-javascript-for-the-enterprise-getting-started-with-ext-js-node-js-express-mongodb-and-docker-7/

Tutorial: Full-stack JavaScript for the Enterprise. Getting started with: Ext JS, Node.js, Express, MongoDB and Docker. (1)


This is part I of the tutorial, and covers: JavaScript on the client.

Yeah, you are correct: when you have a web project optimized for production, you can use FTP and simply upload the folder to your server. That’s easy, and when you only have a simple HTML5/JS/CSS app, or an application with a PHP back-end on Apache, this probably works fine for you. But what if you have a very complex application, or you are working on an application with a large team? You probably want to automate as much as possible, and make every live build easy.

This tutorial will show you how you can create an app where we use JavaScript on the client (an Ext JS 6 app) and JavaScript on the server (Node.js with Express). Maybe you have played around with Node.js before. When you configure a Node.js app with Express, you will probably need to install packages via the NPM package manager. These are all dependencies. Now imagine you’ve created a fully working back-end on your local workstation, with Node.js and a MongoDB database. You had to install a lot of packages and make some configurations on your system. This can be a configuration where you save environment passwords, or maybe even hardware configurations.
What you don’t want is to manually replicate all the settings and configurations you made locally, again on the server. Ideally, you take whatever you have on your local machine and carry that over. Maybe you even want to run the same operating system and hardware in production. This is where Docker comes into play.


About Docker

With Docker you can create an isolated container with all the files such as dependencies and binaries for your app to run, making it easier to ship and deploy. It simplifies the packaging, distribution, installation and execution of (complex) applications.
So, what is an isolated container? These containers are self-contained, preconfigured packages that a user can fetch and run with just a single command via the Docker Hub (like GitHub, but for Docker). By keeping different software components separated in containers, they can also be easily updated or removed without influencing each other.
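
For instance, running a published container really is a single command. This is a preview of what we'll do in the last part of this series; the image name below is the one published there:

$ docker run -d -p 80:80 savelee/docker-ext-client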

What you will need:

For this tutorial I used: Ext JS 6 with Cmd 6.0.2, Node.js 5.8 with NPM installed, and Docker 1.10. Please install these before you continue.

Ext JS 6: Create the client app

Create the following folder somewhere on your hard drive: dockerextnode.

Put a temporary copy of the Sencha SDK inside dockerextnode (for example ext-6.0.2). If you don’t have Ext JS yet, feel free to download the trial: https://www.sencha.com/products/evaluate/

Open Windows Command / Terminal, and navigate on the command-line to the dockerextnode folder. From there enter the following commands:

$ mkdir server
$ cd ext-6.0.2
$ sencha generate app Client ../client

You’ve now created two folders: the server folder, which will later contain the Node.js code, and the client folder, which contains the copy of the Sencha SDK together with a demo app.

Let’s remove the temp folder:

$ cd ..
$ rm -Rf ext-6.0.2

You’ve now removed the temporary Sencha SDK folder. We can now start testing our Sencha demo app:

$ cd client
$ sencha app build production
$ sencha app watch

This command will spin up a Jetty server on http://127.0.0.1:1841. Visit this page in the browser, and confirm you see the Sencha demo app. Once you’ve seen the demo app, we can stop the server by stopping sencha app watch (with CTRL+C, for example). We will keep the demo app as it is, but this could be a nice starting point for you when you want to create your own app.

NOTE:
By default, the sencha app watch command starts the development server on the internal IP at port 1841. If you want to change the server’s port, for example to port 8082, you will have to start the server via the web command. This command will only boot up an internal server, and won’t “watch” your app for changes:

$ sencha web -port 8082 start

Want to checkout all my code? I hosted it on Github:
https://github.com/savelee/docker-ext-client

By the end of this part of the tutorial, you will have a working JavaScript client app, created with Sencha Cmd and Ext JS 6. The next part of this tutorial will cover the setup for creating a Node.js with Express app.

 


Read the next part

https://www.leeboonstra.com/developer/tutorial-full-stack-javascript-for-the-enterprise-getting-started-with-ext-js-node-js-express-mongodb-and-docker-2/