Running Docker Command Inside A Docker Container

Why would we need to do this? There are several occasions that call for it, especially when you are developing a continuous delivery pipeline. For example: you want to set up a closed environment (like a container) for a software testing process that requires external applications which can themselves run as containers, or you want to build an application in a container and then deploy it on another host using only the Docker API, without shell command execution. There are two common methods to achieve this. The first is binding the Unix socket of the running Docker Engine into the container. The second is installing a separate Docker Engine inside the container. For instance, to run a container based on a Docker 20.10 image, we can run the following command.

docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm docker:20.10

Now, you can run any docker command inside the container that you have just ...
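For a quick sanity check (a minimal sketch; the commands assume the docker:20.10 image from above), the client inside the container should reach the host engine through the mounted socket:

docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm docker:20.10 sh
# inside the container: these commands hit the host's Docker Engine
docker version
docker ps

Note that with the socket-binding method, containers started from inside become siblings on the host, not children of the container.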

Utilizing Worker Thread in Node.js

The worker thread module became stable in Node.js version 12. This module enables us to run multiple Node.js operations in parallel using threads. In the past, we couldn't do this easily; we usually ended up utilizing the cluster module or spawning a child process. The difference in utilizing threads is that we have shareable resources (memory), and the main thread and its child threads can communicate and pass operation results directly through message passing. A child thread is usually used for distributing computation load in an application: the main thread may offload a computationally expensive or asynchronous process to a child thread. For instance, the following code shows how we can create a worker thread for a file reading process and then send the result to the main thread.

// module.js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
if (isMainThread) { // if it is accessed as the main thread ...
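Because the excerpt above is cut off, here is a minimal runnable sketch of the same pattern (the single-file layout and the file being read are illustrative assumptions, not taken from the post):

// worker-demo.js — run with: node worker-demo.js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const fs = require('fs');

if (isMainThread) {
  // main thread: spawn a worker and hand it a file path via workerData
  const worker = new Worker(__filename, { workerData: { file: __filename } });
  worker.on('message', (content) => console.log('received', content.length, 'bytes'));
  worker.on('error', (err) => console.error(err));
} else {
  // worker thread: do the blocking read here, off the main thread
  const content = fs.readFileSync(workerData.file, 'utf8');
  parentPort.postMessage(content); // send the result back to the main thread
}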

Setting Up Docker Context

When we want to run a container on a remote Docker Engine host, we can utilize Docker's context feature. A context lets us maintain connection information for several Docker Engine hosts so that they can be accessed remotely from our local host. Adding a record is done by running the following command.

docker context create yourContextName --docker "host=ssh://"

The connection utilizes the SSH protocol, so we need to generate keys for establishing communication with the remote host. After storing the public key on the remote host, we can spawn a new SSH agent in the current session on our host and add the private key into the agent.

eval $(ssh-agent -s)
cat /path/to/private/key | ssh-add -

Before we can access the remote Docker API, we need to add the remote host's keys to our ~/.ssh/known_hosts file, either by making an SSH connection for the first time or by using ssh-keyscan. Now, we can access the remote Docker API by specifying the context on the loc...
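As an illustration of that last step (yourContextName follows the create command above; everything else is a placeholder):

# run a one-off command against the remote engine
docker --context yourContextName ps

# or switch the default context for all subsequent commands
docker context use yourContextName
docker info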

Levi vs Beast Titan

 ... or is it Beast Levi vs Titan

Enabling Imagick to Read or Manipulate PDF File

Imagick is one of the popular tools for manipulating image files. Popular languages such as PHP and Node.js provide libraries for manipulating images based on Imagick. One of the common use cases is generating a thumbnail from an image or PDF file. In PHP, we can install the PHP Imagick module by running the following command.

apt install php-imagick

Then, we can verify the installation by running this command.

php -m | grep imagick

For example, to generate a thumbnail image for a PDF file in PHP, we can use the following script.

<?php
$im = new Imagick();
$im->setResolution(50, 50); // set the reading resolution before reading the file
$im->readImage('file.pdf[0]'); // read the first page of the PDF file (index 0)
//$im = $im->flattenImages(); // @deprecated
$im = $im->mergeImageLayers(Imagick::LAYERMETHOD_FLATTEN); // handle the transparency problem
$im->setImageFormat('png');
$im->writeImage('thumbnail.png'); // write the result (output file name is illustrative)
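On many distributions, ImageMagick's security policy blocks PDF handling by default, so readImage() on a PDF can fail with a "not authorized" error. A sketch of the usual fix (the policy file path varies by version, e.g. /etc/ImageMagick-6/policy.xml): change the PDF coder line from

<policy domain="coder" rights="none" pattern="PDF" />

to

<policy domain="coder" rights="read|write" pattern="PDF" />

Ghostscript also needs to be installed, since ImageMagick delegates PDF rasterization to it.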

Securing Redis to Be Accessed From All Interfaces

Redis can bind to all interfaces with the bind * -::* configuration. But Redis also enables protected-mode by default in its configuration file, which makes bind * -::* ineffective on its own: protected-mode requires us both to explicitly state the binding interfaces and to define user authentication. The unsecured way is to set protected-mode no in the configuration. That makes our Redis server accessible from any interface without authentication. It may be fine if we deploy our Redis server in a closed environment, such as a containerized one, without exposing or mapping any port to the Redis service port, so that the service can only be accessed by other services in the container's network. The recommended way is to keep protected-mode yes in the configuration. Then, we need to add a new user authentication configuration and limit access for the default user. The default user is the user with no assigned name when the client tries to connect ...
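A minimal sketch of such a configuration in redis.conf (the user name and password are placeholders; this uses the ACL syntax available since Redis 6):

# disable the passwordless default user
user default off
# create a named user with a password, access to all keys, and all commands
user appuser on >s0mepassw0rd ~* +@all

The same rules can also be applied at runtime with the ACL SETUSER command.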

Managing Node.js-based Web Project Using Gulp and Nodemon

In building a website, there are two main components: the frontend and the backend. If we build a website based on Node.js for the backend side, and of course JavaScript and CSS for the frontend side, we should handle the code in the project differently. We may run a linter and the TypeScript transpiler on our Node.js code, while on the frontend side we may additionally minify and bundle the project's styles and scripts. The backend program needs to be restarted whenever its code changes and the transpiler has run; the frontend code likewise needs to be re-bundled whenever it changes. Nodemon is a tool designed to restart a Node program when a change in the program's code is detected. Gulp is a task runner that can be utilized to watch for changes in the code and perform specific tasks. In this post, we will make the transpiler run and the backend program restart whenever the code changes. We will also compile our Sass-...
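A minimal sketch of what the resulting gulpfile.js could look like (the paths, task names, and the choice of gulp-nodemon and gulp-sass are assumptions for illustration, not taken from the post):

// gulpfile.js
const gulp = require('gulp');
const ts = require('gulp-typescript');
const sass = require('gulp-sass')(require('sass'));
const nodemon = require('gulp-nodemon');

const tsProject = ts.createProject('tsconfig.json');

// transpile backend TypeScript into dist/
function backend() {
  return tsProject.src().pipe(tsProject()).pipe(gulp.dest('dist'));
}

// compile Sass into public/css/
function styles() {
  return gulp.src('src/styles/**/*.scss').pipe(sass()).pipe(gulp.dest('public/css'));
}

// restart the server whenever the transpiled output in dist/ changes
function serve(done) {
  nodemon({ script: 'dist/server.js', watch: ['dist'], ext: 'js' });
  done();
}

// watch sources and re-run the matching task on change
function watchAll() {
  gulp.watch('src/**/*.ts', backend);
  gulp.watch('src/styles/**/*.scss', styles);
}

exports.default = gulp.series(gulp.parallel(backend, styles), serve, watchAll);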

Several Useful Linux Tools

The following tools may already be installed on your Linux system because some of them are basic tools. But if you installed a Linux distribution from the Docker registry, which ships with only minimal programs, the following tools may not be available by default.

net-tools: provides tools for network-related tasks such as ifconfig.

software-properties-common: required if you want to enable the add-apt-repository command.

nano: this text editor is usually already available.

ca-certificates: a deb package that contains certificates provided by the Certificate Authorities. It also contains an updater tool that can be used in a cronjob if needed.

gnupg2: GNU Privacy Guard is GNU's tool for encrypting data and creating digital signatures. GnuPG is a complete replacement for PGP; it includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard.

openssh-client: tools for generating authentication keys and ...
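To pull all of them into a minimal Debian/Ubuntu-based image at once (a sketch; trim the list to what you actually need):

apt-get update && apt-get install -y net-tools software-properties-common nano ca-certificates gnupg2 openssh-client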

Persisting Data and Replication in Redis

As we know, Redis is an in-memory key-value store database. If our data is stored in the host memory (RAM), how can we restore all values from the last state of our system in case of a system reboot or a power outage? Redis provides two options for persisting our data. The first is creating a snapshot; the second is appending each write action to a file, which is also called the append-only file (AOF) method. Applying those options is as trivial as updating several lines of the Redis configuration file. Redis performs snapshotting with certain rules by default. Enabling the auto-snapshot method with different rules is done by configuring the following lines in the /etc/redis/redis.conf file.

save 300 10
save 30 1000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis

The line save 300 10 means a snapshot will be automatically updated in the background if at least 10 writes have occurred within 300 seconds. The line save 30 1000 likewise triggers a snapshot if at least 1000 writes have occurred within 30 seconds. ...
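For the AOF option, a minimal sketch of the corresponding directives in redis.conf (the values shown are the common defaults):

appendonly yes
appendfilename "appendonly.aof"
# fsync policy: always (safest, slowest), everysec (good trade-off), or no
appendfsync everysec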

Utilizing HTTP/2 Push for Faster Page Load in Node.js

HTTP/2 has several advantages over HTTP/1 that I've mentioned in my earlier post. In this post, I want to show how a push request can be performed using Node.js to create an HTTP/2 server. A push request is used to push static files such as scripts and styles so that the client can consume those static files as soon as possible, without the need to request them first. In this example, several built-in Node modules are required, plus an external module named mime for ease of content-type setting. Let's install it first.

npm init
npm i --save mime

HTTP/2 encodes all headers of a request and introduces several new headers for identifying a request, such as :method and :path. For clarity, I take some constants related to the HTTP/2 headers from the http2.constants property. Let's create the server.js file.

const http2 = require('http2');
const {
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_METHOD,
  HTTP2_HEADER_CONTENT_TYPE,
  HTTP2_HEADER_CONTENT_LENGTH,
  HTTP2_...
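Since the excerpt is cut off, here is a minimal runnable sketch of the push pattern it describes (the TLS file paths, the pushed asset, and the HTML body are illustrative; browsers require HTTPS for HTTP/2, hence the self-signed certificate):

// server.js
const http2 = require('http2');
const fs = require('fs');
const mime = require('mime');
const { HTTP2_HEADER_PATH, HTTP2_HEADER_CONTENT_TYPE } = http2.constants;

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // self-signed key/cert for local testing
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[HTTP2_HEADER_PATH] === '/') {
    // push the stylesheet before the client requests it
    stream.pushStream({ [HTTP2_HEADER_PATH]: '/style.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile('style.css', {
        [HTTP2_HEADER_CONTENT_TYPE]: mime.getType('style.css'), // 'text/css'
      });
    });
    // then answer the original request with the page itself
    stream.respond({ ':status': 200, [HTTP2_HEADER_CONTENT_TYPE]: 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello HTTP/2</h1>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);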