

Working With Gulp and PM2

Gulp is a popular task runner that integrates well with various helper tools for code development. I've previously written a post about how Gulp can interoperate with Nodemon. In this post, I show how it can work along with PM2. PM2 is a powerful process manager that runs on multiple platforms and supports a variety of technologies. PM2 is quite different from Nodemon: Nodemon focuses specifically on monitoring a Node.js application, while PM2 is a process manager with rich features for maintaining many application processes at a time, even with support for a clustering mechanism. Besides, PM2 usually runs as a daemon, so we need a different approach to integrate it with Gulp. For instance, we may use Gulp to do some work such as linting, transpiling, or file copying after each code change. After Gulp runs its main tasks, we instruct it to restart or stop the application process.

Levi Ackerman, Nothing Left

Who is your favorite character in Attack on Titan?

Specify Different Certificates To Access Different Git Repositories

Besides HTTPS, we can connect to a Git repository using SSH. For authentication, we store our public key on the remote host and set the remote Git URL on our host to the correct address for an SSH connection. For example: git remote set-url myremote ssh:// Common SSH client tools provide a specific parameter to set which private key should be used for authentication; for example, the OpenSSH client provides the -i parameter to specify the location of the private key. Meanwhile, common Git client tools may not provide such an option. When we run a Git command, the tool looks for a private key stored in the default location, which is ~/.ssh/id_rsa . We can resolve this issue by setting up a configuration file stored in ~/.ssh/config . For instance, this is a sample configuration file. Host myremotegit HostName User git IdentityFile C:\\keys1\\id_rsa IdentitiesOnly yes Host bitbucket-com
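A fuller sketch of the per-repository key setup, assuming OpenSSH's ssh_config format. The host aliases, hostnames, and key paths below are placeholders, not values from the post:

```
# ~/.ssh/config — one entry per remote, each with its own private key
Host myremotegit
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/keys1/id_rsa
    IdentitiesOnly yes

Host bitbucket-com
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/keys2/id_rsa
    IdentitiesOnly yes
```

The Host alias then replaces the real hostname in the remote URL (for example, something like git remote set-url myremote ssh://myremotegit/path/to/repo.git), and OpenSSH picks the matching IdentityFile; IdentitiesOnly yes stops it from trying other keys loaded in the agent.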

Create A GraphQL Server Using Express and Apollo Server

GraphQL can be described as a query language for APIs. It was initiated by Facebook developers when they tried to build a better data-fetching mechanism for their mobile application. By using GraphQL, frontend developers can request data from the backend server in a specified format, with only the properties they actually need. Creating a server that can handle GraphQL-based requests has become easy nowadays. Express, the de facto framework for building HTTP servers in Node.js, can be integrated with Apollo Server, a popular GraphQL server. The minimal modules we need are express , graphql , and apollo-server-express . For instance, we will set up a project and install the requirements. We utilize ESM syntax for building the sample program. First, we initialize the project and install the dependencies. mkdir express-graphql cd ./express-graphql yarn init -y yarn add express graphql apollo-server-express Because we use ESM syntax, we need to set the "type" field to "module" in package.json .
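A minimal schema-and-resolver sketch for the setup above. The "hello" field is an assumption used for illustration; the resolver is plain JavaScript, so it can be shown and exercised without the server itself, while the apollo-server-express wiring (which needs the three installed modules) is sketched in comments.

```javascript
// Schema definition in SDL and a matching resolver map.
const typeDefs = `
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Hello from GraphQL!',
  },
};

// Wiring it into Express with apollo-server-express (ESM syntax) would
// look roughly like this:
//
//   import express from 'express';
//   import { ApolloServer } from 'apollo-server-express';
//
//   const app = express();
//   const server = new ApolloServer({ typeDefs, resolvers });
//   await server.start();
//   server.applyMiddleware({ app });
//   app.listen(4000);

console.log(resolvers.Query.hello());
```

Because resolvers are ordinary functions, they can be unit-tested directly before the HTTP layer is attached.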

Running Docker Command Inside A Docker Container

But why do we need to do this? There are several occasions that make you want to perform this action, especially if you are working on a continuous delivery procedure. For example: You want to set up a closed environment (like a container) for a software testing process that requires external applications which can be run as containers. You want to build an application in a container and then deploy it on another host using the Docker API only, without the need for shell command execution. There are two common methods to achieve this objective. The first is binding the Unix socket of the running Docker Engine into the container. The second is installing a separate Docker Engine inside the container. For instance, we will run a container based on an image of Docker 20.10 with the following command. docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm docker:20.10 Now, you can run any docker command inside the container that you have just created.
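For the second method, a custom image can bundle its own Docker CLI. This Dockerfile is a hypothetical sketch; the base image tag and package name are assumptions, not taken from the post:

```dockerfile
# Bundle a Docker client inside the image (second method).
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y docker.io

# The daemon socket still comes from the host at run time, e.g.:
#   docker build -t docker-inside .
#   docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm docker-inside
```

Note that with the socket-binding approach, containers started from inside are siblings on the host's daemon, not nested children.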

Utilizing Worker Thread in Node.js

The worker thread module became a stable module in Node.js version 12. This module enables us to run multiple Node.js processes in parallel using threads. In the past, we couldn't do this easily; we probably ended up utilizing the cluster module or spawning child processes. The difference in utilizing threads is that we have shareable resources (memory). The main thread and its child threads can communicate and pass operation results directly using message passing. Child threads are usually used for distributing computation load in an application; the main thread may offload a computationally expensive or asynchronous process to a child thread. For instance, the following code shows how we can create a worker thread for a file-reading process and then send the result to the main thread. // module.js const { Worker, isMainThread, parentPort, workerData } = require('worker_threads'); if (isMainThread) { // if it is accessed as main thread

Setting Up Docker Context

When we want to run a container on a remote Docker Engine host, we can utilize the context feature of Docker. A context allows us to maintain information about several Docker Engine hosts so they can be accessed remotely from our local Docker Engine host. Adding a record is done by running the following command. docker context create yourContextName --docker "host=ssh://" The connection utilizes the SSH protocol, so we need to generate keys for establishing communication with the remote host. After storing the public key value on the remote host, we can spawn a new SSH agent in the current session on our host and add the private key into the agent. eval $(ssh-agent -s) cat /path/to/private/key | ssh-add - Before we can access the remote Docker API, we need to add the remote host's keys to our ~/.ssh/known_hosts file by making an SSH connection for the first time or using ssh-keyscan . Now, we can access the remote Docker API by specifying the context on the local host.
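The last step can be sketched as a short command session; "yourContextName" is the placeholder name from the post, and the commands assume a recent Docker CLI:

```shell
docker context ls                      # list known contexts
docker --context yourContextName ps    # run a single command against the remote engine
docker context use yourContextName     # or switch the default context...
docker ps                              # ...so every command now targets the remote engine
```

Switching back is just docker context use default.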

Levi vs Beast Titan

... or is it Beast Levi vs Titan?

Enabling Imagick to Read or Manipulate PDF File

Imagick is one of the popular tools for manipulating image files. Some popular languages such as PHP and Node.js provide libraries for manipulating images based on Imagick. One of the common use cases is generating a thumbnail from an image or PDF file. In PHP, we can install the PHP Imagick module by running the following command. apt install php-imagick Then, we can verify the installation by running this command. php -m | grep imagick For example, to generate a thumbnail image for a PDF file in PHP, we can use the following script. <?php $im = new Imagick(); $im->setResolution(50, 50); // set the reading resolution before reading the file $im->readImage('file.pdf[0]'); // read the first page of the PDF file (index 0) //$im = $im->flattenImages(); // @deprecated // handle transparency problem $im = $im->mergeImageLayers( Imagick::LAYERMETHOD_FLATTEN ); $im->setImageFormat('png'); $im->write
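On many Linux distributions, ImageMagick ships with a security policy that blocks the PDF coder, so readImage on a PDF fails until the policy is relaxed. This fragment is a sketch; the policy file path varies by ImageMagick version and distribution (for example /etc/ImageMagick-6/policy.xml), and PDF handling also requires Ghostscript to be installed:

```xml
<!-- Default entry in policy.xml that blocks PDF reading: -->
<policy domain="coder" rights="none" pattern="PDF" />

<!-- Relaxed entry that allows Imagick to read and write PDF files: -->
<policy domain="coder" rights="read|write" pattern="PDF" />
```

Relax the policy only as far as needed; the restriction exists because of past Ghostscript vulnerabilities.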

Securing Redis to Be Accessed From All Interfaces

Redis can bind to all interfaces with the bind * -::* configuration. But Redis also enables protected-mode by default in its configuration file, which makes bind * -::* ineffective: protected-mode requires us both to explicitly state the binding interfaces and to define user authentication. The unsecured way is to set protected-mode no in the configuration. This makes our Redis server accessible from any interface without authentication. It may be fine if we deploy our Redis server in a closed environment, such as a containerized one that does not expose or publish the Redis service port, so the service is only accessible from other services in the container's network. The recommended way is to keep protected-mode yes in the configuration. Then, we need to add a new user authentication configuration and limit access for the default user. The default user is the user applied when a client connects without specifying a username.
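The recommended setup can be sketched as a redis.conf fragment using Redis ACL rules (Redis 6+); the username and password below are placeholders:

```
# Keep protected-mode on while binding to all interfaces
protected-mode yes
bind * -::*

# Disable the passwordless default user...
user default off

# ...and add an authenticated user with full access
# (>password sets the password; ~* all keys; &* all channels; +@all all commands)
user appuser on >s3cret ~* &* +@all
```

In practice the ACL rule should be narrowed from ~* and +@all to only the key patterns and command categories the application needs.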