
Posts

Showing posts from 2022

Configuring Swap Memory on Ubuntu Using Ansible

If we maintain a Linux machine with low memory capacity but need to run an application with high memory consumption, enabling swap memory is an option. Ansible can be utilized as a helper tool to automate the creation of swap memory. A swap file can be allocated in the machine's available storage and then assigned as swap memory. Firstly, we should prepare the inventory file. The following snippet is an example; you must provide your own configuration.

[server]
192.168.1.2

[server:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa

Secondly, we need to prepare the task file that contains not only the tasks but also some variables and connection information. For instance, we set /swapfile as the name of our swap file. We also set the swap memory size to 2GB and the swappiness level to 60.

- hosts: server
  become: true
  vars:
    swap_vars:
      size: 2G
      swappiness: 60

For simplicity, we only check the exi…

Hiko Talks With Kenshin

It was when Kenshin was about to leave to wander.

HTTP Compression

When a server delivers messages to a client over HTTP, data compression may be performed to save bandwidth. It makes the data smaller at the cost of CPU processing. Node.js has a built-in library to handle compression, as I mentioned in another post. For instance, the following code shows an HTTP server built on Fastify that utilizes the zlib module to compress the data returned by the server.

import Fastify, { FastifyInstance } from 'fastify';
import { join } from 'path';
import { createReadStream } from 'fs';
import zlib from 'zlib';

const PORT = 3000;

const fastify: FastifyInstance = Fastify({ logger: true });

fastify.get('/', (request, reply) => {
  // get request header
  const acceptEncoding = request.headers['accept-encoding'] || '';
  const rawStream = createReadStream(join(process.cwd(), 'text.txt'));
  reply.header('Content-Type', 'text/plain');
  …
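The excerpt cuts off before the compression step. As a rough sketch of how such a handler can finish, assuming gzip negotiation via the Accept-Encoding header and Fastify v4's stream support (standard zlib piping, not necessarily the post's exact code):

import Fastify from 'fastify';
import { createReadStream } from 'fs';
import { join } from 'path';
import zlib from 'zlib';

const fastify = Fastify({ logger: true });

fastify.get('/', (request, reply) => {
  const acceptEncoding = String(request.headers['accept-encoding'] || '');
  const rawStream = createReadStream(join(process.cwd(), 'text.txt'));
  reply.header('Content-Type', 'text/plain');
  if (acceptEncoding.includes('gzip')) {
    // tell the client which encoding was applied, then pipe the
    // file stream through a gzip transform before sending it
    reply.header('Content-Encoding', 'gzip');
    return reply.send(rawStream.pipe(zlib.createGzip()));
  }
  // client did not advertise gzip support: send the raw stream
  return reply.send(rawStream);
});

fastify.listen({ port: 3000 });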

Configuring Network Interface on Red Hat Server

For this instance, we use Red Hat Enterprise Linux (RHEL) Server 7.9. Unlike recent Ubuntu versions, which commonly use YAML-based configuration and the netplan tool for setting up IP addresses of network interfaces, RHEL network interfaces can be configured using the configuration scripts located in /etc/sysconfig/network-scripts. Several interface configurations and scripts are automatically generated by RHEL during the installation process. The configuration file is named after its device name, such as ifcfg-eth0 for the eth0 device. If we run RHEL in a VMware-based virtual machine, the device name might show up as something like ens35. The following configuration is an example of a network configuration with DHCP enabled.

#/etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
…

Deploying Network Infrastructures in AWS Using CloudFormation

AWS is undoubtedly the most complete cloud services provider. Even though its services are not always the best compared to similar offerings, it provides a wide variety of tools to help us build almost any kind of internet-based service. When we create an AWS account, we immediately gain the ability to create a complex network within a Virtual Private Cloud (VPC). We can build a VPC in a region spanning multiple data centers or availability zones. AWS allows us to configure and deploy our infrastructure using an Infrastructure as Code (IaC) service called CloudFormation. For instance, we will deploy a VPC with several network components in it: an internet gateway, subnets, a NAT gateway, and routing tables. The VPC will live in a single availability zone and host two subnets, one private and one public. Firstly, we define the variables that will be referred to in the configuration within the Parameters block. It contains only the…

Shikaku Talks To Shikamaru

It was after Asuma lost the battle with Akatsuki.

Phases of Node.js Event Loop

We know that a Node.js application runs in a single-threaded fashion. Yet it can handle multiple asynchronous operations in the background and behave as if it were a multi-threaded application. At the base level, Node.js is built on C++, which does allow multiple threads to exist. But that is not the actual basis of its asynchrony; Node.js relies on the libuv library, which lets it interact with the operating system and use the available resources efficiently. The library enables asynchronous I/O operations such as file reading, database querying, and data transfer over the network, and it triggers the registered callback for each completed I/O operation. Node.js manages all the callbacks in a mechanism called the "event loop". An event loop is a loop of sequential processes grouped into several phases. It handles callbacks of asynchronous I/O operations and asynchronous calls initiated by objects or functions in the main appl…
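As an illustration of the phase ordering (my own sketch, not code from the post), the snippet below contrasts the timers phase, the check phase (setImmediate), and process.nextTick, assuming a readable file such as package.json exists in the working directory:

import { readFile } from 'fs';

// nextTick callbacks are drained before the loop moves between phases
process.nextTick(() => console.log('nextTick'));

// at the top level, the order of these two is not guaranteed;
// it depends on how quickly the first loop iteration starts
setTimeout(() => console.log('setTimeout 0'), 0);
setImmediate(() => console.log('setImmediate'));

readFile('package.json', () => {
  // inside an I/O callback we are in the poll phase, and the check
  // phase comes right after it, so setImmediate always wins here
  setImmediate(() => console.log('setImmediate (after I/O)'));
  setTimeout(() => console.log('setTimeout 0 (after I/O)'), 0);
});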

Configure Gitlab SMTP Setting

Gitlab CE or EE ships with the capability to send messages through an SMTP service as the basic means of delivering notifications or updates to users. The configuration parameters are available in /etc/gitlab/gitlab.rb. Each SMTP service provider has a different configuration, so the Gitlab configuration parameters should be adjusted accordingly. Some examples are provided by Gitlab here. This is an example if you use the Zoho service.

gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.zoho.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_authentication'] = "plain"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_user_name'] = "gitlab@mydomain.com"
gitlab_rails['smtp_password'] = "mypassword"
gitlab_rails['smtp_domain'] = "smtp.zoho.com"

This is another example of using Amazon SES w…

Measuring Correlation

If we want to know the correlation between a boxer winning a match and wearing red pants, we can determine it mathematically based on past records of those conditions. There are four possible combinations of the conditions.

- The boxer loses and he doesn't wear red pants (00)
- The boxer loses and he wears red pants (01)
- The boxer wins and he doesn't wear red pants (10)
- The boxer wins and he wears red pants (11)

For example, we have the following past records.

- Condition "00" occurred 23 times
- Condition "01" occurred 10 times
- Condition "10" occurred 45 times
- Condition "11" occurred 9 times

Then, we can use the phi coefficient formula. The result of this formula is between -1 and 1. If the result is close to 0, the conditions have no correlation. If it is close to 1, the conditions are strongly correlated. Meanwhile, if it is close to -1, they are strongly correlated but in the opposite direction. The formula is as follows.

phi = ((n11 x n00) - (n01 x n10)) / sqrt(n1X x n0X x nX1 x nX0)

n1X represents the condition wh…
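A quick check of the formula against the counts above (my own sketch; the marginal totals follow the post's nX notation, where n1X sums every record whose first condition is 1):

const n00 = 23, n01 = 10, n10 = 45, n11 = 9;

// marginal totals
const n1X = n10 + n11; // 54 matches won
const n0X = n00 + n01; // 33 matches lost
const nX1 = n01 + n11; // 19 matches with red pants
const nX0 = n00 + n10; // 68 matches without red pants

const phi = (n11 * n00 - n01 * n10) / Math.sqrt(n1X * n0X * nX1 * nX0);
console.log(phi.toFixed(2)); // -0.16: a weak negative correlation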

Elements of UI Design

There are some essential elements in designing a user interface. I found that the following items are necessary to bring up a certain feeling, thought, or perspective for the user of our product.

Color. Color can shape a mood for the user. A combination of multiple colors should be carefully picked to create a comfortable look. One of the tools available online for picking colors is here.

Positive and Negative Space. Positive space is the element containing the main content or information, while negative space is the distance between main elements or the empty space around them. Each main element should have some distance from the others to nicely guide the user's focus from one piece of information to another.

Typography. Typography includes selecting appropriate, good-looking fonts and aligning text content on the page.

Microcontent. We need to provide some short texts or content on a page to make it easy for the user to skim the information on th…

First Appearance of Titan Eren

When Mikasa was trying to pull herself together, a titan appeared and started killing the other titans.

Downloading A Large File From Google Drive

I just found a helpful answer on Quora about downloading a large file from Google Drive. It utilizes the Google Drive API to download the file. The steps are as follows.

1. Get the file ID from the shareable link URL. For example, if the URL has a format similar to https://drive.google.com/file/d/THE_FILE_ID/view?usp=sharing, then the file ID is THE_FILE_ID.
2. Go to Google OAuth 2.0 Playground. On this page, we can get an access token for utilizing the Google API to download our file. Select the Drive API v3 and authorize it. Then, keep the access token that is available on the "Exchange authorization code for tokens" tab.
3. Last, we can utilize curl to download the file using a terminal.

curl -H "Authorization: Bearer THE_ACCESS_TOKEN" https://www.googleapis.com/drive/v3/files/THE_FILE_ID?alt=media -o THE_OUTPUT_FILE.ext
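The same request can be made from a script. A sketch of a TypeScript equivalent of the curl step, assuming Node 18+ (global fetch) and an ES module for top-level await; the placeholders are the ones from the steps above:

import { createWriteStream } from 'fs';
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';

const fileId = 'THE_FILE_ID';     // from step 1
const token = 'THE_ACCESS_TOKEN'; // from step 2

const res = await fetch(
  `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media`,
  { headers: { Authorization: `Bearer ${token}` } },
);

// stream the body to disk so the large file is never fully buffered in memory
await pipeline(Readable.fromWeb(res.body as any), createWriteStream('THE_OUTPUT_FILE.ext'));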

Utilizing Docker Secret

It often happens that we need to provide secrets like passwords, keys, or other private things to our Docker containers. If we use the docker-compose tool to generate our containers, we can simply put the secrets as environment variables in the docker-compose.yaml file. But if we share our configuration with others, they will see those secrets too. To overcome that issue, we can utilize a feature provided by Docker itself: the Docker secret, or simply secret. For instance, we want to build a container for the PostgreSQL database. The PostgreSQL image allows us to set a custom database password by providing a value for the POSTGRES_PASSWORD or POSTGRES_PASSWORD_FILE variable.

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=$3cureP4ssword

Or, we can utilize Docker bind-mounting to store a file and instruct Docker to read the secret information from the mounted file.

services:
  …
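As a side note on how an application consumes such a file-based secret (my own sketch, not from the post): Docker mounts each secret as a file under /run/secrets/<name>, and the _FILE convention simply points the program at that path. The db_password name below is a hypothetical example.

import { readFileSync } from 'fs';

// prefer the _FILE variant if present, mirroring what the postgres image does
const passwordFile = process.env.DB_PASSWORD_FILE ?? '/run/secrets/db_password';
const password = readFileSync(passwordFile, 'utf8').trim();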

Principles of UX Design

If we want to elaborate a user experience (UX) design for our product, we need to understand some principles of good UX design. Our design should be based on specific reasons and have certain intentions for our users and for ourselves as product owners or as a business. These are several principles that should be considered when producing the design.

Usefulness. We build a product with the desire to provide solutions for specific needs of the users. So, it is important to make sure that every design element helps the users fully gain the benefit of our product and solve their problems.

Usability. Visions and good intentions are not enough to build a great product if we don't provide clear ways for the users to get the most benefit out of the product. Usability is a focus on making our product easy to use.

Desirability. Why are some products or services instantly abandoned by their users…

When Mikasa Got So Emotional But Had To Fight

She just tried to suppress her feelings.

Generate API Documentation Using Swagger Module in NestJS

Swagger provides us a standard for generating API documentation based on the OpenAPI specification. If we use NestJS to build our API providers, we can utilize the @nestjs/swagger module to generate the documentation automatically at build time. This module also requires the swagger-ui-express module if we use Express as the NestJS base HTTP handler.

Set Swagger configuration

First, we need to define the Swagger options and instantiate the documentation provider in the main.ts file.

import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';

// sample application instance
const app = await NestFactory.create(AppModule);

// setup Swagger options
const options = new DocumentBuilder()
  .setTitle('Coffee')
  .setVersion('1.0')
  .setDescription('Learn NestJS with coffee')
  .build();

// build the document
const document = SwaggerModule.createDocument(app, options);

// provide an endpoint…
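The excerpt stops right at the endpoint step. The usual completion looks like the sketch below; the 'api' path is an assumption for illustration, not taken from the post.

// serve the generated document with the Swagger UI, e.g. at /api
SwaggerModule.setup('api', app, document);

await app.listen(3000);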

Building Reducer Using Redux Toolkit

Redux Toolkit is one of the libraries that can help us manage application state that depends on Redux. It allows us to create reducers within our application and separate each reducer from the others, based on whatever criteria we desire, in a specific kind of module. There are some approaches we can utilize to build reducers in Redux Toolkit.

Basic Shape

const slice = createSlice({
  // ...
  reducers: {
    // mutate a property of the draft state; reassigning the
    // "state" parameter itself would have no effect
    increment: (state) => { state.value += 1; }
  }
});

In the code above, it seems that we mutate the state, but Redux Toolkit actually utilizes Immer behind the scenes, so the mutation is translated into an immutable operation. Then, unlike legacy libraries that require us to manually define the action to be dispatched and handled by a specific reducer, Redux Toolkit generates the action creators for us.

const { increment } = slice.actions; // "increment" is the action creator

Then, we can dispatch the action inside our compone…
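A self-contained sketch of the whole flow (slice, store, dispatch); the counter naming and initial state are my assumptions, not the post's:

import { createSlice, configureStore } from '@reduxjs/toolkit';

const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    increment: (state) => { state.value += 1; },
  },
});

const store = configureStore({ reducer: { counter: counterSlice.reducer } });

// dispatch using the generated action creator
store.dispatch(counterSlice.actions.increment());
console.log(store.getState().counter.value); // 1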

Double Invoking In React Application with Strict Mode Enabled

When we use create-react-app to generate our application source code, we are provided with base code that enables React.StrictMode by default. This wrapper component doesn't render an HTML element, but it performs useful checks to verify that our application is safe for production. It checks for legacy or deprecated React functions or APIs, unexpected side effects, and unsafe lifecycles. We can read the details about those checks on its documentation page. To help us spot side effects, React double-invokes several functions in our application, such as render, constructor, functions passed to useMemo or useReducer, and so on. By double-invoking such functions, we can detect whether unexpected results show up. But this process happens only in development mode. When we ship our application to a production environment with Strict Mode enabled, the double invoking will never happen. This is why, when we render a component that contains useEffect…
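A small sketch (mine, not the post's code) that makes the behavior visible: in development under StrictMode, React 18 mounts the component, simulates an unmount, and mounts it again, so the effect fires twice with a cleanup in between.

import { useEffect } from 'react';

export function Probe() {
  useEffect(() => {
    console.log('effect ran'); // logged twice in development under StrictMode
    // the cleanup runs between the two invocations during the simulated remount
    return () => console.log('cleanup ran');
  }, []);
  return <p>check the console</p>;
}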

Tokyo - Yui

 

Limiting Bitrate and Network Throttling

We may limit incoming or outgoing data rates to/from our infrastructure to maintain the stability of our service for customers. Bitrate limitation is the act of limiting the number of bits that can pass through a transmission channel in a period of time. Network throttling is an intentional action to slow down the transmission speed in a network channel. It covers not only limiting bitrate but also limiting the allowed number of requests in a period of time. There are several tools and techniques that can be used to apply bitrate limitation and network throttling.

Wondershaper. It is an easy-to-use tool for Linux and is already in the package repository. It can limit the bit rate achievable by network interfaces in the system. We can install it by running the following command.

apt install wondershaper

We can choose an interface and apply a limit to its download rate, its upload rate, or both.

wondershaper <interface-name> <download-rate-in-bps> <upload-…

Handle File Upload with Express and Multer

Express has undoubtedly become a popular framework for building web applications based on Node.js. It ships with support for handling file uploads using middleware that takes the user request, parses its contents for files, and provides the next handler with the file information. The following snippet shows a basic example of handling file uploads in Express using Multer.

const multer = require('multer');

const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    // store files to the "uploads/" directory
    cb(null, 'uploads/');
  },
});

const upload = multer({ storage });

// initiate an upload handler that can accept multiple fields and multiple files
const uploadHandler = upload.fields([
  { name: 'galleryImages', maxCount: 10 },
  { name: 'userFiles', maxCount: 2 },
]);

app.post('/upload', uploadHandler, (req, res) => {
  res.json(req.files);
});

The sample above can be used if we just…
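For completeness, a self-contained variant; the app wiring and the single-file endpoint below are my assumptions for illustration, not the post's code:

import express from 'express';
import multer from 'multer';

const app = express();

// shorthand: let Multer manage disk storage in "uploads/"
const upload = multer({ dest: 'uploads/' });

// single-file variant: the parsed file arrives on req.file instead of req.files
app.post('/avatar', upload.single('avatar'), (req, res) => {
  res.json(req.file);
});

app.listen(3000);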

Object Storage Comparison of DigitalOcean, Linode, and UpCloud

In this post, I want to show the results of a test of the object storage services provided by DigitalOcean, Linode, and UpCloud. The results show the average durations required to download/upload files from/to their cloud infrastructures. But before we go into the test details, let us review the user-facing features provided by each cloud provider. DigitalOcean has the most advanced features of the three. Through its user dashboard, we can set a custom domain for the CDN of the object storage, customize permissions per object, and set custom CORS headers. UpCloud has a more straightforward interface, but with a unique concept of creating separate bucket domains (directories) in a single object storage subscription. Linode has the simplest interface, but it provides CLI tools for managing the buckets and stored objects. Now, let us go back to the main topic. For the test, I deployed an AWS EC2 server based in Tokyo as the tester node. Then, I set up separate object storages on DigitalOcean,…

Zoro vs Mihawk

An old fight and the first encounter with Mihawk. "Scars on the back are the swordsman's shame."

Managing MongoDB Records Using NestJS and Mongoose

NestJS is a framework for developing Node.js-based applications. It provides an additional abstraction layer on top of Express or other HTTP handlers and gives developers a stable foundation for building applications with structured procedures. Meanwhile, Mongoose is a Node.js-based schema modeling helper for MongoDB. There are several main steps to perform to allow our program to handle MongoDB records. First, we need to add the dependencies, which are @nestjs/mongoose, mongoose, and @types/mongoose. Then, we need to define the connection configuration in the application module decorator.

import { MongooseModule } from '@nestjs/mongoose';

@Module({
  imports: [
    MongooseModule.forRoot('mongodb://localhost:27017/mydb'),
  ],
  controllers: [AppController],
  providers: [AppService],
})

Next, we create the schema definition using helpers provided by NestJS and Mongoose. The following snippet is an example with a declaration of an index setting and an o…
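The excerpt cuts off before the schema snippet, but a definition in that style usually looks like the sketch below; the Cat class and its fields are hypothetical stand-ins, not the post's actual model:

import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';

@Schema()
export class Cat {
  // declare an index directly on the field definition
  @Prop({ required: true, index: true })
  name: string;

  @Prop()
  age: number;
}

export const CatSchema = SchemaFactory.createForClass(Cat);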

Backup and Restore PostgreSQL Database

PostgreSQL ships with a lot of tools that ease database management. Actions we typically perform in managing databases are backing up a database and restoring it to another database server. In PostgreSQL, we can utilize pg_dump to back up the data from a database and psql or pg_restore to restore the data into a new database.

Generating A Backup File

There are several available formats for the exported data: plain, directory, compressed, and custom. The plain format uses plain SQL syntax to export the data. To export the tables separately, we can utilize the directory format. The custom format utilizes a built-in compression mechanism and results in an optimized, binary-formatted file; it is suitable for a large database. If we use the custom format, we can only restore it using pg_restore, but we gain the ability to selectively choose the desired tables to restore. The following command is used to generate a plain formatted file with th…

Resize VirtualBox LVM Storage

VirtualBox is a free solution for hosting virtual machines on your computer. It provides configuration options for many components of our machine, such as memory, storage, networking, etc. It also allows us to resize our machine's storage after its operating system is installed. LVM is a volume manager on the Linux platform that helps us allocate partitions in the system and configure the storage size that will be utilized for a specific volume group. There are some points to notice when we work with LVM on VirtualBox to resize our storage. These are the steps that need to be performed.

1. Stop your machine before resizing the storage.

2. Set the new storage size using the GUI by selecting "File > Virtual Media Manager > Properties", then find the name of the virtual hard disk to be resized. OR, run the CLI program located in "Program Files\Oracle\VirtualBox\VBoxManage.exe":

cd "/c/Program Files/Oracle/VirtualBox"
./VBoxManage.exe list…

Best Action & Thriller Movies

Here are some recent action and thriller movies that I think are highly recommended to watch.

Nobody (2021). A guy who seems like a veteran special agent has to go back to his old job: getting rid of bandits. In the beginning, we see an ordinary man running his boring routines with his family, until a couple of robbers break into his house to take some valuables but accidentally steal his daughter's kitty bracelet. Yeah, just because of that cheap bracelet, and after getting into trouble with drunk guys on the way back home, he has to face much bigger trouble. This movie is written by one of John Wick's writers; maybe that's why the main guy seems so badass.

Don't Breathe 1 (2016) & 2 (2021). A war veteran who is blind and lives alone is forced to become a horrific man. The intruders who break into his house change everything. In 2016, the intruders want his treasure but end up finding something creepy in the basement. In 2021, he has to fight some bad guys who want to…