

Showing posts from 2023

OWASP Top 10 Security Threats

The Open Worldwide Application Security Project (OWASP) is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the fields of IoT, system software, and web application security (Wikipedia). In this post, I want to share the top 10 security threats published by OWASP. The list is updated regularly; the following is based on the 2021 publication. Broken Access Control. This issue can be caused by many things, such as violation of the least-privilege principle, unprotected API endpoints, use of unique identifiers without permission checking, and so on. There are several threats related to broken access control. Insecure Direct Object Reference. It happens when an application provides direct access to objects based on user-supplied input. For example, after submitting a form, the endpoint returns the ID of the submitted object, and with that ID a user can access the
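To make the Insecure Direct Object Reference idea concrete, here is a minimal sketch; the `documents` store and function names are invented for illustration and are not from the post:

```javascript
// Hypothetical in-memory store, for illustration only.
const documents = {
  "42": { ownerId: "alice", body: "alice's data" },
};

// Vulnerable: trusts the user-supplied ID with no ownership check (IDOR).
function getDocumentInsecure(requestedId) {
  return documents[requestedId];
}

// Fixed: verify that the authenticated user actually owns the object.
function getDocumentSecure(currentUserId, requestedId) {
  const doc = documents[requestedId];
  if (!doc || doc.ownerId !== currentUserId) {
    return null; // in an HTTP handler, respond with 403 or 404 instead
  }
  return doc;
}

console.log(getDocumentInsecure("42")); // anyone with the ID gets the object
console.log(getDocumentSecure("bob", "42")); // null — not the owner
```

The fix is the permission check, not hiding the ID: even random identifiers must still be authorized per request.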

End of Garden of Words

 It is rain

Tools To Help You Create API Documentation

Nowadays, many paid and free tools can help us make beautiful API documentation for our software projects. These are a few of them, with the advantages of each. RapiDoc. This open-source tool can generate API documentation based on OpenAPI specifications. So, if you already use the Swagger tool to generate your documentation, you can reuse the configuration and generate a new documentation page instantly. To use RapiDoc, we can create an HTML file that includes the JavaScript library provided by RapiDoc, or we can include it in a JavaScript framework like React or Vue. It is very customisable: we can add custom HTML or Markdown to the generated documentation, apply a dark theme or custom style, create custom methods, and more. It also supports an in-page console for trying an API request. ReadMe. It is a service that can transform static API documentation into interactive developer hubs, which means it can mon
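As a sketch of the HTML-embedding approach, RapiDoc ships as a web component; the spec URL below is a placeholder, and `theme` and `render-style` are optional attributes:

```html
<!doctype html>
<html>
  <head>
    <!-- RapiDoc web component, loaded from a CDN -->
    <script type="module" src="https://unpkg.com/rapidoc/dist/rapidoc-min.js"></script>
  </head>
  <body>
    <!-- spec-url is a placeholder; point it at your OpenAPI document -->
    <rapi-doc
      spec-url="https://example.com/openapi.json"
      theme="dark"
      render-style="read">
    </rapi-doc>
  </body>
</html>
```

Opening this file in a browser is enough to get a full documentation page, including the in-page try-it console.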

What Is The Importance of Software Architect

Someone said, "Architecture is about anything important, whatever it is." But what is software architecture, really? Four components construct the foundation of software architecture. Structure. It is more like an architectural style, such as monolithic, microservices, layered, etc. So, if an architect talks about microservices, for example, they are talking about the structure of the software being built. Characteristics. They define the success criteria of the software, such as reliability, scalability, security, availability, etc. Decisions. They define the rules for how the software is constructed. For example, a decision in a software development project states that each service in a microservice system can only have full-access permission to its own database. When a certain element cannot fully follow the rule, it is called a variance in the system. Principles. The difference between a decision and a principle is that a principle is more about providing guidelines for the dev

Why DevSecOps Is Important

DevSecOps stands for development, security, and operations. From its name, we can guess it is like DevOps with the integration of security tools. More formally, it is an approach to designing and automating the integration of security at every phase of the software development lifecycle. The term became more popular when many cloud providers and code-management tools started to use it across their platforms. As it integrates security tools into every phase of the SDLC and automates the process, this approach can help developers catch vulnerabilities early. Besides, it can help us ensure that our project aligns with regulatory compliance from the beginning. This can lead to cost-effective software delivery by reducing time to market, and it can help organizations build a security-aware culture. Security has become a concern for more companies nowadays as cyber incidents increase. Traditional DevOps may fall short in a few aspects. First, in traditio

Kenshin's First Scar

Wondering who the first person to leave a scar on Kenshin's face was? It is unexpected.

Create Effective Documentation for Software Project

As your software project grows, it may involve more contributors. If you build a platform that publishes APIs that can be consumed by the public, you may expect more users to use your platform. If you work on an internal project that involves many parties from several vendors, you may expect everyone to understand your project and collaborate well. In any scenario, effective documentation can help you achieve what you want. We should consider a user-oriented design for our documentation, which considers who will use our product and what goal our users pursue by reading the documentation. Sometimes, it can help us develop the project itself by trying to see the project from a user's perspective. These are the common types of audiences and the information they need. Evaluators, who examine whether the service or tool is useful. They need a high-level overview, a list of features, or expected benefits. New users, who are just learning the usage. Th

Terraform Cheat Sheet

Terraform has become more mature and can help us in many scenarios when provisioning infrastructure. These are a few scenarios that might be quite common in our day-to-day jobs. Take values from another state as a data source. This might be used when we already maintain a base state, and a few child configurations need to access certain values from it. First, define the data source with attributes for accessing the other state:

```hcl
data "terraform_remote_state" "SOME_NAME" {
  backend = "local"
  config = {
    path = "/path/to/another/terraform.tfstate"
  }
}
```

Then, we can pass the value into any resource. For example, we expose a value as another output value:

```hcl
output "public_ip" {
  value = data.terraform_remote_state.SOME_NAME.outputs.public_ip
}
```

Redeploy a resource. This might be useful when we find an error in a resource that requires us to redeploy it. terraform apply -replace="

Shape of My Heart

He deals the cards as a meditation
And those he plays never suspect
He doesn't play for the money he wins
He doesn't play for respect
He deals the cards to find the answer
The sacred geometry of chance
The hidden law of a probable outcome
The numbers lead a dance

I know that the spades are the swords of a soldier
I know that the clubs are weapons of war
I know that diamonds mean money for this art
But that's not the shape of my heart

He may play the jack of diamonds
He may lay the queen of spades
He may conceal a king in his hand
While the memory of it fades

I know that the spades are the swords of a soldier
I know that the clubs are weapons of war
I know that diamonds mean money for this art
But that's not the shape of my heart
That's not the shape
The shape of my heart

If I told her that I loved you
You'd maybe think there's something wrong
I'm not a man of too many faces
The mask I wear is one
But those who speak know nothing
And find out to t

Managing S3-Compatible Storage Using CLI Tool

Most S3-compatible storage providers, like UpCloud and DigitalOcean, provide a dashboard for managing our storage. But we usually face browser or web-related issues in certain conditions, for example when we try to upload a large number of files. There are CLI tools out there that we can use to manage our storage: uploading files, migrating files to another bucket, etc. One of the popular CLI tools is S3cmd. For instance, I use the object storage service provided by UpCloud. Over the past year, I migrated many of my services from AWS and DigitalOcean to UpCloud because of its cost and performance. I found that UpCloud actively develops new features and services and improves its infrastructure performance. To install S3cmd, we need to have Python and pip on our machine. After that, we can run the following command to install S3cmd: pip install s3cmd. Then, we can configure the tool by running the following command. Four fields are important in our ca
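As a sketch of typical S3cmd usage after installation (bucket names and file paths below are placeholders; the configure step asks for your provider's access key, secret key, and endpoint):

```shell
# Interactive setup: writes credentials and endpoint to ~/.s3cfg
s3cmd --configure

# List buckets, then objects inside one bucket
s3cmd ls
s3cmd ls s3://my-bucket

# Upload and download a file
s3cmd put ./backup.tar.gz s3://my-bucket/backups/
s3cmd get s3://my-bucket/backups/backup.tar.gz

# Mirror a local directory into the bucket
s3cmd sync ./site/ s3://my-bucket/site/
```

For non-AWS providers like UpCloud or DigitalOcean, the important part of the configuration is pointing the endpoint fields at the provider's S3-compatible host instead of amazonaws.com.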

Invisible Closure Scope in Javascript

When we maintain variables in our JavaScript code, we should already know about scope, which determines the visibility of variables. There are three types of scope: block, function, and global. A variable defined inside a function is not visible (accessible) from outside the function, but it is visible to any blocks or functions inside that function. When a function is created, it has access to variables in the parent scope as well as its own scope; this is known as closure scope. For example, if we create a function (child) inside another function (parent), at creation time the child function will also have access to variables declared in its parent. Another way to think of closure is that every function in JavaScript has a hidden property called "Scope", which contains a reference to the environment where the function was created. The environment consists of the local variables, parameters, and arguments that were available to the
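A minimal sketch of closure scope (the names are illustrative): each call to the outer function creates a fresh scope, and the returned inner function keeps that scope alive through its hidden reference to the creation environment:

```javascript
function makeCounter() {
  let count = 0; // lives in makeCounter's scope
  return function increment() {
    // `count` stays reachable through the closure even after makeCounter returns
    count += 1;
    return count;
  };
}

const counterA = makeCounter();
const counterB = makeCounter();
console.log(counterA()); // 1
console.log(counterA()); // 2
console.log(counterB()); // 1 — a separate closure with its own `count`
```

Note that `count` is invisible from outside: nothing but the returned function can read or modify it, which is why closures are a common way to emulate private state.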

Threat Vectors in Cybersecurity

In cybersecurity, a threat is the potential occurrence of an undesirable event that can eventually damage or disrupt the operational and functional activities of a company or organization. Some examples are an attacker stealing sensitive data, infecting a system with malware, and tampering with data. To realize their intentions, threats need vectors. A threat vector is a medium through which an attacker gains access to a system by exploiting identified vulnerabilities. Some of the most common threat vectors used by adversaries are as follows. Direct/physical access: by having direct access to our computing devices, the attacker can perform many malicious activities, like installing malicious programs, copying a large amount of data, modifying the device configuration, and so on. Protection: we should implement strict access control and restrictions. Removable media: devices like USB flash drives, smartphones, or IoT devices may contain malicious programs

Utilise GraphQL and Apollo Client for Maintaining React State

One library that is quite popular for allowing our application to interact with a GraphQL server is Apollo Client (@apollo/client). It supports several popular client libraries, including React. It also provides cache-management functionality to improve the performance of the application when interacting with GraphQL. Rather than integrating another library to manage our application state, we can leverage what Apollo Client already has to maintain the state. The solution is achieved by creating a customised query that loads its result from a local variable or storage. Then, the query is executed like other queries, using useQuery(). The steps are as follows. Create a local query that will read data only from the cache. Create a cache that defines a procedure for the query to read data from local values. Call the query anytime we want to read the state value. For instance, we want to read the information of the current user that has successfully logged in
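The steps above can be sketched with Apollo Client 3's reactive variables and a local-only field. This is one possible wiring, not necessarily the post's exact code; the `currentUser` field name is illustrative:

```javascript
import { ApolloClient, InMemoryCache, makeVar, gql } from "@apollo/client";

// Reactive variable holding the local state.
export const currentUserVar = makeVar(null);

// Teach the cache how to resolve the local-only field.
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        currentUser: {
          read() {
            return currentUserVar();
          },
        },
      },
    },
  },
});

export const client = new ApolloClient({ cache });

// The @client directive keeps this query from ever hitting the server.
export const GET_CURRENT_USER = gql`
  query GetCurrentUser {
    currentUser @client
  }
`;

// In a component: const { data } = useQuery(GET_CURRENT_USER);
// Update the state from anywhere: currentUserVar({ name: "someone" });
```

Because the variable is reactive, any component rendering the query re-renders automatically when `currentUserVar` is updated, which is exactly the behaviour a dedicated state library would provide.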

Deploying Infrastructures Using Terraform on UpCloud

Terraform is a tool that helps us deploy infrastructure on any cloud provider, such as AWS, GCP, DigitalOcean, and many more. Unlike Amazon CloudFormation, which is specific to AWS, Terraform supports the many cloud providers found in Terraform's registry. It uses a domain-specific language built specifically for provisioning and configuring infrastructure, named HCL (HashiCorp Configuration Language). Meanwhile, UpCloud is an alternative cloud provider for SMEs. It targets quite a similar segment to DigitalOcean and Linode. It provides a variety of popular cloud solutions, such as a managed Redis database, S3-compatible storage, private networks, load balancers, and so on. Even though its cost is a little higher than DigitalOcean and others, it provides quite complete features on each service, like the load-balancer features that we will use in this post. Moreover, it actively publishes new features, like the managed OpenSearch database published rece

The Truth About Reiner and Bertolt

"As a warrior, no road left but the one that leads to the end."

Communicate Through RabbitMQ Using NodeJS and Fastify

If we have two or more services that need to talk to each other, but the communication is allowed to be asynchronous, we can implement a queue system using RabbitMQ. The RabbitMQ server will maintain all queues and the connections to all services connected to it. This post will utilize Fastify as a NodeJS framework to build our program. This framework is similar to Express but implements some unique features, like a plugin concept and improved request-response handling. First, we need to create two plugins: one for sending a message, another for consuming the sent message. At first, we will make it using a normal queue. The mechanism of a queue is like a queue in the real world: when there are five people in a queue and three staff members to handle it, each person is served by only one staff member; no other staff member needs to handle a person who has already been served, and no person is handled repeatedly. For example
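Outside Fastify's plugin wrapper, the core send/consume flow can be sketched with the amqplib package; the queue name and connection URL below are placeholders:

```javascript
// Sketch of a normal (work) queue with amqplib; requires a running RabbitMQ.
const amqp = require("amqplib");

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  const queue = "tasks";
  await channel.assertQueue(queue, { durable: false });

  // Consumer: RabbitMQ delivers each message to exactly one consumer,
  // like one staff member serving one person in the queue.
  await channel.consume(queue, (msg) => {
    console.log("received:", msg.content.toString());
    channel.ack(msg); // acknowledge so the message is not redelivered
  });

  // Producer: payloads must be Buffers.
  channel.sendToQueue(queue, Buffer.from("hello"));
}

main().catch(console.error);
```

In the Fastify version, the producer and consumer sides would each live in their own plugin, with the channel decorated onto the Fastify instance.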

Is Data Important

Today, many systems generate huge amounts of data, such as system logs, financial transactions, customer profiles, security incidents, and so on. This is encouraged by the advancement of technologies like IoT, mobile devices, and cloud computing. There are also fields that specifically study how to manage and process large amounts of data, like data science and machine learning. A set of data can be processed to produce certain results, like detecting anomalies, predicting the future, or describing the state of a system. To generate such results, the typical phases are collecting data, data preparation, visualization, and data analysis or generating results. In collecting data, we have to take some considerations into account, including the location where the data will be stored, the type of stored data, and the retrieval method, or how other systems can consume the data. When we want to select a location, we should consider whether the storage is available on cloud or on-premises infrastructure,

Manually Select Private Key for Git CLI

We may utilize different keys for different projects or accounts. When we pull data from a Git repository through an SSH connection, by default the Git tool follows the default SSH configuration for selecting the key, which is located in ~/.ssh/id_rsa. We can also set a custom SSH configuration in the ~/.ssh/config file, which the Git tool will follow too, as explained in my other post. For setting the private key locally or per session, there are other options. First, we can utilize an environment variable that the Git tool reads to select the SSH command: GIT_SSH_COMMAND. The usage is as follows. GIT_SSH_COMMAND="ssh -i ~/.ssh/your_id_rsa -F /dev/null" git clone The -F /dev/null parameter is used to ignore any available SSH configuration on the host. This method applies the custom SSH command during the user session, or it can be made permanent by setting it in the host envi
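Besides the environment variable, Git (2.10 and later) can also persist the same command in its own configuration via core.sshCommand; a sketch, with the key path as a placeholder:

```shell
# Per repository: run inside the repo to pin its SSH key
git config core.sshCommand "ssh -i ~/.ssh/your_id_rsa -F /dev/null"

# Or globally, for every repository of the current user
git config --global core.sshCommand "ssh -i ~/.ssh/your_id_rsa -F /dev/null"
```

The per-repository form is handy when one machine hosts projects tied to different accounts, since each clone keeps its own key without touching the environment.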

Managing Password in Unix Using Pass

If you are looking for simple password management in Unix, pass may be the answer. It utilizes GPG to encrypt the stored passwords and stores them as text files in a tree of directories. Each directory can maintain a separate GPG key for encrypting the passwords stored inside it. How easy is it? The following command shows how we can store a password and set AWS/access-key-id as the variable name to access it in the future: pass insert AWS/access-key-id. The previous command will automatically create a directory named AWS inside the ~/.password-store directory, which is the default location of pass storage. It also creates a file named access-key-id.gpg inside the ~/.password-store/AWS directory. To access the value, we can call the following command: pass AWS/access-key-id. There are some steps we need to run to utilize the tool. Install pass using a package manager. Create a GPG key pair. Initialize the pass storage with the spe
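The setup steps listed above can be sketched as follows; the package-manager command is Debian/Ubuntu-specific, the key ID is a placeholder, and gpg key generation prompts interactively:

```shell
# 1. Install pass (use your distro's package manager)
sudo apt install pass

# 2. Create a GPG key pair and note its long key ID
gpg --full-generate-key
gpg --list-secret-keys --keyid-format=long

# 3. Initialize the store with that key ID (placeholder below)
pass init "YOUR_GPG_KEY_ID"

# 4. Store and retrieve a password
pass insert AWS/access-key-id
pass AWS/access-key-id
```

After `pass init`, every entry added under ~/.password-store is encrypted to that key, so backing up the store is just copying a directory of .gpg files.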

The First Time Kenshin Met Hiko

"Tell me your name" "Shinta" "Too soft for a swordsman, as of today you are Kenshin"

Essentials Ansible Modules

Ansible is a reliable configuration-management tool. It ships with a lot of modules, including those provided by the community. Some modules are essential and come in very handy in everyday tasks. Ansible is push-based and works by generating a Python script that will be run on the target server. This means the target server is required to have Python, which is commonly shipped in most Linux distros. package. This module is used to manage packages on the target host. It is like running apt, yum, or aptitude. The following snippet is an example of its usage to install the Nginx package using the package manager.

```yaml
tasks:
  - name: Install Nginx
    package:
      name: nginx
      state: present
      update_cache: True
```

file. It is used to manage files, symlinks, links, or folders on the target host. These are two examples.

```yaml
tasks:
  - name: Create a directory
    file:
      path: "/home/luki/mydir"
      state: directory
      mode: 0750
  - name: C
```

Installing VSCode Server Manually on Ubuntu

I once got stuck updating the VSCode server on my remote server because of an unstable connection between my remote server and the host that serves the updated server source code. The download and update process failed over and over, so I couldn't remotely access my remote files through VSCode. The solution is to download the server source code through a host with a stable connection; in my case, I downloaded it from a cloud VPS server. Then I transferred the downloaded source code as a compressed file to my remote server through SCP. Once the file was on my remote server, I extracted it and aligned the configuration. The more detailed steps are as follows. First, we should get the commit ID of our current VSCode application by clicking on the About option in the Help menu. The commit ID is a hexadecimal number like 92da9481c0904c6adfe372c12da3b7748d74bdcb. Then we can download the compressed server source code as a single file from the host.
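The flow can be sketched roughly as follows. The download URL pattern and the ~/.vscode-server layout are assumptions based on common practice, not confirmed by the post, and the commit ID, host, and user names are placeholders:

```shell
# COMMIT is the commit ID shown in VSCode's Help > About dialog
COMMIT=92da9481c0904c6adfe372c12da3b7748d74bdcb

# On a host with a stable connection, download the Linux server build
curl -L "https://update.code.visualstudio.com/commit:${COMMIT}/server-linux-x64/stable" \
  -o vscode-server.tar.gz

# Transfer it to the remote server over SCP
scp vscode-server.tar.gz user@remote:~/

# On the remote server: extract into the directory the VSCode client expects
mkdir -p ~/.vscode-server/bin/${COMMIT}
tar -xzf ~/vscode-server.tar.gz -C ~/.vscode-server/bin/${COMMIT} --strip-components=1
```

The commit ID in the directory name must match the local VSCode build exactly, otherwise the client will try to download the server again.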

Creating Self-signed and CA Certificate using OpenSSL

A self-signed certificate is very useful when we are in a development or closed environment and require a secure communication channel between nodes in our system, like implementing HTTPS for client-server communication. To make our self-signed certificate recognized by all nodes in the system, we should generate a CA certificate and distribute it to all nodes. This CA certificate is used to verify and determine the issuer of the self-signed certificate. It is like a stamp on a certificate that ensures the certificate is issued by the authority stated in the certificate itself. The OpenSSL CLI tool will be used for this purpose. The following steps can be run to generate valid self-signed and CA certificates. Generate a private CA key. Generate a public CA certificate. Generate a private key for the target server. Generate a CSR for the server. Generate a public server certificate and sign it with the CA certificate. Before we start the certificate generatio
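The five steps above can be sketched with OpenSSL as follows; the subject names, key sizes, and validity periods are placeholder choices, not necessarily the post's:

```shell
# 1. Private CA key
openssl genrsa -out ca.key 2048

# 2. Self-signed public CA certificate
openssl req -x509 -new -key ca.key -sha256 -days 365 \
  -subj "/CN=My Dev CA" -out ca.crt

# 3. Private key for the target server
openssl genrsa -out server.key 2048

# 4. Certificate signing request (CSR) for the server
openssl req -new -key server.key -subj "/CN=localhost" -out server.csr

# 5. Server certificate signed by the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -sha256 -days 365 -out server.crt

# Check that the server certificate chains to the CA
openssl verify -CAfile ca.crt server.crt
```

Distributing ca.crt to every node (and adding it to each trust store) is what makes server.crt verifiable system-wide; the CA's private key, ca.key, should never leave the machine that signs certificates.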