
Advantages of Using Protocol Buffer

A protocol buffer is a language-agnostic mechanism for sharing objects between machines that aims to reduce the payload size. We are already familiar with JSON, which is used by most RESTful APIs to send objects to and receive objects from any kind of client. JSON is convenient and supported by many platforms, so why should we care about the protocol buffer?

Besides optimizing the payload encoding, the protocol buffer, also called protobuf, introduces a schema definition that must be maintained by the machines that encode or decode the delivered objects. The two main processes for delivering the objects are serialization and deserialization. Serialization is the process of transforming an object instance in an application into an optimized binary payload. Deserialization is the process of decoding the binary data back into the desired object.
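
As an illustration, the following is a minimal, hypothetical schema named person.proto that every participating machine would keep a copy of; the message name and fields are only an example, not a prescribed format.

// person.proto - a hypothetical schema kept by both the sender and the receiver
syntax = "proto3";

message Person {
  string name = 1;  // field tag 1
  int32  id   = 2;  // field tag 2
}

Each field is assigned a numeric tag. As discussed below, only these tags, not the field names, travel inside the encoded binary payload.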

Let's take a look at the following table that shows a comparison of XML, JSON, and protobuf.


                   XML      JSON     protobuf
Readability        Normal   High     Low
Strictness         Low      Normal   High
Size Efficiency    Low      Normal   High

JSON is easy for humans to debug and read. However, if our object is mainly intended to be processed by machines, readability is not the focus.

JSON only supports three basic value types: boolean, number, and string. Meanwhile, protobuf defines many more data types, so any platform or programming language can decode the object into the desired type automatically. Besides, the strictness of the data types can be maintained across machines because protobuf requires all machines or applications to maintain the data or message schema with a set of rules and options.
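
For instance, a hypothetical schema can declare specific scalar widths, enumerations, repeated fields, and raw bytes, all of which JSON would have to flatten into plain numbers, strings, and arrays:

// richer typing than JSON's boolean/number/string (hypothetical example)
syntax = "proto3";

message Account {
  enum Status {
    INACTIVE = 0;
    ACTIVE   = 1;
  }
  uint64          id       = 1;  // 64-bit unsigned integer
  bool            verified = 2;
  Status          status   = 3;  // enumeration instead of a free-form string
  repeated string tags     = 4;  // typed list of strings
  bytes           avatar   = 5;  // raw binary data, which JSON cannot carry natively
}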

Unlike XML or JSON, which carry the object schema in the payload explicitly and deliver the object as plain text, protobuf encodes the object into an optimized binary form with only small numeric tags describing the object structure, while the complete object schema is maintained on every machine. This leads to a reduction in payload size.
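
To make the size difference concrete, encoding the hypothetical Person message with name "Alice" and id 42 produces roughly the following bytes (a sketch of the proto3 wire format, not an authoritative dump):

0A 05 41 6C 69 63 65    field tag 1 (name), length-delimited, length 5, "Alice"
10 2A                   field tag 2 (id), varint, value 42

That is 9 bytes in total, while the equivalent JSON text {"name": "Alice", "id": 42} takes 27 bytes because the field names and punctuation travel with every single payload.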

There is also a CLI tool called protoc that can help us perform the encoding and decoding of data. Besides, this tool can also generate, for various programming languages, the classes used to instantiate the objects defined in the protobuf schema.
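
For example, assuming the hypothetical person.proto shown earlier, protoc can generate a Python module:

# generate person_pb2.py from the schema
protoc --python_out=. person.proto

The generated class can then serialize and deserialize the message (the module and class names simply follow from that example schema):

from person_pb2 import Person  # class generated by protoc

# serialization: object instance -> binary payload
person = Person(name="Alice", id=42)
payload = person.SerializeToString()

# deserialization: binary payload -> object instance
decoded = Person()
decoded.ParseFromString(payload)
print(decoded.name, decoded.id)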

Another benefit of protobuf is that it standardizes a mechanism to handle both backward and forward compatibility. Backward compatibility means that a machine can still process an object delivered by another machine running an earlier schema version. Forward compatibility means it can handle an object delivered by a machine running a later schema version. This is achieved inherently through the concepts of default values and reserved fields embedded in protobuf.
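
As a sketch, a newer version of the hypothetical Person schema could drop the id field and add an email field without breaking older peers; the removed tag is reserved so it can never be reused, and a reader that does not find a field in the payload simply sees its default value:

// person.proto, version 2 (a hypothetical evolution of the earlier schema)
syntax = "proto3";

message Person {
  reserved 2;        // the tag of the removed "id" field can no longer be reused
  reserved "id";

  string name  = 1;
  string email = 3;  // new field; older readers skip it, and newer readers
                     // of older payloads see the default empty string
}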

As a reminder, protobuf doesn't handle the communication process itself; that is handled by another framework such as gRPC. However, protobuf can define a service block in the schema to describe how a service will receive the request message and send the response message.
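
A minimal, hypothetical service block could look like the following; gRPC or another RPC framework would then generate the client and server stubs from it:

// greeting.proto - a hypothetical service definition
syntax = "proto3";

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  // one request message in, one response message out
  rpc SayHello (HelloRequest) returns (HelloReply);
}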

