
Is Data Important?

Today, many systems generate huge amounts of data, such as system logs, financial transactions, customer profiles, and security incidents. This growth is driven by the advancement of technologies like IoT, mobile devices, and cloud computing. There are also fields dedicated to managing and processing large amounts of data, such as data science and machine learning.

A set of data can be processed to produce certain results, like detecting anomalies, predicting the future, or describing the state of a system. To generate such a result, the typical phases are data collection, data preparation, visualization, and data analysis or result generation.

In collecting data, we have to consider several things, including where the data will be stored, the type of data stored, and the retrieval method, i.e., how other systems can consume the data. When selecting a location, we should consider whether the storage lives in the cloud or on on-premise infrastructure, whether it will be deployed as a single instance or as a cluster, and whether it uses a document or a relational database. After we decide on the location, we should think about how the data is stored, its form, and its data types. Data pipelines may become a topic in this step to tackle issues in scalability, data source integration, and automation of the collection process.
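
As an illustration, the following is a minimal sketch of a collection step in Python. It reads raw log lines from a hypothetical app.log file and stores them in a local SQLite database; the file name and table schema are assumptions for the example, not part of any particular system.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical example: ingest raw log lines into a local SQLite store.
conn = sqlite3.connect("collected.db")
conn.execute("CREATE TABLE IF NOT EXISTS logs (collected_at TEXT, raw TEXT)")

# "app.log" is an assumed source file for this sketch.
with open("app.log") as source:
    for line in source:
        conn.execute(
            "INSERT INTO logs VALUES (?, ?)",
            (datetime.now(timezone.utc).isoformat(), line.strip()),
        )

conn.commit()
conn.close()
```

In a real pipeline, this step would likely be automated and fed by multiple integrated sources rather than a single local file.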

In data preparation, we may execute several tasks, including tidying up data, removing duplicates, correcting data types, and handling missing values. When we find missing values in a record, we first need to think about their possible cause; then we can choose between imputing appropriate values and dropping the record completely. The appropriate value can be the mean, median, or maximum/minimum value, depending on the case.
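
As a sketch of this step, the snippet below uses pandas on a small made-up set of records with a missing amount value, showing both options: imputing the median or dropping the record.

```python
import pandas as pd

# Made-up records; one "amount" value is missing.
df = pd.DataFrame({
    "customer": ["alice", "bob", "carol", "dave"],
    "amount": [120.0, None, 80.0, 95.0],
})

# Option 1: impute the missing value with a statistic such as the median.
filled = df.fillna({"amount": df["amount"].median()})

# Option 2: drop records that are missing the value entirely.
dropped = df.dropna(subset=["amount"])

print(filled)
print(dropped)
```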

Then, visualization is needed so that the prepared data can be easily understood: representing it in a suitable format helps analysts gain insight and describe the condition of something. Accessibility and readability are among the things to consider when preparing a visualization.
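
As a simple illustration, the snippet below uses matplotlib to chart some made-up daily transaction counts, with a title and labeled axes for readability.

```python
import matplotlib.pyplot as plt

# Made-up daily transaction counts for the example.
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
counts = [120, 98, 143, 110, 167]

fig, ax = plt.subplots()
ax.bar(days, counts)
ax.set_title("Transactions per day")  # a clear title aids readability
ax.set_xlabel("Day")
ax.set_ylabel("Transactions")         # labeled axes make the chart self-explanatory
fig.savefig("transactions.png")       # export so the chart can be shared
```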

The final phase is result generation, which can take various forms depending on the initial intention. We may perform simple analytical procedures or advanced machine learning techniques to generate complex results such as predictions or object clusterings. We may run an A/B test when we want to understand the impact of changes to certain aspects of a system. We may apply a supervised machine learning technique to make predictions based on predefined labels and available features. When we are not sure what information can be retrieved from a data set, an unsupervised machine learning technique may be applied to cluster the data and help us draw conclusions.
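
To illustrate the difference between the two techniques, the sketch below uses scikit-learn on synthetically generated features: a supervised logistic regression trained on predefined labels, and an unsupervised k-means grouping of the same records without labels. The data and model choices are assumptions for the example.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic feature matrix with known labels, standing in for prepared data.
X, y = make_classification(n_samples=300, n_features=4, random_state=42)

# Supervised: learn from predefined labels, then predict on held-out records.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels, just group similar records together.
clusters = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [(clusters == c).sum() for c in (0, 1)])
```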

Based on the phases explained above, there are several roles that focus on specific phases of data processing. A data engineer focuses on creating data pipelines and preparing data so that it can be stored and consumed by any party in the process. A data analyst focuses on data preparation and visualization to describe the retrieved information; an analyst may utilize tools such as Power BI or spreadsheets. When it comes to gaining insight or making predictions, a data scientist steps in; programming skills and knowledge of statistics are necessary in this case. For generating predictions, reasoning, or classification, a machine learning scientist is required.

