This data is correct from the database perspective, but it is inconsistent: the remote server now has two addresses, and it is not clear which one is correct.
This is where data normalization comes in handy. By following a few simple rules, you can split the information into separate tables and thus eliminate the possibility of data corruption or inconsistencies. There are three basic normal forms, each defining rules for structuring the data. Beyond these, a number of higher-degree normal forms have been developed, but in most cases they are of purely academic interest.
I'm going to start with the First Normal Form, which defines two important properties of a table's structure: rows must be unique, and there must be no repeating groups within columns.
The first rule is pretty obvious and means that there must be a way of uniquely identifying each row. The unique key can be either one column or a combination of columns.
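As a quick sketch of the first rule (the table and column names here are hypothetical, not from any particular schema), declaring a unique key lets the database itself reject a duplicate row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical hosts table: hostname acts as the unique key.
conn.execute("CREATE TABLE hosts (hostname TEXT PRIMARY KEY, address TEXT)")
conn.execute("INSERT INTO hosts VALUES ('web01', '10.0.0.5')")

try:
    # A second row with the same key violates row uniqueness.
    conn.execute("INSERT INTO hosts VALUES ('web01', '10.0.0.99')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The key could just as well be a combination of columns, declared as `PRIMARY KEY (col1, col2)`.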
The second rule means that I cannot define multiple columns that carry what is logically the same information. For example, if I wanted to store multiple checks for each server and added each check as an additional column, that would violate the second rule, as in this example:
hostname  address  sensor1  options1  sensor2
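One way to repair such a layout (again, the table and column names are mine for illustration) is to move each check into its own row in a separate table keyed by the host and sensor names, so adding another check never requires adding another column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hosts (
        hostname TEXT PRIMARY KEY,
        address  TEXT
    );
    -- Each check becomes a row, not a column: no repeating groups.
    CREATE TABLE checks (
        hostname TEXT REFERENCES hosts(hostname),
        sensor   TEXT,
        options  TEXT,
        PRIMARY KEY (hostname, sensor)
    );
""")
conn.execute("INSERT INTO hosts VALUES ('web01', '10.0.0.5')")
conn.executemany(
    "INSERT INTO checks VALUES (?, ?, ?)",
    [("web01", "cpu", "interval=60"), ("web01", "disk", "interval=300")],
)
# A third check is just another row; the schema stays unchanged.
for row in conn.execute("SELECT sensor, options FROM checks WHERE hostname = 'web01'"):
    print(row)
```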