Some Discussion of HPM
pp. 211–225
Some Definitions. Normalization is a formal process for deciding which attributes should be grouped together within a table.
In reality, deciding which attributes belong in which tables is not always easy.
Another definition HPM gives for normalization is the process of successively reducing tables with anomalies to produce smaller, well-structured tables. I suspect this definition makes sense to me because I already have a lot of experience normalizing tables.
The definition I am most used to working with is given on page 220 of HPM. In my mind it is the most practical way to think about what normalizing tables means: your tables are normalized if the dependent attributes associated with each primary key in each table do not overlap. Essentially this says that each non-key attribute is determined by exactly one primary key, so each fact is stored in only one place.
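To make the page-220 definition concrete, here is a minimal Python sketch (with hypothetical table and attribute names) that models a schema as a mapping from table name to its primary key and dependent attributes, and reports any dependent attribute that appears in more than one table:

```python
def overlapping_dependents(schema):
    """Return dependent attributes that appear in more than one table.

    schema maps table name -> (primary_key_tuple, set_of_dependent_attributes).
    Under the non-overlap definition, a normalized schema returns an empty set.
    """
    seen, overlap = set(), set()
    for _pk, dependents in schema.values():
        for attr in dependents:
            if attr in seen:
                overlap.add(attr)  # stored under two different keys
            seen.add(attr)
    return overlap

# Hypothetical example: 'city' is a dependent attribute in both tables,
# so this schema is not normalized by the non-overlap test.
schema = {
    "customer": (("customer_id",), {"name", "city"}),
    "warehouse": (("warehouse_id",), {"city", "capacity"}),
}
print(overlapping_dependents(schema))  # -> {'city'}
```

This is only a structural check on attribute placement, not a full functional-dependency analysis, but it captures the spirit of the definition: if the check returns a non-empty set, the same fact could be recorded in two places.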
My experience is that you want to be constantly thinking about which attributes go in which tables while trying to ensure your tables are normalized. The main reasons we want normalized tables are the ones HPM's anomaly-based definition hints at:

- Reduce redundancy, so each fact is stored once.
- Avoid update anomalies, where changing a fact in one row but not its duplicates leaves the data inconsistent.
- Avoid insertion anomalies, where you cannot record a fact without inventing unrelated data.
- Avoid deletion anomalies, where deleting one row unintentionally destroys the last copy of an unrelated fact.
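To show the anomaly risk concretely, here is a small Python sketch (entirely hypothetical data) of an update anomaly in a denormalized table, where a customer's city is repeated on every order row:

```python
# Hypothetical denormalized rows: each order repeats the customer's city.
orders = [
    {"order_id": 1, "customer": "Acme", "city": "Boston"},
    {"order_id": 2, "customer": "Acme", "city": "Boston"},
]

# Update anomaly: changing the city on only one row leaves the data
# inconsistent -- the same customer now appears to have two cities.
orders[0]["city"] = "Chicago"
cities = {row["city"] for row in orders if row["customer"] == "Acme"}
print(cities)  # -> {'Boston', 'Chicago'}: inconsistent

# In a normalized design the city is stored once, in a customer table,
# so a single update cannot produce this inconsistency.
customers = {"Acme": {"city": "Boston"}}
customers["Acme"]["city"] = "Chicago"
print(customers["Acme"]["city"])  # -> Chicago
```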
HPM takes a generally theoretical approach to normalization, and that approach has its validity. My experience, though, is that you are better off understanding what normalization means and what its implications are, then working intelligently to create normalized tables, rather than relying on the less insightful algorithmic approach developed in HPM. I will survey HPM's approach so you can see how this can all be done entirely systematically. But as often happens with such systems, they produce the desired effect without really improving your insight into the overall design. I much prefer developing my normalized tables from insight rather than from abstract systematic procedures.
Steps in Normalization. This next process essentially guarantees that the tables you construct are normalized.
HPM really only covers first through third normal form. These are the forms most commonly implemented in practice.
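As a sketch of what reaching third normal form looks like, here is a hypothetical Python example. The starting table has a transitive dependency (emp_id determines dept_id, and dept_id determines dept_name), so 3NF splits the department information into its own table:

```python
# Hypothetical single-table design with a transitive dependency:
# emp_id -> dept_id and dept_id -> dept_name, so dept_name depends on
# the key only transitively and does not belong in the employee table.
unnormalized = [
    {"emp_id": 1, "name": "Ann", "dept_id": 10, "dept_name": "Sales"},
    {"emp_id": 2, "name": "Bob", "dept_id": 10, "dept_name": "Sales"},
    {"emp_id": 3, "name": "Cy",  "dept_id": 20, "dept_name": "IT"},
]

# 3NF decomposition: the employee table keeps dept_id as a foreign key,
# and the department table owns dept_name, stored once per department.
employees = [
    {"emp_id": r["emp_id"], "name": r["name"], "dept_id": r["dept_id"]}
    for r in unnormalized
]
departments = {r["dept_id"]: r["dept_name"] for r in unnormalized}

print(departments)  # -> {10: 'Sales', 20: 'IT'}
```

Note how the department name, which was repeated on every Sales row in the original table, now appears exactly once, which is the non-overlap property from the page-220 definition.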