DM Stat-1 Articles
A Comparison of Two Popular Machine Learning Methods: Common Pitfalls
Bruce Ratner, Ph.D.

Machine learning, a computer-based approach to problem solving, has recently been the subject of comparative studies because it offers a potentially viable alternative to traditional statistical methodology. The purpose of this article is not to debate the comparative merits of either type of model; suffice it to say that much about machine learning has value. Despite their expertise with quantitative methods, analysts, whether statisticians or computer scientists, have not been especially mindful of how they compare methods, and thus have neglected the essential trinity of contingencies:

• proper implementation of the method,
• the method's explicit measure of performance, and
• the data.

However, when introducing a new approach such as machine learning, analysts should be cautioned that without strict adherence to proper comparison techniques, their findings will be flawed.
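The trinity above can be made concrete with a small sketch. The two "models" below are hypothetical stand-ins (a training-mean predictor and a least-squares line), not actual genetic programming or neural network implementations; the point is the comparison protocol itself: both methods are fit on the identical data split and scored with the identical, explicit performance measure.

```python
import random

def mean_squared_error(y_true, y_pred):
    # the single, explicit performance measure shared by both methods
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def fit_model_a(xs, ys):
    # stand-in method A: always predicts the training mean
    mean_y = sum(ys) / len(ys)
    return lambda x: mean_y

def fit_model_b(xs, ys):
    # stand-in method B: simple least-squares line y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return lambda x: a * x + b

def compare(xs, ys, seed=0):
    # one shared train/test split: both methods see exactly the same data
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    cut = int(0.7 * len(idx))
    train, test = idx[:cut], idx[cut:]
    xtr, ytr = [xs[i] for i in train], [ys[i] for i in train]
    xte, yte = [xs[i] for i in test], [ys[i] for i in test]
    scores = {}
    for name, fit in (("model_a", fit_model_a), ("model_b", fit_model_b)):
        model = fit(xtr, ytr)
        preds = [model(x) for x in xte]
        scores[name] = mean_squared_error(yte, preds)  # same measure for both
    return scores

# noiseless linear data, used only to exercise the protocol
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
print(compare(xs, ys))
```

Any difference in the reported scores is then attributable to the methods themselves, not to differences in data, tuning care, or scoring, which is precisely what the trinity of contingencies demands of a fair comparison.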

In providing a technical review of the two most popular machine learning methods, genetic programming and neural networks, I apply the "holy" trinity of contingencies, demonstrating for analysts how to conduct their own balanced evaluations of these methods.

I offer a few definitions of machine learning, then present a motivational theme for machine learning, and finally provide a technical review of genetic programming and neural networks.

For more information about this article, call Bruce Ratner at 516.791.3544,
1 800 DM STAT-1, or e-mail at