The development of improved segmentation algorithms for more consistently accurate detection of retinal boundaries is a potentially useful solution to the limitations of existing optical coherence tomography (OCT) software. We modeled artifacts related to operator errors that may occur during OCT imaging and evaluated their influence on the results of a novel segmentation algorithm. These artifacts included defocusing, depolarization, decentration, and a combination of defocusing and depolarization. Mean relative reflectance and average thickness of the automatically extracted intraretinal layers were then measured. Our results show that combined defocusing and depolarization errors had the greatest effect on all measurements and on segmentation accuracy. The depolarization artifact produced a marked decrease in mean relative reflectance and average thickness in all intraretinal layers, whereas defocusing produced a smaller decrease. Decentration produced a marked but not statistically significant change in average thickness. Our study demonstrates that care must be taken to acquire good-quality images when measurements of intraretinal layers using the novel algorithm are planned in future studies. Awareness of these pitfalls and their possible solutions is crucial for obtaining a better quantitative analysis of clinically relevant features of retinal pathology.
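For illustration, the two reported metrics could be computed per segmented layer roughly as sketched below. This is a minimal sketch under stated assumptions, not the algorithm used in the study: the B-scan array layout (depth rows by A-scan columns), thickness in pixel units, and normalization of reflectance to a saturation value are all assumptions introduced here.

```python
import numpy as np

def layer_metrics(bscan, top, bottom, saturation=255.0):
    """Average thickness and mean relative reflectance of one layer.

    bscan:       2D array (depth x width) of OCT intensities
    top, bottom: 1D integer arrays giving the layer's upper and lower
                 boundary row index for each A-scan (column)
    saturation:  intensity used to normalize reflectance (assumption)
    """
    # Thickness per A-scan, in pixels (axial scaling not applied here)
    thickness = bottom - top
    # Mean intensity inside the layer for each A-scan, normalized
    refl = [bscan[top[x]:bottom[x], x].mean() / saturation
            for x in range(bscan.shape[1])]
    return float(thickness.mean()), float(np.mean(refl))
```

A drop in either returned value across repeated acquisitions of the same eye would flag the kind of artifact-induced bias described above.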