
dc.contributor.author	Kubat, Jamie
dc.description.abstract	Recently, multiple comparison procedures have been criticized for their lack of power in datasets with a large number of treatments. Many family-wise error corrections are far too restrictive when a large number of comparisons is made. At the other extreme, a test like the least significant difference does not control the family-wise error rate and is therefore not restrictive enough to separate true differences from false positives. A solution lies in multiple testing. The false discovery rate (FDR) method uses a simple algorithm and can be applied to datasets with many treatments. The current research compares the FDR method to Dunnett's test using agronomic data from a study with 196 varieties of dry beans. Simulated data are used to assess the type I error and power of the tests. In general, the FDR method provides higher power than Dunnett's test while maintaining control of the type I error rate.	en_US
dc.publisher	North Dakota State University	en_US
dc.rights	NDSU Policy 190.6.2
dc.title	Comparing Dunnett's Test with the False Discovery Rate Method: A Simulation Study	en_US
dc.type	Thesis	en_US
dc.date.accessioned	2017-12-12T20:11:15Z
dc.date.available	2017-12-12T20:11:15Z
dc.date.issued	2013
dc.identifier.uri	https://hdl.handle.net/10365/27025
dc.rights.uri	https://www.ndsu.edu/fileadmin/policy/190.pdf
ndsu.degree	Master of Science (MS)	en_US
ndsu.college	Science and Mathematics	en_US
ndsu.department	Statistics	en_US
ndsu.program	Statistics	en_US
ndsu.advisor	Doetkott, Curt
ndsu.advisor	Magel, Rhonda
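
The "simple algorithm" the abstract attributes to the FDR method is presumably the Benjamini-Hochberg step-up procedure. The thesis itself is not reproduced in this record, so the following is only a minimal sketch in Python/NumPy; the function name, the 0.05 level, and the placeholder p-values are illustrative assumptions, not the author's code or data.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (illustrative sketch).

    Returns a boolean array marking which hypotheses are rejected
    while controlling the false discovery rate at level alpha.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # rank p-values, smallest first
    thresholds = alpha * np.arange(1, m + 1) / m   # BH critical values i/m * alpha
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest rank meeting the criterion
        reject[order[:k + 1]] = True               # reject all hypotheses up to that rank
    return reject

# Hypothetical usage: one p-value per variety-vs-control comparison
rng = np.random.default_rng(0)
p_vals = rng.uniform(size=195)                     # placeholder p-values, not thesis data
print(benjamini_hochberg(p_vals).sum(), "rejections at FDR level 0.05")
```

Unlike a family-wise correction, the procedure adapts its rejection threshold to the observed p-values, which is why it retains power when the number of comparisons is large.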

