Bug #15552
Updated by Eregon (Benoit Daloze) about 5 years ago
Specifically, the generated image is incorrect. I modified the footer of the benchmark like this to write the image to a file:

```ruby
alias printf_orig printf
def printf(*args)
  $fp.printf(*args)
end

File.open("ao.ppm", "w") do |fp|
  $fp = fp
  printf("P6\n")
  printf("%d %d\n", IMAGE_WIDTH, IMAGE_HEIGHT)
  printf("255\n")
  Scene.new.render(IMAGE_WIDTH, IMAGE_HEIGHT, NSUBSAMPLES)
end

undef printf
alias printf printf_orig
```

Here is the expected image:

![ao_ref](https://user-images.githubusercontent.com/168854/51442303-dd6f8680-1cdb-11e9-89ac-e88773a384c8.png)

This is the image I get with MRI 2.6.0:

![ao_mri_2_6_0](https://user-images.githubusercontent.com/168854/51442292-c7fa5c80-1cdb-11e9-9146-2ec3f447d479.png)

Interestingly, TruffleRuby 1.0.0-rc11 renders an image closer to the expected one:

![ao_tr_rc11](https://user-images.githubusercontent.com/168854/51442298-d5174b80-1cdb-11e9-97f6-0b1fc59fcddf.png)

I suspect this might be both an interpreter bug and possibly also a bug in the benchmark source. I think every benchmark should include some validation of its result; otherwise it is prone to measuring something unexpected, as happened here.
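The validation suggested above could look something like the following sketch (the helper name and the idea of comparing against a reference digest are my own illustration, not part of the benchmark): after rendering, hash the output file and compare it against a known-good digest, so an incorrect render fails loudly instead of silently skewing the measurement.

```ruby
require "digest"

# Hypothetical validation helper: compares a SHA-256 digest of the
# benchmark's output file against a precomputed reference digest.
# Raises if they differ, so a wrong result cannot go unnoticed.
def validate_output(path, expected_sha256)
  actual = Digest::SHA256.file(path).hexdigest
  unless actual == expected_sha256
    raise "benchmark output mismatch: expected #{expected_sha256}, got #{actual}"
  end
  true
end
```

This only works for fully deterministic benchmarks; for output with floating-point variation across platforms, a tolerance-based pixel comparison would be needed instead of an exact hash.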