Core patches need to be tested!

It has come to the attention of us SimpleTest module developers that the tests are breaking at a rapid rate. Most of the tests that currently have issues passed when they were originally committed to core. The list of passing tests was used during the cleanup stages, but it hasn't been changed since SimpleTest was added to core and stands as a record.

I previously posted "Core patches need to update tests as well" in preparation for SimpleTest becoming part of core. Sadly, it doesn't appear to have been heeded.

After a long process of trial and error rolling back commits, webchick found the patch that broke the translation test, and possibly others. This would have saved everyone involved a lot of time had those who reviewed and wrote the patch simply run the tests against it, or better yet, had automated testing been finished and implemented so that it would happen automatically.

Regardless, it is rather annoying to spend several weeks cleaning up the tests only to see them broken by untested commits. Before marking a patch "ready to be committed", ensure that someone has run the test suite against it. If that hasn't been done, I would hope the patch isn't committed.


I think this one might be the exception that proves the rule. The original patch to move admin/content/types to admin/build/types went in on April 2nd, and translation.test was reviewed on April 10th. Core was actually broken on April 10th when the test was reviewed, so the test was passing against a broken core, then started failing when core was fixed.

What should have happened in this case was for the April 2nd patch to be accompanied by an updated test in the simpletest project, since this was still 10 days before the tests were committed to core.

None of this excuses the fact that no one spotted the tests breaking until Jimmy went through everything. But in this case both core and the test were wrong, and only one got fixed; it wasn't just reckless breaking of the test itself. I agree that does seem to be happening elsewhere.

A big issue at the moment is that running all tests gives inconsistent results; there's a postponed issue for that, since chx reckons it's a symptom of static variables in core. I guess this is also a blocker to having the full test suite run automatically and report back, after which there'd be no escape from breaking tests!

Just because running all the tests gives inconsistent results doesn't mean each test can't be run individually. Especially now that there is a script to run the tests from the command line, you can write a very simple wrapper to run them one at a time.
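The one-at-a-time approach can be sketched as a small wrapper script. Everything here is illustrative: `run_one` is a stub standing in for whatever the real command-line runner invocation is on your checkout (something along the lines of `php scripts/run-tests.sh` plus a class name, but check the script's own help output), and the test class names are placeholders, not a real list.

```shell
#!/bin/sh
# Sketch: run SimpleTest classes one at a time instead of "run all tests",
# so a single flaky or broken test can't mask the results of the others.

# Stub runner so this sketch runs anywhere; replace the echo with the
# actual command-line test script invocation for your Drupal checkout.
run_one() {
  echo "would run: $1"
}

# Hypothetical class list; in practice, at least the tests for each
# module the patch touches.
for class in NodeTestCase CommentTestCase TranslationTestCase; do
  run_one "$class"
done
```

Running each class in its own invocation also sidesteps the static-variable pollution suspected of making "run all tests" inconsistent, since every class starts from a fresh process.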

If nothing else, at least run the tests for the module you are changing.

That's true. But I didn't know this when I spent a few hours trying to work out why one particular patch was playing havoc with 'run all tests'. Nor will the vast majority of people who are likely to hit that button when testing any patch that touches more than one module (or, for that matter, a module as central to Drupal as node.module). In my case this means I'm testing stuff more now (and trying to help out with broken tests where I can), but webchick's right that unless we can get everything running smoothly again fast, people will be put off (and submit more test-breaking patches because they haven't bothered to check).