Yes! I don't think we should integrate this script into the CI build, because I doubt we can make the adapters pass for all versions. I made it to get an overview and maybe to investigate some of the more recent versions. For example, our adapters fail for Jasmine 2.3.0, which is quite strange.
Also, not all of them are compatible with our adapters; some will fail because they lack the functionality needed to cover the test fixtures, for example:
- I have seen a version of QUnit fail because it didn't have the `skip` method
- other QUnit versions fail because they cannot nest suites
For example, for QUnit only the versions from 1.20.0 onwards will pass, because nested modules were implemented in that version; from the QUnit history file:
> Core: Implement Nested modules
Should we add some extra fixtures? Like a fixture without nested modules, and then take care of which version runs against which test fixture? Or run this script against the simplest fixture possible (one passing test, one failing test, and a suite, something like that)?
I'm still interested in running this all the time. How about a "whitelist" (or "blacklist"?) of versions that we know to fail? We'd assume that newer releases will pass, so if newer releases regress, we'd notice the next time we run this test.
If it's too slow to run for every build, we could at least run it locally every now and then and know that the output is reliable (already ignoring known issues).
Let's start simple, e.g. "blacklist" all QUnit versions below 1.20.0, since we know those don't work as expected. We could break that down when we have a need for it, e.g. someone reports an issue with an older QUnit version with js-reporters.
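To make the idea concrete, here is a minimal sketch of such a "blacklist" check. The helper names (`compareVersions`, `isKnownFailing`) are illustrative, not the actual js-reporters API; the only facts taken from this thread are that QUnit below 1.20.0 and Jasmine 2.3.0 are known to fail:

```javascript
// Compare two dotted version strings numerically: -1, 0, or 1.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff < 0 ? -1 : 1;
  }
  return 0;
}

// Known-failing versions whose failures we accept and ignore.
const blacklist = {
  // QUnit below 1.20.0 lacks nested modules, so those versions
  // are expected to fail against the full fixture.
  qunit: (version) => compareVersions(version, '1.20.0') < 0,
  // Individual versions known to fail for other reasons.
  jasmine: (version) => version === '2.3.0'
};

function isKnownFailing(framework, version) {
  const check = blacklist[framework];
  return check ? check(version) : false;
}
```

With this, the script would only report failures for versions where `isKnownFailing` returns `false`, so a regression in a newer release still shows up.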
How bad is it, though? It's okay for a CI build to take a few minutes. We can update .travis.yml to run this separate script, so that we don't have to make it part of `npm test`.
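A sketch of what that could look like in `.travis.yml`, assuming the compatibility script is exposed as an npm script (the `test-versions` name is made up for illustration):

```yaml
# Hypothetical sketch: run the version-compatibility script as a
# separate build step, outside "npm test".
language: node_js
node_js:
  - "stable"
script:
  - npm test
  - npm run test-versions   # illustrative script name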
And should we write this "blacklist" down, so that when a new version appears we know which versions were working and which weren't?
I'd list known failing versions (where we want to accept/ignore those failures). And assume new versions are going to work fine. If new versions fail, we want to know about that.
It takes a good few minutes (13 mins), but it's worth it to ensure proper development 😃
I integrated the script into 3 tests that check that the set of failing (not working) versions stays constant.
Also, I don't know why the build is failing on Node stable. Any idea?
Some versions have caught my attention:
- Jasmine 2.3.0
- Mocha 2.1.0, 2.5.0
I investigated Jasmine 2.3.0: the adapter works, but Mocha receives a SIGINT and aborts the runner. I've done some debugging, but I still haven't found the source.