Can you run two test cases simultaneously in a Test Suite in Microsoft Test Manager 2010?
Yes, you can, with a modified TestSettings file: http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx

Categories : Testing

How do I dynamically create a Test Suite in JUnit 4 when the test files do not extend TestCase?
Perhaps have a look at @Andrejs' answer over here, where he shows how to add JUnit 4 test cases to a test suite dynamically:

    @RunWith(AllTests.class)
    public class SomeTests {
        public static TestSuite suite() {
            TestSuite suite = new TestSuite();
            suite.addTest(new JUnit4TestAdapter(Test1.class));
            suite.addTest(new JUnit4TestAdapter(Test2.class));
            return suite;
        }
    }

Categories : Java

create test suite with test cases independent from each other?
You should declare your WebDriver static, then use @BeforeClass to instantiate it and log on to your app. Do this in each test class, independently of the others, so each test case has the logon as a given; every test method (@Test) can then assume that you're logged on.

This probably won't really give you the separation you want, though. If the server refuses to log you on because someone has changed your password, then all of your tests are going to fail. And that isn't your biggest problem either: your biggest problem is that you're pouring concrete all over your UI, metaphorically at least. Your developers may choose, at some point, to change the submit buttons to anchors, to skin each one using jquery-ui's button, and then use JavaScript to invoke the form submission...
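For illustration only, here is a minimal JUnit 4 / Selenium sketch of that setup; the URLs and element locators are hypothetical placeholders for your application's login page:

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class CreateTicketTest {

        // one static driver per test class, shared by every @Test method
        private static WebDriver driver;

        @BeforeClass
        public static void logOnOnce() {
            driver = new FirefoxDriver();
            driver.get("http://example.com/login");                  // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();       // hypothetical locators
        }

        @AfterClass
        public static void quitDriver() {
            driver.quit();
        }

        @Test
        public void canOpenNewTicketForm() {
            // every test starts from a logged-on session
            driver.get("http://example.com/tickets/new");             // hypothetical URL
            // assertions against the page would go here
        }
    }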

Categories : Misc

running a test suite and test case using groovy?
You haven't written the code that actually runs the test case. Try a script like this:

    //get the test case from another project (or from the same one)
    project = testRunner.getTestCase().getTestSuite().getProject().getWorkspace().getProjectByName(project_name)
    testSuite = project.getTestSuiteByName(suite_name);
    testCase = testSuite.getTestCaseByName(testcase_name);

    //set properties if you need to
    testRunner.testCase.setPropertyValue(property_name, property_value);
    testRunner.testCase.setPropertyValue(another_property_name, another_property_value);

    // run the test case
    runner = testCase.run(new com.eviware.soapui.support.types.StringToObjectMap(), false);

There is also a way to schedule SoapUI tests with crontab: running functional tests from the command line is straightforward using the test runner included in SoapUI...

Categories : Groovy

Test for click event using Jasmine test suite
What I generally tend to do is create a stub and assign the event to the stub, then trigger the click event and check if it was called:

    describe('view interactions', function () {
        beforeEach(function () {
            this.clickEventStub = sinon.stub(this, 'clickEvent');
        });

        afterEach(function () {
            this.clickEvent.restore();
        });

        describe('when item is clicked', function () {
            it('event is fired', function () {
                this.elem.trigger('click');
                expect(this.clickEventStub).toHaveBeenCalled();
            });
        });
    });

Categories : Javascript

Gradle test parameter test suite
The Gradle documentation lists some ways to run a specific test using a system property: http://www.gradle.org/docs/current/userguide/userguide_single.html#sec:java_test . To run multiple correlated tests, you can try test groups (both TestNG and Gradle support them): http://testng.org/doc/documentation-main.html#test-groups . If you insist on using your custom closure, you can always use a project property. In build.gradle:

    test {
        useTestNG() {
            suites 'src/test/resources/testng-' + project.ext.input_parameter_as_string + '-Test.xml'
            useDefaultListeners = true
        }
    }

and on the command line:

    gradle test -Pinput_parameter_as_string=testFoobar

Categories : Gradle

perl test suite for API
I made it clear in my documentation that my module was useless without an API key, and used the SKIP: {} construct of Test::More to skip all the tests if the key was not present. You can choose to BAIL_OUT instead of skip. Just ensure that your docs explain how to communicate the API key to the module.

Categories : Perl

How to run python test suite?
Cassandra's tests are in the test directory. If you run ant -p you'll see there are different options for running the tests:

    long-test              Execute functional tests
    pbs-test               Tests PBS predictor
    test                   Execute unit tests
    test-clientutil-jar    Test clientutil jar
    test-compression       Execute unit tests with sstable compression enabled

To run any of these you just need to do ant <taskname>.

Categories : Python

Designing a CRUD test suite
We have the same issue. We've taken two paths. In one style of test, we use the setup and teardown as you suggest to create the data (users, tickets, whatever) that the test needs. In the other style, we use pre-existing test data in the database. So, for example, if the test is AdminShouldBeAbleToCreateUser, we don't do either of those, because that's the test itself. But if the test is ExistingUserShouldBeAbleToCreateTicket, we use a pre-defined user in the test data, and if the test is UserShouldBeAbleToDeleteOwnTicket, we use a pre-defined user and create the ticket in the setup.
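As an illustration of the last case, here is a hedged JUnit sketch; User, Ticket, TicketService and TestData are hypothetical stand-ins for whatever your application and test fixtures actually provide:

    import static org.junit.Assert.assertFalse;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class UserShouldBeAbleToDeleteOwnTicketTest {

        private User existingUser;                        // hypothetical domain types
        private Ticket ticket;
        private final TicketService tickets = new TicketService();

        @Before
        public void setUp() {
            // the user comes from pre-existing, seeded test data...
            existingUser = TestData.predefinedUser("alice");
            // ...while the ticket this test needs is created fresh in setup
            ticket = tickets.create(existingUser, "broken printer");
        }

        @Test
        public void ownerCanDeleteOwnTicket() {
            tickets.delete(existingUser, ticket.getId());
            assertFalse(tickets.exists(ticket.getId()));
        }

        @After
        public void tearDown() {
            // clean up only what this test created; the predefined user stays put
            tickets.deleteIfExists(ticket.getId());
        }
    }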

Categories : Testing

Boost test - every suite in other file
I have a similar setup to what you want (see this Q&A). If you want a CMake solution, look there. Otherwise, simply split your test cases over several files and compile and link each of them separately with the options

    -DBOOST_TEST_MAIN -DBOOST_TEST_DYN_LINK

Note: it's generally preferred to pass these macros as compiler/linker options rather than defining them inside your source files. With several test sources and a CMake build, you can then call ctest to run all test executables. If you want a single test executable, then compile each of the tests separately and link them together into one executable. You can then run this executable and it will run all the tests. Note, however, that it is a lot more difficult to run only a selection of your tests this way.

Categories : C++

Jenkins/Maven job - running the test suite twice?
It looks like the tests are being run at an earlier stage, and we have the chance to stop them from running again by configuring the skip flag for the integration-test execution:

    <executions>
        <execution>
            <phase>integration-test</phase>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <skip>false</skip>
            </configuration>
        </execution>
    </executions>

Categories : Maven

WebGL conformance test suite different for different browsers
Because the WebGL conformance test suite got updated, and you had the luck to land in the middle of it. :D Did you try to win the lottery recently? Answer: rerun those tests. PS: the test suite is versioned; Chrome was (most probably) running 1.0.1 while IE was on 1.0.3...

Categories : Internet Explorer

Junit4: how to create test suite dynamically
The cpsuite project uses a custom annotation to determine the tests to run at runtime based on patterns. This isn't what you want, but it is a good start. You can take the source code for it and replace the "find tests" part with reading from your property file. Note that the annotation/code is different for different versions of JUnit 4.X. If you are still on JUnit 4.4 or below, you'll need the older one.
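For reference, a minimal sketch of how cpsuite is typically wired up, assuming the org.junit.extensions.cpsuite classes are on the classpath; the filter patterns are the part you would replace with logic that reads your property file:

    import org.junit.extensions.cpsuite.ClasspathSuite;
    import org.junit.extensions.cpsuite.ClasspathSuite.ClassnameFilters;
    import org.junit.runner.RunWith;

    // Runs every test class on the classpath whose name matches the include
    // patterns (and not the "!"-prefixed exclude pattern).
    @RunWith(ClasspathSuite.class)
    @ClassnameFilters({ ".*IntegrationTest", "!.*SlowTest" })
    public class DynamicSuite {
        // intentionally empty: the runner discovers the tests at runtime
    }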

Categories : Java

Dynamic JSON Checking for API Test Suite
After a bit of research and trial and error, this is what I came up with. It probably isn't perfect, but for lack of any other suggestions it is what I am going with:

    checkJSON: (response, expected) ->
      if /{{2}(STRING|NOT_NULL|ARRAY|OBJECT|NUMBER|IGNORE)}{2}/.test(expected)
        switch expected
          when '{{STRING}}'
            console.log 'testing string!'
            unless typeof(response) is 'string'
              console.log 'TYPE OF WAS SUPPOSED TO BE STRING: ' + typeof(response)
              @failure = true
            return
          when '{{NOT_NULL}}'
            console.log 'testing not null!'
            unless response
              console.log 'TYPE OF WAS SUPPOSED TO BE NOT_NULL: ' + typeof(response)

Categories : Json

Python - organising code and test suite
This depends on which test runner you want to use.

pytest: I recently learned to like pytest. Its documentation has a section about how to organize the code. If you cannot import your main module in the test code, you can use the tricks below.

unittest: When I use unittest I do it like this, with import main and this layout:

    Problem/
        App/
            main.py
        Tests/
            test_main.py

test_main.py:

    import sys
    import os
    import unittest
    sys.path.append(os.path.join(os.path.dirname(__file__), 'App'))
    import main
    # do the tests
    if __name__ == '__main__':
        unittest.main()

or with import App.main and this layout:

    Problem/
        App/
            __init__.py
            main.py
        Tests/
            test.py
            test_main.py

test.py:

    import sys
    import os
    import unittest
    sys.path.append(os.path.dirname(__file__))

test_main.py:

    from test import

Categories : Python

contents not being added to the Fitnesse test/suite
After a lot of effort I found that the documentation for REST on fitnesse.org is wrong: the parameter in the request is not 'content' but 'pageContent'...

Categories : Rest

Error linking test-suite to library
Cabal is conflicted over two locations for the Lexical.Token module. Confusingly, they're both the same file. It's resolving mylib-0.0.0.1 from your build-depends to the locally installed and registered version of mylib, and it's also resolving Lexical.Token in the source as an other-modules entry, i.e. something exposed through your test suite. Fix it by removing Lexical.Token from other-modules, I imagine. Your test suite should not share code with your tested code; instead, import all the modules as if your tested code were an external library.

Categories : Haskell

Having trouble executing an HP ALM test suite remotely
I figured out what the issue was. It would be nice if there were more documentation available on this library. You need to set a few TSScheduler properties before the execution can commence. So this:

    TSScheduler scheduler = testset.StartExecution("<RemoteServerName>"); // contains the server name unless running locally
    // scheduler.RunAllLocally = true;                                    // included when run locally
    scheduler.Run();

became this:

    TSScheduler scheduler = testset.StartExecution("");
    scheduler.TdHostName = "<test_runner_name>";
    scheduler.LogEnabled = true;
    scheduler.Run(testset.ID);

It would also be a good idea to confirm that you have all of the necessary ALM add-ins installed on your machine, specifically the "HP Quality Center Connectivity" and "HP Quality Center ..." add-ins.

Categories : C#

test suite inside spring context
I have tried your code; the test suite runs with the Spring context loaded. Can you explain in more detail what the problem is? Here is the code:

    @RunWith(Suite.class)
    @SuiteClasses({ Test1.class, Test2.class })
    public class SuiteTest {
    }

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "classpath:context.xml" })
    @Transactional
    public class Test1 {}

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "classpath:context.xml" })
    @Transactional
    public class Test2 {}

If you want the Suite class to have its own application context, try this:

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "classpath:context.xml" })
    @Transactional
    public class SuiteTest {
        @Test
        public void run() {
            JUnitCore.runClasses(Test1.class, Test2.class);
        }
    }

Categories : Spring

java null pointer exception in test suite
WebDriver driver; is never initialized, so what do you expect? The exception is absolutely correct. You can avoid it by instantiating a concrete driver before using it, for example:

    driver = new FirefoxDriver();   // or any other concrete WebDriver implementation
    driver.findElement(By.id("tabs"));

Categories : Java

How to use Angular Scenario Runner to run a subset of the test suite?
Yes, you can run a subset of tests. Use iit instead of it on the tests you would like to run and the others will be skipped. Example:

    describe('TestCtrl', function() {
        var $scope;

        iit('should have scope', function() {
            expect($scope).toBeDefined();
        });

        it('should have scope', function() {
            expect($scope).toBeDefined();
        });

        it('should have scope', function() {
            expect($scope).toBeDefined();
        });
    });

This will cause only the first test to be run and the others to be skipped. As soon as Jasmine detects a test declared with iit, it skips all the tests declared with it. This is very handy when you need to run only the one or two tests you are working on out of a whole suite. The same trick works for suites when you replace describe with ddescribe.

Categories : Angularjs

Where can I find an authoritative test suite for the HTML5 standard?
The official web platform test suite is on GitHub. Be warned that it's not yet finished; one of the requirements for HTML5 moving from Proposed Recommendation to Recommendation is to complete the test suite so that at least two interoperable implementations can be identified. Details of how to submit new tests are available on the wiki.

Categories : HTML

Fatal Error when running a JUnit test suite
This looks like an error from your SAX parser (or whichever XML parser you're using). To locate the error, try adding an ErrorHandler to the XML handling. For instance, for a DocumentBuilder you can call setErrorHandler():

    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = dbf.newDocumentBuilder();
    builder.setErrorHandler(new ErrorHandler() {
        @Override
        public void warning(SAXParseException exception) throws SAXException {
            System.err.println("warning: caught exception");
            exception.printStackTrace(System.err);
        }

        @Override
        public void fatalError(SAXParseException exception) throws SAXException {
            System.err.println("fatalError: caught exception");
            exception.printStackTrace(System.err);
        }

        @Override
        public void error(SAXParseException exception) throws SAXException {
            System.err.println("error: caught exception");
            exception.printStackTrace(System.err);
        }
    });

Categories : Java

Running a Test Suite of the same class but with different initial conditions
What about an alternative solution?

1. Use the template method pattern to extract an abstract test class, and make the initial-condition preparation an abstract method.
2. Have each test case extend the template and override the implementation of the initial-condition preparation.
3. Group them all in a test suite.
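A minimal sketch of that structure, assuming JUnit 4; Account is a hypothetical class under test, and in real code each public class goes in its own file:

    import static org.junit.Assert.assertTrue;

    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;
    import org.junit.runners.Suite.SuiteClasses;

    // Step 1: the common test logic lives here, the initial conditions do not.
    public abstract class AbstractAccountTest {

        protected Account account;                   // hypothetical class under test

        // step 2: each concrete test class prepares its own initial state
        protected abstract Account prepareInitialConditions();

        @Before
        public void setUp() {
            account = prepareInitialConditions();
        }

        @Test
        public void depositIncreasesBalance() {
            long before = account.balance();
            account.deposit(10);
            assertTrue(account.balance() > before);
        }
    }

    // One concrete subclass per initial condition:
    public class EmptyAccountTest extends AbstractAccountTest {
        @Override
        protected Account prepareInitialConditions() {
            return new Account(0);
        }
    }

    // step 3: group the variants in a suite
    @RunWith(Suite.class)
    @SuiteClasses({ EmptyAccountTest.class, /* RichAccountTest.class, ... */ })
    public class AccountInitialConditionsSuite {
    }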

Categories : Java

How will this asynchronous test suite built in JavaScript terminate?
[…] resume in turn calls runTest() with a setTimeout(). Yes, but then runTest() doesn't do anything (since queue.length will be 0, which is falsy): no new test is run, no further calls to resume(), nothing.

Categories : Javascript

Why do multiple chromedriver instances keep running after completion of a test suite
It's not just the case with the Chrome driver; it is the same with the Firefox driver. If you forgot to call server.stop(), or your program was interrupted in between, you can use this snippet to pick the port when starting your server next time:

    import org.browsermob.proxy.ProxyServer;

    String sePortNumber = System.getProperty("WEBDRIVER_SERVER_PORT_NUMBER");
    if (sePortNumber == null) {
        sePortNumber = "0";
    }
    ProxyServer server = new ProxyServer(Integer.parseInt(sePortNumber));
    server.start();

You will get a random unused port every time.

Categories : Java

What is the universally accepted way to provide inputs to JUnit Test Suite
One of the core concepts of test-driven development is that you run all test cases, all of the time, in an automated way. Having a user use Excel to choose test cases and enter data breaks this model. You could read from a file to drive your test cases (see the sketch below), but perhaps your tests should be redefined to be data-independent; either way, all of them should run every time you run JUnit. Just yesterday I used random data to perform a test. Here is an example:

    @Test
    public void testIntGetter() {
        int value = random.nextInt();
        MyObj obj = new MyObj(value);
        assertEquals(value, obj.getMyInt());
    }

While this is an overly simple example, it does test the functionality of the class while being data-independent. Once you decide to break the test-driven development/JUnit model, then your question...
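If you do decide to drive the data from a checked-in file rather than from a person with a spreadsheet, one possible approach is JUnit 4's Parameterized runner; the file path and format below are hypothetical, and MyObj is the same class as in the example above:

    import static org.junit.Assert.assertEquals;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class MyObjFileDrivenTest {

        @Parameters
        public static Collection<Object[]> data() throws Exception {
            // hypothetical data file: one integer per line
            List<Object[]> rows = new ArrayList<Object[]>();
            for (String line : Files.readAllLines(
                    Paths.get("src/test/resources/int-values.txt"), StandardCharsets.UTF_8)) {
                rows.add(new Object[] { Integer.parseInt(line.trim()) });
            }
            return rows;
        }

        private final int value;

        public MyObjFileDrivenTest(int value) {
            this.value = value;
        }

        @Test
        public void getterReturnsConstructorValue() {
            // same assertion as before, but the data now comes from a checked-in file
            assertEquals(value, new MyObj(value).getMyInt());
        }
    }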

Categories : Java

What tools exist for managing a large suite of test programs?
We rolled our own; we have a whole test infrastructure. It manages the tests and has a number of built-in features that allow the tests to log results, and the logs are fed by the infrastructure into a searchable database for all kinds of report generation. Each test has a class/structure with information about the test: its name, author, and a variety of other tags. When running a test suite you can run everything, or everything with a certain tag; so if you only want to test SRAM you can easily run only the tests tagged sram. Our tests are all considered either pass or fail. The pass/fail criteria are determined by the author of the individual test, but the infrastructure wants to see either pass or fail. You need to define what your possible results are, as simple as...

Categories : Testing

functional testing for vagrant puppet provisioning (or running a CLI test suite)
Maybe serverspec would suit your needs. In particular, with the command resource type, which you can read about here, you should be able to do exactly what you've described. In addition, you can use it to test other resources like processes or open ports without having to deal with ad-hoc command-line scripts.

Categories : Shell

verify invocation n times, depending on whether test run in isolation or as part of suite
With Mockito you can simply use the reset method, which is specifically designed for container-based injection of mocks.
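A minimal sketch of that idea, assuming JUnit 4 and a mock that is shared across tests (the List mock here is just a placeholder for whatever your container injects):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.reset;
    import static org.mockito.Mockito.times;
    import static org.mockito.Mockito.verify;

    import java.util.List;

    import org.junit.After;
    import org.junit.Test;

    public class SharedMockTest {

        // hypothetical mock that outlives a single test, e.g. injected once by a container
        @SuppressWarnings("unchecked")
        private static final List<String> sharedMock = mock(List.class);

        @After
        public void resetSharedMock() {
            // wipe recorded interactions and stubbings so verify(..., times(n))
            // sees the same counts whether a test runs alone or inside a suite
            reset(sharedMock);
        }

        @Test
        public void recordsExactlyOneAdd() {
            sharedMock.add("x");
            verify(sharedMock, times(1)).add("x");
        }
    }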

Categories : Java

Django Unittests Client Login: fails in test suite, but not in Shell
A test user should be created. Something like:

    def setUp(self):
        self.client = Client()
        self.username = 'agconti'
        self.email = 'test@test.com'
        self.password = 'test'
        self.test_user = User.objects.create_user(self.username, self.email, self.password)
        login = self.client.login(username=self.username, password=self.password)
        self.assertEqual(login, True)

In your shell test the user is probably already present in your db, whereas in your unittest there might not be a user...

Categories : Python

Hudson build success, but Watir test suite has errors or failures
Try this approach: configure Hudson so that the console output of the Watir execution is stored/copied to the server. Then, if the text "Failure" occurs in that output, send out a failure email notification using the appropriate Hudson configuration. The above approach was established using Jenkins; hopefully it can also be done in Hudson.

Categories : Misc

how can I get a quick count of the number of scenarios and steps in a large cucumber suite without running every test?
cucumber --dry-run will include the count of scenarios and steps for all features run. For example, given two feature files:

test.feature:

    Feature: 1
      Scenario: 1a
        Given step 1
        Given step 2
      Scenario: 1b
        Given step 1
        Given step 2

test2.feature:

    Feature: 2
      Scenario: 2
        Given step 1

running cucumber --dry-run shows:

    3 scenarios (3 skipped)
    5 steps (5 skipped)

As you can see, the scenario and step counts include all scenarios from all features.

Categories : Ruby

Installing AJDT I got a Plug-in org.eclipse.jdt.ui was unable to load class org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitEditor
I got the same error after upgrading to Windows 8.1. In my case the %workspace%/.metadata/.plugins directory had suddenly become write-protected. To remove the lock I had to rename the directory, create a new directory named .plugins, and finally move the contents from the renamed directory into the new one.

Categories : Eclipse

Error "Unable parse file browser.yml" when running a test with Magento automation test framework
I resolved this by upgrading the Symfony YAML package to the latest version:

    pear channel-discover pear.symfony-project.com
    pear install symfony/YAML

It seems that the "---" syntax is supported as of v1.0.6 (the latest stable at the moment).

Categories : Magento

Why does surefire ignore test categories for Scala specs2 test when run with maven surefire?
specs2 indeed doesn't know about JUnit categories yet. In the meantime, a workaround is to use tags:

    import org.specs2.specification.Tags

    @RunWith(classOf[JUnitRunner])
    class MyTest extends Specification with Tags { section("NightlyTest")

      "My Test" should {
        "succeed" in { done }
      }
    }

Then if you add -Dspecs2.include=NightlyTest as a system argument, only this specification will be executed. However, this means that you cannot use the "groups" configuration from Surefire and you will have to use Maven profiles instead. Another option is to use all of the JUnit/Maven infrastructure and reuse only the specs2 matchers:

    import org.specs2.matcher._

    @RunWith(classOf[JUnit4])
    @Category(Array(classOf[NightlyTestCategory]))
    class MyTest extends JUnitMatchers {
      @Test def test = {
        1 must_== 1
      }
    }

Categories : Scala

TFS 2012 API : Unable to add Test Point to Test run
After an insane amount of experimenting and beating my head against the wall, I've found the answer. For those who are curious, here's how it works: if I create the test run using ITestManagementService.TestRuns.Create(), I can add Test Cases but not Test Points; if I create the test run using ITestPlan.CreateTestRun(isAutomated), I can add Test Points but not Test Cases. I overcomplicated things a lot trying to get this working. I've now cleaned up a lot of the mess and have my application correctly reporting test results to TFS. I'm using a fake build, more or less as described on Jason Prickett's blog. One thing I did find was that I couldn't define the run as an automated run, because I have no test run controllers in my environment and couldn't find a way to move the test run...

Categories : Tfs

Remove WSDL from soapui project without removing test suite [To reduce size of SoapUI project file]
Now this was fun... There are two ways to reduce the size of the soapUI project XML.

1) Uncheck the option that caches WSDLs and associated definitions locally (for offline access and improved performance) under Preferences > WSDL Settings. This setting has no impact on existing projects, only on projects created after the option is changed. Having it unchecked can significantly reduce the size of the soapUI project: two projects created from the same WSDL, one with and one without the option, differed in size by half, and in another instance the size dropped from ~400 KB to ~100 KB after the option was unchecked.

2) Edit the soapUI project XML directly. This is useful when the project already exists and test suites have already been created...

Categories : Web Services

Where are the tests for Scala collections?
This is a good question that has been asked before on SO. There are some collections tests under test/files/scalacheck and others under test/files/run/*coll* in the source repository. There is no conformance test or TCK per se for custom collections. Integration with collections usually involves a specific implementation requirement. For example, the ScalaDoc for immutable.MapLike tells you to implement get, iterator and + and -. In theory, if you test the template methods, you can rely on everything you get for free from the library. But the doc adds: It is also good idea to override methods foreach and size for efficiency. So if you care about that, you'll be adding performance tests too. The standard library doesn't include automated performance testing.

Categories : Scala

How to import .scala file into eclipse?
You have to create a Scala project first with the Scala plugin, specifying the directory of the .scala file as existing source. See this guide.

Categories : Eclipse


