Hello all again! I am happy to announce that the project is fully implemented and has reached the review and testing stage. In the following paragraphs I will go through the parts of the system and, at the end, describe how it is meant to be used.
Introduction to the project
The general purpose of the project is to get early feedback on every newly submitted piece of code for Joomla extensions. This is achieved by creating a fully working environment that runs Joomla tests in parallel (container-based), increasing both speed and the coverage of PHP and Joomla versions. The key requirement here is speed: it is crucial to get early feedback on new PRs. To achieve this, tests that do not depend on each other are run in different containers simultaneously.
The scope of the project is to integrate the existing tests of the Joomla! Weblinks extension into the new testing environment. Therefore, pre-installed Joomla containers need to be used so that the tests can run in a short time.
The expected results are to automate the testing environment from its creation (running containers with Joomla/PHP combinations and Selenium test containers), through parallel test execution (coordinating test runs in parallel while taking their dependencies into account), to reporting (storing logs and screenshots for each failed test).
The virtualisation repository has been updated and now supports a Memcached container. Virtualisation works by having four default XML configuration files, each of them defining the configuration for the database, the Joomla servers, Memcached, the Selenium containers and, last but not least, the network they all run on.
In order to be easily used in other projects (such as this one), an API has been defined for the virtualisation project. The API expects an environment configuration which is used to extend and override the configuration in the XML files defined above.
This would be a typical environment configuration:
```php
$env = array(
	'php'            => ['5.4', '5.5', '5.6', '7.0', '7.1'],
	'joomla'         => ['3.6'],
	'selenium.no'    => 3,
	'extension.path' => $tmpDir . '/extension',
	'host.dockyard'  => '.tmp/dockyard',
);
```
In this way, all the containers needed for parallel testing are created.
The selection list's main purpose is to load and maintain the order of the tests that need to be run for an extension. It makes sure that it serves only tests ready for execution, ensuring their dependencies have already succeeded, or marking the task as failed otherwise.
In order to maintain each test's status, four flags are used:

```php
final class Flag
{
	const NO_FLAG  = 0;
	const ASSIGNED = 1;
	const EXECUTED = 2;
	const FAILED   = 3;
}
```
Now, before the selection list is able to load the tests, the extension should have a tests.yml file defined for its acceptance tests. The file should contain all the tests that need to be run, with indentation representing the dependencies between tests. If a test has two disjunct dependencies, it needs to be written twice, indented under each of the required dependencies. For Weblinks there is no such case, but the solution supports it nevertheless. Below is an example of a shortened tests.yml file for Weblinks:
```yaml
install/InstallWeblinksCest.php:installWeblinks:
  administrator/AdminCategoriesCest.php:adminCreateCategoryWithoutTitleFails:
  administrator/AdminCategoriesCest.php:adminCreateCategory:
    administrator/AdminCategoriesCest.php:adminPublishCategory:
    administrator/AdminCategoriesCest.php:adminUnpublishCategory:
    administrator/AdminCategoriesCest.php:adminVerifyAvailableTabs:
  administrator/AdminWeblinksCest.php:adminCreateWeblink:
    administrator/AdminWeblinksCest.php:adminTrashWeblink:
```
A recursive read function has been defined to read the yml file and store the test data in memory. The data is deliberately not normalised, for the ease of later operations: tasks are stored both in a simple list (to ease checking their dependencies) and in a map keyed by their flag (to ease managing the selection process). This gives a large improvement in lookup complexity at the cost of only a constant factor in memory (roughly ×2).
After the read is done, the next important responsibility of the selection list is to "pop()" tests ready for execution. If there is a task that has no flag and all of whose dependencies have been executed, it is returned; otherwise, "false" is returned.
Because all the behaviour of this project (test runs) is asynchronous, an isFinished method has been defined. It tells us when to write into the logs the overall test results for each selection list, which ultimately represents a server.
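To make the pop() and isFinished behaviour concrete, here is a minimal sketch in Python (the real implementation is PHP; the class and method names below are hypothetical illustrations, not the project's actual code):

```python
# Flag values mirroring the project's Flag class
NO_FLAG, ASSIGNED, EXECUTED, FAILED = 0, 1, 2, 3


class SelectionList:
    def __init__(self, deps):
        # deps: test name -> list of tests it directly depends on
        self.deps = deps
        self.flags = {test: NO_FLAG for test in deps}

    def pop(self):
        """Return one unflagged test whose dependencies all executed, else False."""
        for test, parents in self.deps.items():
            if self.flags[test] != NO_FLAG:
                continue
            if any(self.flags[p] == FAILED for p in parents):
                # a dependency failed, so this test can never run
                self.flags[test] = FAILED
                continue
            if all(self.flags[p] == EXECUTED for p in parents):
                self.flags[test] = ASSIGNED
                return test
        return False

    def is_finished(self):
        """True once every test has either executed or failed."""
        return all(f in (EXECUTED, FAILED) for f in self.flags.values())
```

Note how a test whose dependency failed is marked failed immediately instead of being returned, matching the behaviour described above.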
The main coordinator class has been named MCS, short for MainCoordinatorStatic, because all of its methods are static. Tests need to run asynchronously; therefore, in order to keep the information required for their execution, such as the selection lists, a cache (Memcached) is used as persistent storage. MCS queries the cache every time it needs to act on the selection lists and writes the information back afterwards. To avoid concurrency issues, a simple locking system has been implemented.
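A lock like this is typically built on the cache's atomic "add" operation, which succeeds only if the key does not already exist. The Python sketch below illustrates the pattern under that assumption; an in-memory stub stands in for Memcached so the example is self-contained, and all names are hypothetical:

```python
import time


class FakeCache:
    """Stand-in for a Memcached client, keeping only the add/delete semantics."""
    def __init__(self):
        self._store = {}

    def add(self, key, value):
        # atomic "set only if absent" - the basis of a cache lock
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)


def with_lock(cache, key, action, retries=50, wait=0.01):
    """Spin until the lock is acquired, run the action, then release the lock."""
    for _ in range(retries):
        if cache.add(key + '.lock', 1):
            try:
                return action()
            finally:
                cache.delete(key + '.lock')  # always release, even on error
        time.sleep(wait)
    raise TimeoutError('could not acquire lock for ' + key)
```

Because every coordinator invocation goes through with_lock before touching the selection lists, two asynchronous task completions cannot overwrite each other's state.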
The first responsibility of MCS is to prepare the data required for test execution. First it loads the selection lists, then it creates the runQueue and the manageQueue. As discussed in the initial proposal, the runQueue stores the tasks that are ready for execution but is at the same time limited to the number of clients. The manageQueue is used to keep test coverage balanced across the servers, thereby ensuring maximum efficiency. Furthermore, Codeception configuration files are created for each server in order to reuse the same Weblinks clone and to store failed screenshots and logs separately.
With the use of the virtualisation API, the testing environment is created and started. Before any tests run, MCS waits for the database initialisation to finish.
Lastly, MCS is in charge of filling the execution queue and running the available tasks. The execution queue is filled until its maximum capacity is reached, with priority given to tests from the server that last executed one. Tasks are run for as long as clients (Selenium containers) are available and tasks remain in the execution queue.
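The filling step described above can be sketched as follows. This is an illustrative Python reduction, not the project's PHP code: each server's selection list is simplified to a plain list of currently ready tests, and the priority rule is modelled by asking the last-executing server first:

```python
def fill_run_queue(run_queue, ready, capacity, last_server=None):
    """Fill run_queue up to capacity from per-server lists of ready tests.

    ready: dict mapping server name -> list of tests ready to run.
    The server that last executed a test is asked first (hypothetical sketch).
    """
    order = sorted(ready)
    if last_server in order:
        order.remove(last_server)
        order.insert(0, last_server)  # priority for the last-executing server

    progress = True
    while len(run_queue) < capacity and progress:
        progress = False
        for server in order:
            if len(run_queue) >= capacity:
                break
            if ready[server]:
                run_queue.append((server, ready[server].pop(0)))
                progress = True
    return run_queue
```

The loop stops either when the queue is full or when no server can contribute another ready test, so the queue never exceeds the number of available clients.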
In order to make the tests asynchronous, a Robo task had to be created. The execution of a task is basically a "docker exec" command which runs a Codeception test inside a client container. The result is then verified: on success, the test is marked with the "executed" flag and the whole fill-and-run flow is reloaded, all asynchronously, without knowing the status of the other clients or tests at that moment. On failure, an additional log is stored with the full Codeception output.
The design of the project implies that it will be run by Travis or another CI tool; therefore a Robo task is best suited to the job. It expects the repository owner, name and branch as arguments and is used as follows:
```shell
vendor/bin/robo run:coordinator isacandrei weblinks container-test
```
In this version, the environment configuration has to be defined inside the implementation of the Robo task, but later it shall be defined externally in order to accommodate different testing needs.
For me, this project was a challenge, but that is why I chose it in the first place. The one thing that caught my attention was Docker. I am a fan of containerisation and I think it is one of the best ways to automate the deployment of a system. Another thing I liked about the project was that it required conceiving an algorithm that runs tasks in parallel.
So, faced with a hard project, I got to work. The first month was difficult, but once I understood the deeper concepts behind the project, designing and writing the actual code came more easily.
I believe this project helped me become a better software engineer and gave me better insight into the open source world.