Improving the quality of Jami


  • Writing unit tests for the Jami project is difficult because of race conditions across its multi-level dependencies.

  • There are about 30 unit tests and 26% coverage. Because of the high demand to deliver new functionality to users quickly, the tests are not maintained by the developers or by a QA dept.

  • We use lcov for coverage; you can find the lcov configuration in the daemon’s . Also, the coverage report can be found at

  • A process needs to be put in place to convince the team to write a unit test for new code before merging.

  • You can run them with ‘make check’ in the daemon folder, or individually in the unit-test folder with gdb, e.g. ‘gdb ut_media_encoder’.

  • The build must be configured with ‘--disable-shared’ when running the ‘./configure’ command.
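Putting the bullets above together, a typical local workflow for building and running the unit tests might look like the following sketch. The exact paths and test-binary names are assumptions and may differ in your checkout of the daemon:

```shell
# In the daemon folder: configure a static build, as required by the unit tests
./configure --disable-shared
make -j4

# Run the whole unit-test suite
make check

# Or debug a single test binary under gdb
# (directory and binary name are illustrative)
cd test/unitTest
gdb ./ut_media_encoder
```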

Framework Tests

  • You can find the framework tests in the daemon’s and launch them with ‘make integration’. This calls a script in the tools/dringctrl folder. It uses and , which let you control Jami through bash.

  • This makes a series of calls to ensure that Jami’s OpenDHT network is stable.

  • Other framework tests need to be implemented in the future to test Jami’s functionality as a whole.

Integration tests

  • Each commit goes through integration tests in Docker containers on the build machines; you can find the details at:

  • Code review is done by a fellow developer. Sometimes the code is reviewed by its own author; this should be avoided, in keeping with Linus’s law. The ‘Jenkins verified’ label is sometimes discarded and replaced by a +1 from a developer; this should also be avoided.

  • SonarQube lets Jenkins build Jami and verify linting. You can find the filters and results at: sonar- SonarQube uses clang-tidy as its linter; you can find the clang-tidy checks in the .clang-tidy file in the daemon folder.

  • On sflvault, the SonarQube instance can be found at service m#2637 and the admin logins at service s#7169.
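For reference, a .clang-tidy file is a YAML configuration. The example below is hypothetical and only illustrates the kind of check filters such a file holds; the actual checks in the daemon’s .clang-tidy may differ:

```yaml
# Hypothetical .clang-tidy example; the daemon's real file defines its own checks.
Checks: 'clang-analyzer-*,bugprone-*,performance-*,-modernize-use-trailing-return-type'
WarningsAsErrors: ''
HeaderFilterRegex: '.*'
FormatStyle: file
```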

Doc and feedback:

  • You can find all the documentation on

  • Issues are filed by developers or users on


  • A script runs every 30 minutes on a virtual machine, jami-monitorpeervm-01. You can find it on sflvault at service s#7209; it calls another virtual client, jami-monitorpeer-02 (service s#7224). A series of calls is made and the failure rate is returned. You can find all the details at

  • If needed, the manual command is ./ --peer 031acbb73f2a3385b2babc7161f13325be103431

  • It plots a real-time, point-by-point graph on

Smoke tests

Before each release, every client MUST pass a list of scenarios.

Scenarios are described here: Jami smoke tests

They are reviewed by the QA dept. before being sent to the developers if needed.

If a release contains a merged network commit, the QA dept. should be able to automate the different connectivity tests (as described below in Call configurations).

Call configurations

This is the list of network configurations that need to be tested:

(IPv4 | IPv6) + (TURN | !TURN) + (STUN | !STUN) + (UPnP | !UPnP) for both sides.

If both sides are IPv4-only without TURN/STUN/UPnP, the call should only be possible locally.
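The combinatorics above can be enumerated mechanically. The sketch below is a hypothetical helper, not part of the Jami tooling: it generates the 16 per-side configurations (IPv4 vs IPv6, each of TURN/STUN/UPnP on or off), pairs them up, and flags the single local-only case:

```python
from itertools import product

# Each side of a call is one combination of these four independent options.
OPTIONS = ["IPv6", "TURN", "STUN", "UPnP"]  # absence of IPv6 means IPv4-only

def side_configs():
    """All 16 configurations for one side, as a frozenset of enabled options."""
    return [frozenset(o for o, on in zip(OPTIONS, flags) if on)
            for flags in product([False, True], repeat=len(OPTIONS))]

def local_only(a, b):
    """Both sides IPv4-only with no TURN/STUN/UPnP: the call should stay local."""
    return not a and not b   # empty set = IPv4 only, nothing enabled

configs = side_configs()
pairs = [(a, b) for a in configs for b in configs]
print(len(configs), len(pairs))                  # 16 per-side configs, 256 pairs
print(sum(local_only(a, b) for a, b in pairs))   # exactly 1 local-only pair
```

A test plan covering every pair needs 256 scenarios; automating this matrix is what the QA dept. would need for the connectivity tests mentioned above.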

Special note: FDroid

The script to generate the MR is in the client-android repo (

What needs to be done

  • Push coverage closer to 60%

  • Establish a system within the team to assure maintenance and creation of unit-tests.

  • Each major functionality should be tested as a whole by adding a framework test (i.e. making sure a message was received, the call was ended cleanly on both sides, etc.)

  • Each new functionality should be tested on each platform before merging, to reduce regressions

  • Integrate SonarQube into each client

  • Automate the testing of Jami’s behavior on network compatibility

  • Make the script adaptable to Windows as well