The data used comes from the TGCC jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived and will be made available soon. Congratulations to the winners and thanks to all the participants!
The winners
Parity game realizability
- Realizability: Knor
LTL realizability
- Realizability: ltlsynt
LTL synthesis
- Synthesis time: ltlsynt
- Synthesis quality: SemML
LTLf realizability
- Realizability: ltlfsynt
Parity game Realizability

| Solver | Solved | Wallclock time | CPU time | Memory |
|---|---|---|---|---|
| knor_npp | 429 | 6126 | 6063 | 63642 |
| knor_tl | 429 | 505 | 466 | 65872 |
| knor_tlq (DISQUALIFIED) | 429 | 6362 | 6298 | 67459 |
| ltlsynt | 359 | 55625 | 55375 | 12733 |
LTL Realizability

| Solver | Solved | Wallclock time | CPU time | Memory |
|---|---|---|---|---|
| sdf | 897 | 54965.88 | 77963.8 | 130538 |
| semml_real | 1318 | 130507.16 | 169790.13 | 4196118 |
| ltlsynt-real-acd | 1328 | 71128.38 | 70476.148 | 551247 |
| ltlsynt-real-lar | 1330 | 96028.58 | 95261.66 | 731324 |
| ltlsynt-real-ds | 1225 | 148534 | 147510.032 | 796639 |
| acacia (DISQUALIFIED) | 1156 | 143432.74 | 405899.895 | 924200 |
LTL Synthesis

| Solver | Solved | Wallclock time | CPU time | Memory | Mean quality score |
|---|---|---|---|---|---|
| ltlsynt-acd | 1297 | 99166.52 | 98194.135 | 570252 | 1.46065726224849 |
| ltlsynt-lar | 1296 | 122680.21 | 121552.643 | 659468 | 1.44114796838032 |
| ltlsynt-ds | 1187 | 151797.75 | 150610.886 | 667925 | 1.58878496479921 |
| semml_small | 1293 | 153231.04 | 204678.48 | 4410877 | 1.91733469356956 |
| semml_fast | 1295 | 162855.38 | 217312.52 | 4252316 | 1.43764597172091 |
| sdf | 877 | 68625.23 | 94958.965 | 140973 | 1.33605459519285 |
| sdf-abc | 873 | 62592.85 | 90679.445 | 147208 | 1.58398914443439 |
LTLf Realizability

| Solver | Solved | Wallclock time | CPU time | Memory |
|---|---|---|---|---|
| ltlfsynt-bfs-os | 402 | 21777.78 | 21630.707 | 121595 |
| ltlfsynt-bfs | 392 | 21677.56 | 21528.201 | 124140 |
| ltlfsynt-dfs-os | 401 | 18443.58 | 18312.239 | 105114 |
| Cosy-m1 (DISQUALIFIED) | 357 | 20287.27 | 20166.786 | 264615 |
| Cosy-m2 (DISQUALIFIED) | 356 | 11322.28 | 11181.704 | 156241 |
| LydiaSynt | 308 | 23019.55 | 22842.952 | 83175 |
Benchmarks and Solvers
The set of benchmarks used in 2025 can be fetched from the usual repository.
Below, you will find information regarding the participating solvers, their source code, license type, and links to their most recent reports or preferred publications to be cited in case you use the tool in your academic work.
| Solver | Source repo | License | Report(s) to cite |
|---|---|---|---|
| ltl(f)synt | GitLab repo | GPL-3.0+ | Paper on ltlsynt, CIAA paper for ltlfsynt |
| Acacia bonsai | GitHub repo | GPL-3.0 | Paper |
| Knor | GitHub repo | GPL-3.0 | Paper |
| sdf | GitHub repo | MIT | arXiv report |
| LydiaSyft | GitHub repo | AGPL v3.0 | Paper |
| SemML | GitLab repo | GPL v3.0 | Paper |
| Cosy | TBA | TBA | Contact author |
The results will be presented at CAV 2025 and SYNT 2025.
This year, the competition will run with a new setup!
From 2025 onward, the competition will be run on the French Très Grand Centre de Calcul
du CEA (TGCC), https://www-hpc.cea.fr/tgcc-public/en/html/tgcc-public.html. Submitting a solver is now simpler: you create a Docker image with your solver and communicate the repository address of the image to the organizers.
In SYNTCOMP 2025, we will run the following tracks (based on the same rules and with the
same specification formats as in previous years):
– realizability and synthesis for LTL specifications in TLSF (see the sketch after this list)
– realizability for parity games in extended HOA and
– realizability for LTLf specifications in an extended version of the TLSF format.
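As a concrete illustration of the LTL realizability question (does some controller satisfy the specification against every environment behavior?), here is a minimal Python sketch that shells out to ltlsynt, one of the participating tools. The flags shown (--formula, --ins, --outs, --realizability) are our reading of recent Spot releases; check `ltlsynt --help` for the version you have installed.

```python
import subprocess

def is_realizable(formula: str, ins: str, outs: str) -> bool:
    """Ask ltlsynt whether `formula` is realizable when the environment
    controls `ins` and the controller controls `outs` (comma-separated
    atomic propositions). Assumes `ltlsynt` is on PATH."""
    result = subprocess.run(
        ["ltlsynt", f"--formula={formula}", f"--ins={ins}",
         f"--outs={outs}", "--realizability"],
        capture_output=True, text=True,
    )
    # ltlsynt prints REALIZABLE or UNREALIZABLE on stdout; test for the
    # longer word, since "REALIZABLE" is a substring of "UNREALIZABLE".
    return "UNREALIZABLE" not in result.stdout

# Every request is eventually granted: realizable.
print(is_realizable("G(req -> F grant)", "req", "grant"))
# The controller would have to predict a future input: unrealizable.
print(is_realizable("G(grant <-> X req)", "req", "grant"))
```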
Tools will be evaluated with respect to the number of solved benchmarks, and (for
synthesis) with respect to the size of their solutions.
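To illustrate how a size-based quality ranking can work, here is a hedged sketch: each solved benchmark earns points depending on how the synthesized controller's size compares to a reference solution. The concrete formula and bounds below are illustrative assumptions, not the official SYNTCOMP definition (see the Rules page for that).

```python
import math

def quality_points(solution_size: int, reference_size: int) -> float:
    """Illustrative per-benchmark quality score: 2 points for matching
    the reference size, more for smaller solutions, fewer for larger
    ones. NOT the official SYNTCOMP formula."""
    ratio = (solution_size + 1) / (reference_size + 1)  # +1 guards size 0
    return max(0.0, 2.0 - math.log10(ratio))

print(quality_points(9, 99))    # ten times smaller than the reference -> 3.0
print(quality_points(999, 99))  # ten times larger -> 1.0
```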
Call for Benchmarks
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2025 if they are correctly submitted to our benchmark repository https://github.com/SYNTCOMP/benchmarks by June 1, 2025. Please create a new issue, attach an archive with your new benchmarks under a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check that everything is OK and then add it to the repository. Alternatively, you can create a pull request with your benchmarks.
Call for Solvers
Solvers for all tracks of the competition will be accepted until June 1. Your solver will
be included in SYNTCOMP 2025 if:
– the address of a working Docker image is communicated to us by June 1, 2025, and
– a technical description of the solver is submitted with it (a PDF of 1-2 pages in LNCS format, please).
Please use this docker image as the basis for your submission and contact us as soon as possible so that we can start testing it on the cluster. This is especially important if you prefer that we compile your tool on the server!
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ on https://www.syntcomp.org/. If you have any further questions or comments, please write to jacobs@cispa.de, guillermo.perez@uantwerpen.be, and philipp.schlehuber-caissier@telecom-sudparis.eu.
The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!
The winners
Parity game realizability
- Sequential realizability: Knor
- Parallel realizability: Knor
LTL realizability
- Sequential realizability: SemML
- Parallel realizability: SemML
LTLf realizability
- Sequential realizability: Nike
- Parallel realizability: Nike
Parity game Realizability


| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| Knor | real_rtl | 458/458 | 421.28 | 421.96 |
| ltlsynt | pgreal | 378/378 | 10441.48 | 10442.41 |
LTL Realizability


| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| SemML | semml_real | 972/967 | 32740.56 | 24985.99 |
| Acacia bonsai | real | 768/788 | 31902.39 | 35682.57 |
| A.bonsai (k-d trees) | real | 734/750 | 27733.07 | 30132.75 |
| ltlsynt | segrealacd | 944/936 | 27596.39 | 27286.30 |
| ltlsynt | segrealds | 899/891 | 27256.03 | 26623.95 |
| ltlsynt | segreallar | 941/933 | 28680.57 | 28347.82 |
| sdf | real | 840/838 | 30893.46 | 31067.37 |
| sdf | real_vg_hors_concours | 839/838 | 29815.81 | 31876.83 |
LTLf Realizability


| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| LydiaSyft | default.sh | 220/220 | 6882.87 | 6887.78 |
| LydiaSyft | print_strategy_in_file.sh | 220/220 | 6888.98 | 6893.82 |
| Nike | bdd_ff | 230/230 | 3972.70 | 3943.48 |
| Nike | bdd_random | 181/181 | 18472.69 | 18464.63 |
| Nike | bdd_tt | 225/225 | 5319.70 | 5283.62 |
| Nike | default | 234/234 | 5326.30 | 5326.69 |
| Nike | hash_random | 137/137 | 12187.83 | 12167.53 |
| Nike | hash_tt | 231/231 | 6464.69 | 6462.08 |
| lisa | seqreal1.sh | 209/209 | 5649.48 | 5645.42 |
| lisa | seqreal2.sh | 192/192 | 8297.10 | 8295.19 |
| lisa | seqreal3.sh | 190/190 | 7642.33 | 7646.71 |
| ltlfsynt | segltlfrealbase | 77/77 | 22988.60 | 22990.84 |
| ltlfsynt | segltlfrealopt | 77/77 | 22951.00 | 22952.93 |
| ltlfsynt | segltlfsyntbase | 78/78 | 24793.56 | 24795.99 |
| ltlfsynt | segltlfsyntopt | 77/77 | 23028.29 | 23030.55 |
| tople | default | 131/131 | 9375.09 | 9339.21 |
Benchmarks and Solvers
The set of benchmarks used in 2024 can be fetched from the usual repository.
Below, you will find information regarding the participating solvers, their source code, license type, and links to their most recent reports or preferred publications to be cited in case you use the tool in your academic work.
| Solver | Source repo | License | Report to cite |
|---|---|---|---|
| ltlsynt | GitLab repo | GPL-3.0+ | Paper |
| Acacia bonsai | GitHub repo | GPL-3.0 | Paper |
| Knor | GitHub repo | GPL-3.0 | Paper |
| sdf | GitHub repo | MIT | arXiv report |
| Nike | GitHub repo | AGPL v3.0 | arXiv report |
| LydiaSyft | GitHub repo | AGPL v3.0 | report |
| Lisa | GitHub repo | GPL v3.0 | arXiv report |
| tople | | Custom: all rights reserved | Contact authors |
| SemML | GitLab repo | GPL v3.0 | Contact author |
New this year: The LTLf specification semantics have been updated so that the controller is expected to dictate the end of a trace. Please contact the organizers if this is not clear.
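To make the finite-trace semantics concrete, here is a tiny LTLf evaluator sketch (our own illustration, unrelated to any participating tool). A trace is a finite list of sets of atomic propositions, ending at the step where the controller declares the trace over; note that X below is the strong next operator, which is false at the last position of a finite trace.

```python
def holds(f, trace, i=0):
    """Evaluate an LTLf formula `f` at position `i` of a finite `trace`
    (a list of sets of atomic propositions). Formulas are nested tuples
    such as ("F", ("ap", "grant"))."""
    op = f[0]
    if op == "ap":
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "->":
        return (not holds(f[1], trace, i)) or holds(f[2], trace, i)
    if op == "X":  # strong next: false at the last position
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

# A controller that grants the pending request and then ends the trace
# satisfies G(req -> F grant) on this finite run.
trace = [{"req"}, {"req", "grant"}]
print(holds(("G", ("->", ("ap", "req"), ("F", ("ap", "grant")))), trace))
```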
Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2024 if they are correctly submitted to our benchmark repository by June 1, 2024. Please create a new issue, attach an archive with your new benchmarks under a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check that everything is OK and then add it to the repository. Alternatively, you can create a pull request with your benchmarks. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2024 if it is correctly uploaded into StarExec by June 1, 2024.
* Communication *
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland, guillermo.perez@uantwerpen.be, or philipp@lrde.epita.fr.
All graphs and tables were obtained as described here. The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!
The winners
Parity games
- Sequential realizability: Knor
- Parallel realizability: Knor
- Sequential synthesis: ltlsynt
- Parallel synthesis: Knor
- Sequential synthesis (quality): Strix
- Parallel synthesis (quality): Strix
LTL
- Sequential realizability: Strix
- Parallel realizability: Strix
- Sequential synthesis: Strix
- Parallel synthesis: Strix
- Sequential synthesis (quality): Strix
- Parallel synthesis (quality): Strix
LTLf
- Sequential realizability: Nike
- Parallel realizability: Nike
Parity game Realizability
| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| Knor repair | real_tl | 876/876 | 1035.45 | 648.11 |
| Strix PGAME | hoa_realizability_parallel | 800/835 | 61377.20 | 65541.26 |
| Strix PGAME | hoa_realizability_sequential | 802/802 | 50668.25 | 49536.76 |
| ltlsynt | pgreal | 733/733 | 39810.50 | 39811.10 |
Parity game Synthesis
| Solver | Config | Solved (seq/par) | CPU time (s) | Wallclock (s) | Score (seq/par) |
|---|---|---|---|---|---|
| Knor repair | synt_sym | 282/282 | 3257.02 | 3257.45 | 439.37/439.37 |
| Knor repair | synt_sym_abc | 283/283 | 3846.52 | 3568.78 | 489.49/489.49 |
| Knor repair | synt_tl | 272/272 | 9.66 | 9.25 | 289.48/289.48 |
| Strix PGAME | hoa_synthesis_parallel | 285/285 | 4596.27 | 4359.80 | 554.48/554.48 |
| Strix PGAME | hoa_synthesis_sequential | 284/284 | 4562.27 | 4350.23 | 552.65/552.65 |
| ltlsynt | pgsynt | 292/292 | 1687.63 | 1672.48 | 464.30/464.30 |
| ltlsynt | pgsyntabc1 | 294/294 | 1718.05 | 1720.58 | 504.33/504.33 |
LTL Realizability
| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| SPORE | ltl-real-recpar-bdd-seq | 479/479 | 30222.87 | 30201.10 |
| SPORE | ltl-real-recpar-fbdd-32-nodecomp-seq | 666/666 | 35025.32 | 34928.26 |
| SPORE | ltl-real-recpar-fbdd-32-seq | 588/588 | 31316.82 | 31223.80 |
| SPORE | ltl-real-recpar-fbdd-64-nodecomp-seq | 660/660 | 30986.12 | 30956.60 |
| SPORE | ltl-real-recpar-fbdd-64-seq | 600/600 | 33176.62 | 33168.87 |
| Strix | ltl_real_acd_bfs | 914/914 | 31507.38 | 31516.81 |
| Strix | ltl_real_zlk_bfs | 921/921 | 34268.65 | 34271.52 |
| Strix | ltl_real_zlk_pq | 948/948 | 35447.26 | 35409.72 |
| abonsai | real | 703/716 | 26947.08 | 26005.19 |
| ltlsynt23 | segrealacd | 845/845 | 39379.60 | 39370.62 |
| ltlsynt23 | segrealds | 809/809 | 32401.23 | 32353.34 |
| ltlsynt23 | segreallar | 845/845 | 35477.35 | 35460.15 |
| sdf | real | 782/786 | 23287.54 | 24888.38 |
| sdf | real_p | 709/723 | 36894.35 | 39242.12 |
LTL Synthesis
| Solver | Config | Solved (seq/par) | CPU time (s) | Wallclock (s) | Score (seq/par) |
|---|---|---|---|---|---|
| Otus | otus-ltl-synthesis-parallel-sylvan | 330/332 | 7791.82 | 3340.19 | 264.98/266.96 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 327/327 | 2248.72 | 2235.18 | 265.37/265.37 |
| Strix | ltl_synth_acd_bfs | 472/472 | 11715.48 | 11701.96 | 863.33/863.33 |
| Strix | ltl_synth_zlk_bfs | 490/490 | 15849.71 | 15805.41 | 895.74/895.74 |
| Strix | ltl_synth_zlk_pq | 488/488 | 12977.18 | 12917.15 | 888.37/888.37 |
| abonsai | synt | 382/385 | 11886.38 | 7609.35 | 573.52/578.05 |
| ltlsynt23 | segsyntacdabc | 416/416 | 9392.32 | 9395.10 | 573.13/573.13 |
| ltlsynt23 | segsyntdsabc | 412/412 | 5319.05 | 5313.80 | 614.11/614.11 |
| ltlsynt23 | segsyntlarabc | 413/413 | 9472.05 | 9456.58 | 559.24/559.24 |
| sdf | synt | 415/417 | 4007.84 | 5611.84 | 493.06/496.44 |
| sdf | synt_p | 381/383 | 12838.66 | 11905.57 | 431.96/435.34 |
LTLf Realizability
| Solver | Configuration | Solved (seq/par) | CPU time (s) | Wallclock time (s) |
|---|---|---|---|---|
| LydiaSyft | default.sh | 255/255 | 3885.47 | 3890.82 |
| LydiaSyft | print_strategy_in_file.sh | 255/255 | 3964.93 | 3969.52 |
| Nike | bdd_ff | 264/264 | 4621.82 | 4617.48 |
| Nike | bdd_random | 235/235 | 40752.96 | 40766.24 |
| Nike | bdd_tt | 260/260 | 5832.94 | 5820.82 |
| Nike | default | 269/269 | 6383.23 | 6364.09 |
| Nike | hash_random | 177/177 | 30639.28 | 30647.51 |
| Nike | hash_tt | 266/266 | 9680.75 | 9678.20 |
| lisa-syntcomp | seqreal1.sh | 225/225 | 7078.87 | 7081.29 |
| lisa-syntcomp | seqreal2.sh | 206/206 | 9674.94 | 9664.04 |
| lisa-syntcomp | seqreal3.sh | 205/205 | 9147.83 | 9152.23 |
| ltlfsynt23prebuild | segltlfrealbase | 85/85 | 18451.05 | 18453.08 |
| ltlfsynt23prebuild | segltlfrealopt | 85/85 | 18422.37 | 18415.12 |
| ltlfsynt23prebuild | segltlfsyntbase | 85/85 | 18555.46 | 18540.76 |
| ltlfsynt23prebuild | segltlfsyntopt | 85/85 | 18503.75 | 18505.74 |
| tople | default | 147/147 | 15514.12 | 15523.45 |
Benchmarks and Solvers
The set of benchmarks used in 2023 can be fetched from the usual repository.
Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.
| Solver | Source repo | License | Report to cite |
|---|---|---|---|
| ltlsynt | GitLab repo | GPL-3.0+ | |
| Acacia bonsai | GitHub repo | GPL-3.0 | Paper |
| Knor | GitHub repo | GPL-3.0 | arXiv report |
| sdf | GitHub repo | MIT | arXiv report |
| Nike | GitHub repo | AGPL v3.0 | arXiv report |
| LydiaSyft | GitHub repo | AGPL v3.0 | report |
| Lisa | GitHub repo | GPL v3.0 | arXiv report |
| tople | | Custom: all rights reserved | Contact authors |
New this year: LTLf specifications in an extended version of the TLSF format! If you are interested, please contact us for more details on the exact semantics and the format we will be using for this track.
Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2023 if they are correctly submitted to our benchmark repository by June 1, 2023. Please create a new issue, attach an archive with your new benchmarks under a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check that everything is OK and then add it to the repository. Alternatively, you can create a pull request with your benchmarks. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2023 if it is correctly uploaded into StarExec by June 1, 2023.
* Communication *
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland, guillermo.perez@uantwerpen.be, or philipp@lrde.epita.fr.
Rankings
We first present the results for the parity-game tracks and then move to the LTL tracks.
Parity games: realizability
To summarize, the ranking of the participating tools is as follows.
- Knor
- Strix
- ltlsynt
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | hoa_realizability_parallel | 548 | 308532 | 124160 |
| Strix | hoa_realizability_sequential | 515 | 89584 | 89232 |
| ltlsynt | pgreal | 441 | 69416 | 69421 |
| Knor | real_ltl | 587 | 1722 | 1039 |


Parity games: synthesis
To summarize, the time-based ranking of the participating tools is the same as for the realizability sub-track. More details regarding each solver configuration are given in the table below and the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | hoa_synthesis_parallel | 470 | 159456 | 73077 |
| Strix | hoa_synthesis_sequential | 451 | 48321 | 47884 |
| ltlsynt | pgsynt | 334 | 69254 | 69250 |
| ltlsynt | pgsyntabc1 | 335 | 69386 | 69440 |
| Knor | synt_sym | 278 | 4839 | 4839 |
| Knor | synt_sym_abc | 278 | 5706 | 5401 |
| Knor | synt_tl | 480 | 1695 | 994 |


The quality-based ranking of the participating tools is as follows.
- Strix
- ltlsynt
- Knor
| Solver | Configuration | Synthesized controllers | Score |
|---|---|---|---|
| Strix | hoa_synthesis_parallel | 199 | 385 |
| Strix | hoa_synthesis_sequential | 199 | 384 |
| ltlsynt | pgsynt | 204 | 321 |
| ltlsynt | pgsyntabc1 | 205 | 358 |
| Knor | synt_sym | 197 | 292 |
| Knor | synt_sym_abc | 197 | 340 |
| Knor | synt_tl | 201 | 268 |
LTL: realizability
To summarize, the ranking of the participating tools is as follows.
- Strix
- ltlsynt
- Acacia bonsai
- sdf
- SPORE
- Otus
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Acacia bonsai (_acab_sb) | best | 826 | 211030 | 86632 |
| SPORE | ltl-real-recpar-bdd-seq | 519 | 67800 | 67830 |
| SPORE | ltl-real-recpar-fbdd-32-nodecomp-seq | 715 | 71950 | 71960 |
| SPORE | ltl-real-recpar-fbdd-32-seq | 646 | 69159 | 69183 |
| SPORE | ltl-real-recpar-fbdd-64-nodecomp-seq | 715 | 72367 | 72309 |
| SPORE | ltl-real-recpar-fbdd-64-seq | 646 | 68257 | 68305 |
| Strix | ltl_real_acd_bfs | 892 | 43963 | 43974 |
| Strix | ltl_real_zlk_bfs | 893 | 39510 | 39518 |
| Strix | ltl_real_zlk_pq | 924 | 46340 | 46294 |
| Otus | otus-ltl-realizability-parallel-sylvan | 588 | 227382 | 57363 |
| Otus | otus-ltl-realizability-sequential-jbdd | 558 | 32009 | 32011 |
| sdf | real | 783 | 50466 | 40399 |
| sdf | real_p | 734 | 129252 | 72772 |
| ltlsynt | segrealacd | 831 | 67066 | 67064 |
| ltlsynt | segrealds | 800 | 55803 | 55810 |
| ltlsynt | segreallar | 828 | 57983 | 57956 |


LTL: synthesis
To summarize, the time-based ranking of the participating tools is as follows.
- Strix
- sdf
- ltlsynt
- Otus
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | ltl_synth_acd_bfs | 820 | 30218 | 30195 |
| Strix | ltl_synth_zlk_bfs | 818 | 23009 | 23012 |
| Strix | ltl_synth_zlk_pq | 845 | 29458 | 29380 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 503 | 125103 | 31644 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 490 | 27739 | 27705 |
| ltlsynt | segsyntacdabc | 709 | 50445 | 504463 |
| ltlsynt | segsyntdsabc | 689 | 46082 | 46196 |
| ltlsynt | segsyntlarabc | 703 | 49251 | 49337 |
| sdf | synt | 721 | 47772 | 37822 |
| sdf | synt_p | 666 | 112532 | 61328 |


The quality-based ranking of the participating tools is as follows.
- Strix
- ltlsynt
- sdf
- Otus
| Solver | Configuration | Synthesized controllers | Score |
|---|---|---|---|
| Strix | ltl_synth_acd_bfs | 433 | 785 |
| Strix | ltl_synth_zlk_bfs | 432 | 780 |
| Strix | ltl_synth_zlk_pq | 430 | 772 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 305 | 243 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 298 | 238 |
| ltlsynt | segsyntacdabc | 375 | 500 |
| ltlsynt | segsyntdsabc | 377 | 540 |
| ltlsynt | segsyntlarabc | 371 | 487 |
| sdf | synt | 382 | 447 |
| sdf | synt_p | 354 | 400 |
Benchmarks and Solvers
The set of benchmarks used in 2022 can be fetched from the usual repository. Below, you will find information regarding the participating solvers, their source code, license type, and links to their most recent reports or preferred publications to be cited in case you use the tool in your academic work.
| Solver | StarExec link | Source repo | License | Report to cite |
|---|---|---|---|---|
| ltlsynt | StarExec | GitLab repo | GPL-3.0+ | |
| Acacia bonsai | | GitHub repo | GPL-3.0 | arXiv report |
| Knor | | GitHub repo | GPL-3.0 | arXiv report |
| sdf | | GitHub repo | MIT | arXiv report |
The competition will run on StarExec, as in 2020 and 2021. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in this year’s SYNTCOMP if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by July 1. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until July 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in this year’s SYNTCOMP if it is correctly uploaded (i.e. passes a test run) into StarExec by July 1.
* Communication *
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.
This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. Since we had an increase in participants, this year featured sequential and parallel versions of the realizability and synthesis sub-tracks! Also worth mentioning is the reorganization of our benchmark library.
Winners
Knor won the parity-game synthesis track, solving 276/303 benchmarks. It also won the hard-parity-game realizability track by solving 216/217 benchmarks.
Strix placed first in the quality ranking for parity-game synthesis. In its LTL/TLSF version, Strix won the LTL sequential realizability track by solving 827/924 benchmarks; and placed first in the quality ranking for LTL sequential synthesis.
sdf won the LTL parallel realizability track, solving 730/942 benchmarks, and the LTL parallel synthesis track.
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!
The competition will run on StarExec, as in 2020. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis, and
into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2021 if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by June 1, 2021. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2021 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2021.
* Communication *
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.








