Jitsi Synology




You should be able to follow the instructions here to configure and run Jitsi with docker-compose from the Synology CLI, just like on a regular Linux server. Just make sure the ports defined in the `.env` file are unused, to avoid conflicts with Synology's own ports. After some googling I found Jitsi, and in a few minutes I was able to set up a test environment on my computer using Docker. I then wanted to implement this on my Synology NAS (DS918) using Docker containers. All explanations below come from the Jitsi GitHub page, with a little customization to fit my needs.
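As a minimal sketch of those steps, assuming the upstream docker-jitsi-meet repository and its stock env.example (8080/8443 are just examples of host ports that are typically free on a NAS; run this with bash):

```shell
# Fetch the official Docker setup and prepare its configuration.
git clone https://github.com/jitsi/docker-jitsi-meet
cd docker-jitsi-meet
cp env.example .env
./gen-passwords.sh    # generate strong internal service passwords

# Pick host ports that do not clash with DSM (which already uses 80/443/5000/5001).
sed -i -e 's/^HTTP_PORT=.*/HTTP_PORT=8080/' \
       -e 's/^HTTPS_PORT=.*/HTTPS_PORT=8443/' .env

# Create the configuration directories and start the stack.
mkdir -p ~/.jitsi-meet-cfg/{web,transcripts,prosody,jicofo,jvb}
docker-compose up -d
```

Depending on how Docker is set up on the NAS, the last command may need `sudo`.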


Mar 20, 2019: using a Let's Encrypt certificate on the Synology (not in Jitsi), adding a specific subdomain meet.mydomain using local Jitsi ports 8080 and 8443, and setting reverse proxy rules on the Syno redirecting 80 to 8080 and 443 to 8443 for the specified domain (meet.mydomain). This makes it possible to keep other services on 443 using other subdomains.

Update (April 2020): Since the first publication of this article, the “Jitsi configuration” section has been updated to reflect changes upstream. A “More about STUN servers” section has been added as well.


Videoconferencing with the official meet.jit.si instance has always been a pleasure, but it seemed a good idea to research how to install a Jitsi instance locally, that could be used by customers, by members of the local Linux Users Group (COAGUL), or by anyone else.

This instance is available at jitsi.debamax.com and that service should be considered as a beta: it’s just been installed, and it’s still running the stock configuration. Feel free to tell us what works for you and what doesn’t!

Networking vs. virtualization host

One host was already set up as a virtualization environment, featuring libvirt, managing LXC containers and QEMU/KVM virtual machines. In this article, we focus on IPv4 networking. Basically, the TCP/80 and TCP/443 ports are exposed on the public IP, and NAT’d to one particular container, which acts as a reverse proxy. The running Apache server defines as many VirtualHosts as there are services, and acts as a reverse proxy for the appropriate LXC container or QEMU/KVM virtual machine.

Schematically, here’s what happens:

  • Both TCP/80 and TCP/443 ports on the public IP are NAT’d to the same ports on the reverse proxy’s IP address, on the default libvirt bridge.
  • Depending on the server name in the HTTP request, the apache2 service running in that container proxies requests to the appropriate service in another container or virtual machine: for example a Jenkins service, or the Jitsi service discussed here.
  • As a general rule, the apache2 configuration only returns an HTTP redirection pointing at the same service over HTTPS, except for the .well-known/acme-challenge path, to make the certificate dance for Let’s Encrypt possible.
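A sketch of what such a plain-HTTP VirtualHost could look like (the server name is taken from this article; the challenge directory is a placeholder):

```apache
<VirtualHost *:80>
    ServerName jitsi.debamax.com
    # Let the Let's Encrypt HTTP-01 challenge through…
    Alias /.well-known/acme-challenge/ /var/www/acme/.well-known/acme-challenge/
    # …and redirect everything else to HTTPS.
    RedirectMatch permanent ^/(?!\.well-known/acme-challenge/)(.*) https://jitsi.debamax.com/$1
</VirtualHost>
```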

What does that mean for the Jitsi installation? Well, Jitsi expects those ports to be available:

  • TCP/443
  • TCP/4443
  • UDP/10000

For this specific host, TCP/4443 and UDP/10000 were available, and have been NAT’d as well to the Jitsi virtual machine directly. Given the existing services, the same couldn’t be done for the TCP/443 port, which explains the need for the following section.

The redirections set up on the TCP/80 port were already mentioned in the previous section, so let’s concentrate on the TCP/443 port part.

The ProxyPass and ProxyPassReverse directives act on /, meaning every path will be proxied to the Jitsi virtual machine. If one wasn’t using VirtualHost directives to distinguish between services, one could be dedicating some specific paths (“subdirectories”) to Jitsi, and proxying only those to the Jitsi instance. But let’s concentrate on the simpler “the whole VirtualHost is proxied” case.

The first SSLProxyEngine on directive is needed for apache2 to be happy with proxying requests to a server using HTTPS, instead of plain HTTP.

All other SSLProxy* directives aren’t too nice as they disable all checks! Why do that, then? The answer is that Jitsi’s default installation sets up an NGINX server with HTTP-to-HTTPS redirections, and it seemed easier to directly forward requests to the HTTPS port, disabling all checks since that NGINX server was installed with a self-signed certificate. One could instead deploy a suitable certificate there and re-enable the checks, rather than using this “StackOverflow-style heavy hammer” (some directives might not even be needed).
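Putting the directives discussed above together, the HTTPS VirtualHost might look roughly like this (certificate paths and the JITSI-VM-IP backend address are placeholders, and as noted, some of the SSLProxy* lines may be redundant):

```apache
<VirtualHost *:443>
    ServerName jitsi.debamax.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/jitsi.debamax.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/jitsi.debamax.com/privkey.pem

    # Proxy everything to the Jitsi VM over HTTPS…
    SSLProxyEngine on
    # …and disable certificate checks, since the backend NGINX
    # uses a self-signed certificate (the "heavy hammer"):
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off

    ProxyPass        / https://JITSI-VM-IP/
    ProxyPassReverse / https://JITSI-VM-IP/
</VirtualHost>
```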

Jitsi configuration

Jitsi itself was installed on a QEMU/KVM virtual machine, running a basic Debian 10 (buster) system, initially provisioned with 2 CPUs, 4 GB RAM, and a 3 GB virtual disk. Its local IP address is what was configured as the target of the ProxyPass* directives in the previous section.

The installation was done using the quick-install.md documentation, entering jitsi.debamax.com as the FQDN, and opting for a self-signed certificate (leaving the reverse proxy in charge of the Let’s Encrypt certificate dance, like it is for all VirtualHosts).

Update (April 2020): Since late March 2020, upstream switched from videobridge to videobridge2. Another important change is that the jitsi-meet-turnserver package is pulled in through jitsi-meet’s Recommends, as can be seen in the package’s APT metadata.
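One hypothetical way to inspect that relationship, on a machine with the Jitsi APT repository configured:

```shell
# Show the dependency fields of the jitsi-meet package;
# jitsi-meet-turnserver should appear among the Recommends.
apt-cache show jitsi-meet | grep -E '^(Depends|Recommends):'
```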

TURN servers make it possible for clients to exchange streams in a peer-to-peer fashion when there are only two of them, by finding a way to traverse NATs. In the setup documented here, the easiest option is not to install the jitsi-meet-turnserver package (as documented recently in quick-install.md).

Now, a very important point needs to be addressed (no pun intended), which isn’t so much related to the fact one is running behind a reverse proxy, but related to the fact TCP/4443 and UDP/10000 ports are NAT’d: the videobridge component needs to know about that, and needs to know about the public IP and the local IP. In this context, the local IP is the Jitsi virtual machine’s local IP, where the NAT for TCP/4443 and UDP/10000 points to, and it is not the reverse proxy’s local IP. That’s why those lines have to be added to the /etc/jitsi/videobridge/sip-communicator.properties configuration file:
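A sketch of those lines, with RFC 5737 placeholder addresses standing in for the real ones (the local address is the Jitsi VM’s, the public address is the NAT’d public IP):

```properties
# /etc/jitsi/videobridge/sip-communicator.properties
# Tell ice4j about the NAT: the Jitsi VM's local address, and the
# public IP that TCP/4443 and UDP/10000 are NAT'd from.
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=192.0.2.10
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.1
```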

[ Hint: Beware, there’s another sip-communicator.properties configuration file, for the jicofo component! ]

Additionally, a default setting needs to be commented out (in the same file), because the TURN server isn’t installed:
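That default likely corresponds to the STUN mapping harvester entry pointing at the (absent) local TURN server; a sketch, where the exact property value should be checked against the installed file:

```properties
# Default STUN mapping harvester, commented out since no TURN/STUN
# server is installed locally (the value shown is a placeholder):
#org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=stun.example.com:443
```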

Remember to restart the service:
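With the packages from late March 2020 onwards, that means:

```shell
sudo systemctl restart jitsi-videobridge2
```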

Update (April 2020): Until late March 2020, this systemd server unit used to be called jitsi-videobridge instead.

More about STUN servers

A privacy-conscious user was kind enough to inform a number of Jitsi instance administrators (including us) that the default Jitsi configuration uses Google’s STUN servers. This was fixed through a recent pull request: config: use Jitsi's STUN servers by default, instead of Google's.

Without waiting for a new upstream release, administrators can tweak their local configuration (in /etc/jitsi/meet/F.Q.D.N-config.js). This can be checked client-side by running tcpdump and verifying that packets are seen when a 2-participant conversation is set up.
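The tweak presumably boils down to overriding the stunServers entry in that file; a sketch, where the exact STUN host should be taken from the pull request mentioned above:

```js
// /etc/jitsi/meet/F.Q.D.N-config.js (excerpt)
// Jitsi's own STUN server instead of Google's (the host shown is an
// assumption, double-check against the upstream change):
stunServers: [
    { urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' }
],
```

Client-side, running something like `sudo tcpdump -n host meet-jit-si-turnrelay.jitsi.net` during a 2-participant conversation should then show packets towards the configured server.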

For completeness: Jitsi’s own infrastructure relies on Amazon Web Services at the moment.

Annex: host networking configuration

The relevant iptables rules on the host are the following (leaving aside the usual MASQUERADING which is required when using NAT):
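A sketch of such rules, with placeholder addresses (203.0.113.1 for the public IP, 192.168.122.2 for the reverse proxy, and 192.168.122.10 for the Jitsi VM, both on the default libvirt bridge subnet):

```shell
# Web traffic goes to the reverse proxy…
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 80    -j DNAT --to-destination 192.168.122.2
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 443   -j DNAT --to-destination 192.168.122.2
# …while the Jitsi media ports go to the Jitsi VM directly.
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 4443  -j DNAT --to-destination 192.168.122.10
iptables -t nat -A PREROUTING -d 203.0.113.1 -p udp --dport 10000 -j DNAT --to-destination 192.168.122.10
```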


Published: Wed, 18 Mar 2020 10:15:00 +0100
Last modified: Thu, 02 Apr 2020 03:30:00 +0200


Jitsi-Meet for Synology beta
Publisher: TosoBoso
Version: 1.2
Firmware: 6.2-0 and onward
Last update: 18/04/2021
Size: 634.88 Kb
Architecture: all
Required packages: Docker, Perl
JitsiMeet: more secure, flexible, and free video conferencing based on WebRTC, deployed on Synology via Docker. It extends the official docker-jitsi-meet packages with a GUI for maintaining settings and local authentication, without the need for an SSH command line.
2021-04-18 v.1.2 [Beta: Lower privilege mode (root->docker-jitsi) for scripts, UI: DSM7 ready]
2021-04-17 v.1.1 [Avoid loading unstable releases, suppress empty SENTRY variable warnings]
2021-04-04 v.1.0 [Replace latest by real release with option to downgrade, Websockets RProxy warning]
2021-03-20 v.1.0 [Version instead of latest & option via defresh for previous release]
2021-01-19 v.0.9 [Adjust url download check logic OK & HTTP 200i for env, yml files]
2020-09-12 v.0.8 [Health check cmd-line and UI, added Etherpad, Jibri, Jigasi orchestration]
2020-08-22 v.0.7 [Yml files from repo, imprint in jitsi-web, modding-script after refresh]
2020-06-12 v.0.6 [Minor fixes, option to disable welcome page, refresh as ui bg job]
2020-05-23 v.0.5 [Modify nginx to show real-ip from reverse proxy instead of docker ips]
2020-05-21 v.0.4 [Sync public IP, sync Synology Let's Encrypt certificate to Jitsi]
2020-05-16 v.0.3 [Initial stable build with UI to run cfg and jitsi-cli via browser]
2020-05-09 v.0.2 [Moderator user at install and via jitsi-cli added each container refresh]
2020-05-01 v.0.1 [initial build using jitsi-docker-meet repository extending cmds by jitsi-cli]
Roadmap: warning on used ports before container start, room usage logs, Jibri audit dependency
Visit the beta feedback and report web page
Visit the Help & How-To web page