March 2017 update

How and why are questions we ask ourselves in the office every day, and sometimes we find answers that we like.
This post describes some of the general changes in the new structure that we think will be best for our communication solution.

  • The new structure is designed to minimize the number of global servers in the system, and to prevent the possibility of a global server takedown making our system unstable or non-functional.
  • It also prevents server spoofing, in case someone wants to get data from one Seequre instance and tries to spoof another instance with fake credentials.

What we found with our current structure was that the SignalR server could be damaged, which would cause problems in some Seequre instances.


Current Implementation overview

The current implementation has the following structure.
We have one `SignalR Server` that we need to refactor; the other parts of the current structure are fine.


Quick overview:

The MDaemon OAuth and External OAuth servers allow us to authorize users. MDaemon authorization goes through the MDaemon server installed on the machine, using the same credentials. External OAuth allows users of other mail agents to log in to Seequre.

The Seequre Server (SVC Server) is the instance that handles all user requests, so it contains the business logic of the application.


The Database just stores information.

The common server is where a new svc instance can request information about other instances, and where it can publish info about itself to the other Seequre instances' lists. The current realization was quick to implement.


Common Server

This server maintains the list of registered servers and notifies the other servers about all new instances.

This server stores the list of all available servers, so any server can get info about the others from it. It shares the list of existing installations so that each instance has its own list of svc instances (this list always contains the latest info). When we start an svc instance, it communicates with the common server and notifies it of its availability. The common server then notifies all instances in the list about the information it just received.
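
As a rough illustration, the startup handshake could look like the sketch below. This is a minimal TypeScript sketch assuming hypothetical endpoint names and payload shapes (`/api/hosts`, `HostEntry`); the real API may differ.

```typescript
// Minimal sketch of svc-instance startup against the common server.
// Endpoint names and payload shapes are assumptions for illustration.

interface HostEntry {
  id: string;      // GUID or unique char sequence
  hostUrl: string;
}

const COMMON_SERVER = "https://common.example.com"; // hypothetical URL

async function registerWithCommonServer(selfUrl: string): Promise<HostEntry[]> {
  // Announce this instance's availability to the common server.
  await fetch(`${COMMON_SERVER}/api/hosts`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ hostUrl: selfUrl }),
  });

  // The common server notifies the other instances about us; we pull
  // the full, up-to-date host table for our own local list.
  const res = await fetch(`${COMMON_SERVER}/api/hosts`);
  return (await res.json()) as HostEntry[];
}
```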


SignalR

This server is responsible for real-time communication.
Each server will have a self-hosted SignalR server (no need to open any port on the machine; just start the default svc instance).

When users start a video conference, it will include the initiator's server, which will be responsible for the WebRTC communications.

Example flow logic (a client-side signaling sketch follows the list):

  • User 2 gets a sync notification about the new conference.
  • User 2 connects to user 1's server and starts signaling with user 1.
  • During signaling, the server relays offers, answers, and candidates between the two users.
  • User 2 keeps the connection to user 1's server for the whole conference (user 2 closes the connection only when the conference ends).
  • When user 1 reconnects to the conference, signaling still passes via its server, and user 2 stays online.
  • When user 2 reconnects to the conference, signaling passes via user 1's server, so user 2 reconnects to that server.
  • If user 1's server goes down during the conference, the video conference stays available, but the text chat does not.
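
To make the flow concrete, here is a minimal sketch of user 2's side in TypeScript. It uses the modern `@microsoft/signalr` client purely for illustration; the hub path (`/signaling`) and method names (`Signal`, `ReceiveSignal`) are assumptions, not the actual API.

```typescript
import * as signalR from "@microsoft/signalr";

async function joinConference(initiatorServerUrl: string, conferenceId: string): Promise<void> {
  // Connect to the initiator's (user 1's) self-hosted signaling hub and
  // keep the connection open for the whole conference.
  const connection = new signalR.HubConnectionBuilder()
    .withUrl(`${initiatorServerUrl}/signaling`) // hub path is an assumption
    .withAutomaticReconnect()
    .build();

  const peer = new RTCPeerConnection();

  // The hub relays offers, answers, and ICE candidates between the users.
  connection.on("ReceiveSignal", async (message: string) => {
    const msg = JSON.parse(message);
    if (msg.type === "offer") {
      await peer.setRemoteDescription(msg);
      const answer = await peer.createAnswer();
      await peer.setLocalDescription(answer);
      await connection.invoke("Signal", conferenceId, JSON.stringify(answer));
    } else if (msg.candidate) {
      await peer.addIceCandidate(msg);
    }
  });

  // Send our own ICE candidates through the same hub.
  peer.onicecandidate = (e) => {
    if (e.candidate) {
      connection.invoke("Signal", conferenceId, JSON.stringify(e.candidate));
    }
  };

  await connection.start();
}
```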


The SignalR server contains two hubs:

  1. The internal hub handles real-time communication for the current svc instance (events like `post message`, `update profile`, `new group`, etc.). The internal syncer process is also connected to this hub and notifies clients about any real-time changes it receives from other servers (a client sketch follows the list).
  2. The signaling hub handles WebRTC signaling when users start video conferencing. It relays SDP messages from one user to another.
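
A client of the internal hub might look like this minimal sketch; the event names mirror the examples above, while the hub path (`/internal`) and the use of the modern `@microsoft/signalr` TypeScript client are assumptions for illustration.

```typescript
import * as signalR from "@microsoft/signalr";

async function connectInternalHub(serverUrl: string): Promise<signalR.HubConnection> {
  const internal = new signalR.HubConnectionBuilder()
    .withUrl(`${serverUrl}/internal`) // hub path is an assumption
    .build();

  // Real-time events for this svc instance, including changes the
  // syncer has pulled in from other servers.
  internal.on("post message", (msg) => console.log("new message", msg));
  internal.on("update profile", (profile) => console.log("profile changed", profile));
  internal.on("new group", (group) => console.log("group created", group));

  await internal.start();
  return internal;
}
```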


Syncer In/Out Data

Syncers will be connected to each other via real-time connections. When you start an svc instance, it starts a syncer, which creates a connection to each server in the list; when a new server appears, the syncer opens a connection to that server as well.
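
A minimal sketch of that connection handling, reusing the hypothetical `HostEntry` shape from the common-server sketch above (the `/sync` hub path is likewise an assumption):

```typescript
import * as signalR from "@microsoft/signalr";

interface HostEntry { id: string; hostUrl: string; } // as sketched earlier

const connections = new Map<string, signalR.HubConnection>();

// One real-time connection per known server.
async function connectToHost(host: HostEntry): Promise<void> {
  if (connections.has(host.id)) return; // already connected
  const conn = new signalR.HubConnectionBuilder()
    .withUrl(`${host.hostUrl}/sync`) // hub path is an assumption
    .withAutomaticReconnect()
    .build();
  connections.set(host.id, conn);
  await conn.start();
}

// On startup: connect to every server in the list; when the common
// server announces a new instance, connect to it as well.
async function startSyncer(hosts: HostEntry[]): Promise<void> {
  await Promise.all(hosts.map(connectToHost));
}
```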

A user performs an action on his server (posts a message, changes a group title, etc.), and the data is stored in that server's own database, as it is now. Then the local syncer notifies the syncers on the other servers about the new changes. The other syncers handle the changes, store them in their own databases, and notify their clients.

Common view of the real-time actions and sync logic:

User does an action -> server changes its DB -> calls the syncer -> the syncer notifies the other servers -> the other servers receive the new changes and store them in their own DBs -> then they notify their own clients about the updates.
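
On the receiving side, the tail of that chain reduces to a small handler like the sketch below; `SyncChange`, `saveToDatabase`, and `notifyLocalClients` are hypothetical names standing in for the real envelope, persistence, and hub code.

```typescript
// Hypothetical change envelope exchanged between syncers.
interface SyncChange {
  kind: string;     // e.g. "post message", "change group title"
  payload: unknown;
}

// Stand-ins for the real persistence and internal-hub code.
declare function saveToDatabase(change: SyncChange): Promise<void>;
declare function notifyLocalClients(change: SyncChange): Promise<void>;

// Another server's syncer delivered a change: store it in our own DB,
// then push it to our own clients via the internal hub.
async function onRemoteChange(change: SyncChange): Promise<void> {
  await saveToDatabase(change);
  await notifyLocalClients(change);
}
```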

In this case, when some server is down, the syncer will try to connect, the connection will fail, and the syncer will wait and retry. When the downed server starts again, it will open connections to all existing server instances.
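
That retry behaviour could be as simple as the loop below, building on the `connectToHost` sketch above; the five-second interval is an arbitrary choice.

```typescript
// Keep trying a downed server until it comes back.
async function connectWithRetry(host: HostEntry, delayMs = 5000): Promise<void> {
  for (;;) {
    try {
      await connectToHost(host); // from the syncer sketch above
      return;
    } catch {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```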


Changes on the client side.

Small fixes for migrating to the new real-time hub.


Security changes

We need to prevent the case where someone tries to hack our svc instances and obtain the security data stored in the database.

For this case, we'll use encrypted-hash authorization between hosts.

When a server starts, it generates a public key and sends it to the common server. The common server shares the public key with the other Seequre instances. When some instance requests data, it sends the token it received for the requested server (built from that server's public key).

When the requested server gets the request and validates the key, it sends a ping request to the host that made the request. If the ping request succeeds, it sends the data to that host.

  1. Host 1 generates a pair of private and public keys (sketched in code after this list).
  2. It posts its license credentials and the generated public key to the common server (via HTTPS).
  3. The common server notifies the other hosts about the new host in the network.
  4. All hosts request updates to the host table from the common server.
  5. The common server responds with a table in the form:
    • ID – (GUID or unique char sequence)
    • Host URL
    • Host Token Key = the ID of the requesting server, encrypted using the public key of the host.
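
A minimal sketch of steps 1–2 and of how the common server could build the Host Token Key, using Node's `crypto` module in TypeScript; the endpoint name and payload shape are assumptions.

```typescript
import { generateKeyPairSync, publicEncrypt } from "node:crypto";

// Step 1: the new host generates its key pair. The private key stays
// on this host; it is used during negotiation (see below).
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
  publicKeyEncoding: { type: "spki", format: "pem" },
  privateKeyEncoding: { type: "pkcs8", format: "pem" },
});

// Step 2: post license credentials and the public key over HTTPS.
// The endpoint is hypothetical.
async function registerHost(license: string): Promise<void> {
  await fetch("https://common.example.com/api/hosts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ license, publicKey }),
  });
}

// Step 5: for each table row, the common server encrypts the
// requesting server's ID with that row's host public key.
function buildHostTokenKey(requesterId: string, hostPublicKeyPem: string): string {
  return publicEncrypt(hostPublicKeyPem, Buffer.from(requesterId)).toString("base64");
}
```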


During inter-host connection negotiation, e.g. Host A connects to Host B:

  1. Host A sends host B a request with:
    • the ID of host A
    • the Host Token Key for host B, taken from host A's host table (this token is encrypted with host B's key)
  2. Host B decrypts the Host Token Key using its private key.
  3. Host B compares the decrypted value with the supplied ID.

If the values match, the opposite connection should be established in the same way (a validation sketch follows). After it's completed, host A knows that its initial negotiation request really went to host B.
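
Steps 2–3 on host B's side reduce to a few lines with Node's `crypto` module; this sketch assumes the Host Token Key is transported as base64.

```typescript
import { privateDecrypt } from "node:crypto";

// Host B checks that the Host Token Key really encodes the supplied
// host ID under host B's own key pair.
function validateHostToken(
  tokenKeyBase64: string,  // Host Token Key sent by host A
  suppliedId: string,      // ID of host A from the same request
  myPrivateKeyPem: string, // host B's private key
): boolean {
  const decrypted = privateDecrypt(
    myPrivateKeyPem,
    Buffer.from(tokenKeyBase64, "base64"),
  ).toString();
  return decrypted === suppliedId;
}
```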

All communication between instances goes via HTTPS: https://www.websequencediagrams.com/#open=219324
