
Vidispine Server Agent [VC 21.3 GEN]

The Vidispine Server Agent, VSA, is a daemon process running on servers connecting to a Vidispine Server, VS. VSA is composed of a Vidispine Transcoder and the VSA supervisor.

How to install VSA

Prerequisites:


  • A running VS instance, version 4.4 or newer

  • A server running Ubuntu 14.04 or higher, 64-bit, or CentOS 6.5 or higher, 64-bit


Add the Vidispine repository according to the repository documentation. Then you can install and start VSA. With Ubuntu/Debian:

$ sudo apt-get install vidispine-agent vidispine-agent-tools

With CentOS/RedHat:

$ sudo yum install vidispine-agent vidispine-agent-tools

After that, the agent can be connected to Vidispine server.

Connecting to Vidispine

The agent can then be connected either with or without establishing an SSH tunnel to Vidispine server. The latter should be used if an encrypted network connection has already been established to Vidispine server, or if the server and the agent run within the same network.

Connecting with SSH tunnel

The configuration files are located in /etc/vidispine/. Configuration can be stored either in the file agent.conf in this directory, or in files in the subdirectory agent.conf.d. It is recommended that a file is created in the agent.conf.d directory. Specifically, there are two settings that have to be set: the connection to VS, and the unique name of the VSA server. The first one you will get from the Vidispine instance.

  1. Enable the Vidispine VSA port by adding this to the server.yaml file (change the port number as necessary). The server will need to be restarted for the change to take effect.

       bindPort: 8183

    This step is new in Vidispine 4.6.

  2. On the Vidispine instance, install the vidispine-tools package and run

    $ sudo vidispine-admin vsa-add-node

    The vsa-add-node command is new in Vidispine 4.6. With it, one VSA can connect to multiple Vidispine servers.

  3. Fill in the user name, password and IP address. Enter the unique name; the UUID can be left empty.

  4. Now, on the VSA server, add this information to /etc/vidispine/agent.conf.d/connection.

  5. Start VSA:

    $ sudo service vidispine-agent start
    $ sudo service transcoder start
  6. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status

    Agent, transcoder and Vidispine should all be ONLINE.

Connecting without SSH tunnel

  1. Create a file /etc/vidispine/agent.conf.d/custom.conf with the following settings:

    • userName: Vidispine user name.

    • password: Base64 encoded value of a *** prefixed password. For example, the value should be the result of echo -n ***admin | base64, if the password is admin.

    • directVSURI: the address VSA uses to connect to Vidispine server.

    • vsaURI: the address that can be used by Vidispine server to connect to VSA
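
Based on the keys above, a minimal sketch of such a file might look like this (all values are placeholders, not defaults; the password shown is the Base64 encoding of ***admin):

```
userName=admin
password=KioqYWRtaW4=
directVSURI=http://vidispine.example.com:8080/
vsaURI=http://vsa1.example.com:8090/
```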

  2. Restart VSA:

    $ sudo service vidispine-agent restart
  3. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status

    Agent, transcoder and Vidispine should all be ONLINE.

  4. The VSA should also be listed under the server:

    $ curl -X GET -uadmin:admin http://localhost:8080/API/vxa

Adding a share

On the VSA, run the following command:

$ sudo vidispine-agent-admin add-local-share

This will add a share in VSA, and create a storage in VS. You can verify this by listing the storages (List all storages). The storage is listed with a method that has a vxa: URI scheme. The UUID (server part) of the URI matches the UUID from vidispine-agent-admin status.

If the share is removed from the VSA, the storage will be automatically deleted from VS, including all file information (but not the files themselves). In order to keep the storage, e.g., if the storage is moved from one VSA to another, remove the vxaId metadata field from the storage.
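
As a sketch, assuming the storage metadata endpoint of the VidiCore REST API (verify the exact path against your API reference; VX-1 is a placeholder storage id), the field could be removed with:

```
$ curl -X DELETE -uadmin:admin http://localhost:8080/API/storage/VX-1/metadata/vxaId
```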

Enable write access

When a new share is added, the storage method is marked as read-only. To enable writing to the share:

  • set the write field of the method to true, and

  • change the storage type to LOCAL (meaning it can be a target for all file operations)
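
As a sketch only (the endpoint paths and parameters below are assumptions to be checked against the VidiCore API reference; VX-1 and VX-2 are placeholder ids), the two changes might look like:

```
$ # Mark the storage method as writable
$ curl -X PUT -uadmin:admin "http://localhost:8080/API/storage/VX-1/method/VX-2?write=true"

$ # Change the storage type to LOCAL
$ curl -X PUT -uadmin:admin -H "Content-Type: application/xml" \
    -d '<StorageDocument xmlns="http://xml.vidispine.com/schema/vidispine"><type>LOCAL</type></StorageDocument>' \
    http://localhost:8080/API/storage/VX-1
```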

Associate many VSAs to one storage

It is possible to have several VSA nodes serving one shared file system. This can be used to increase transcoding capacity or to provide redundancy.

  1. Add the share individually on all VSAs (see above). This will generate as many storages as there are VSAs.

  2. Now copy the storage methods from all but the first storage to the first storage.

  3. On the first storage, remove the vxaId storage metadata (see above).

  4. Remove all but the first storage.

VSA and S3 credentials

A VSA transcoder can be given direct access to S3 storages, meaning the agent will access the files directly without them being proxied by the main server. If the configuration property useS3Proxy is set to true, pre-signed URLs will be used for agents to read S3 objects. If it is set to false, or if it is a WRITE operation, AWS credentials will be sent to agents.

The type of AWS credentials being sent to the agents can be controlled by the configuration property s3CredentialType:

  • secretkey: The access key and the secret access key configured in the S3 storage URI will be sent to the agent.

  • temporary: The AWS Security Token Service (STS) will be used to generate temporary credentials to send to the agents. The duration of the credentials is controlled by stsCredentialDuration. You can set stsRegion to control in which region Vidispine server will call the AWS Security Token Service (STS) API.

  • none: No credentials will be sent to the agent. The agent then needs to rely on a local file, or an IAM role on the instance to access S3 objects.

There is also a configuration entry called s3CredentialType available in the agent.conf, that can be used to configure this behavior on a per-agent basis.

The final effective credential type will be the minimum of the server s3CredentialType and the agent s3CredentialType, where the values are ordered secretkey > temporary > none.
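
That rule can be sketched in Python as follows (illustrative only; the function name is made up):

```python
# Credential types ordered from most privileged to least.
ORDER = ["secretkey", "temporary", "none"]

def effective_credential_type(server_type: str, agent_type: str) -> str:
    """Return the less privileged ("min") of the server and agent settings."""
    return max(server_type, agent_type, key=ORDER.index)

print(effective_credential_type("secretkey", "temporary"))  # temporary
print(effective_credential_type("temporary", "none"))       # none
```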

For example, no credentials will be sent to the agent if the agent has the following configuration:

    s3CredentialType=none

and the server has:

    <property lastChange="2014-07-14T14:55:15.432+02:00">
      <key>s3CredentialType</key>
      <value>temporary</value>
    </property>

For an older agent to work with 4.14 server, the credential type on the server side has to be set to either secretkey or none.

Agent properties

Configuration properties that can be used in the agent configuration file. Upon start, configuration is read from /etc/vidispine/agent.conf and any files in the directory /etc/vidispine/agent.conf.d.



The name the VSA is using to register itself. Optional but recommended. With a name set, the name can be used instead of UUID in vxa:// URIs.


Should always be VSA-VS.


The UUID of the VSA. Must be unique and follow the UUID syntax.



String that is used to signal to Vidispine server that all agents in the same group can reach each other.


The network address that the agent should accept connections on. If not set, a default address is used.


The network port that the agent should listen on. Default 8090.


URI that the agent can be reached at.

connectionString, connectionString1, connectionString2

How the VSA should connect to VidiCore. Generated by vidispine-agent-admin.

directVSURI, directVSURI1, directVSURI2

If VSA can connect directly to VidiCore (without secure tunnel), this is the URI to VidiCore (from VSA).


If VidiCore can connect directly to VSA (without secure tunnel), this is the URI to VSA (from VidiCore).


User name used to connect to VidiCore. Not recommended. Use vidispine-agent-admin to create a secure connection instead.


Password used to connect to VidiCore. Not recommended. Use vidispine-agent-admin to create a secure connection instead.


Proxy (http, socks4, socks5) used for SSH connection. Not required for new connections created by vidispine-agent-admin.


SSH fingerprint of the SSH server on the VidiCore side. The connection will fail if fingerPrint is set and the server's fingerprint does not match. By default the connection is allowed, but a warning is emitted in the log file.


How often the VSA should contact VidiCore. Default is 4 seconds, but can be increased to lower traffic. Recommended: 60.


The number of threads that will be available to serve incoming requests. The selector runner will delegate the actual work that should be done to a worker thread.

New in version 5.3.


The number of worker threads that are available. These threads carry out the actual work in the VSA. For example they handle transfer jobs performed by the VSA. They typically also deliver results of requests sent to the VSA. However, see also transfer section below.

New in version 5.3.



Overall log level. Accepted values are ALL, TRACE, DEBUG, INFO (default), WARN, ERROR, FATAL, OFF.

logLevel.(class or package)

Class or package-specific logging.
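
For example, the overall level and a more verbose level for a single package could be set as follows (the package name below is only an illustration):

```
logLevel=INFO
logLevel.com.vidispine.agent=DEBUG
```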

Transfer jobs


Use multiple threads for a single transfer. Can speed up S3 transfers significantly. Default is 1 (single thread).

New in version 5.4.


Size of transfer chunk used in transfer jobs. Default is 10000000 (10 MB).

New in version 5.4.
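
As an illustration of what the default chunk size means in practice, the number of chunks needed for a transfer can be computed as:

```python
import math

DEFAULT_CHUNK_SIZE = 10_000_000  # 10 MB, the documented default

def chunk_count(file_size: int, chunk_size: int = DEFAULT_CHUNK_SIZE) -> int:
    """Number of transfer chunks needed for a file of the given size."""
    return math.ceil(file_size / chunk_size)

print(chunk_count(1_000_000_000))  # a 1 GB file is transferred as 100 chunks
```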


If set, VSA will wait up to the given number of seconds for the file to appear in file listings before reporting the transfer as complete.


If set to true (which is the default), VSA verifies a transfer by reading the first byte of the destination before reporting the transfer as complete.

Hash compute jobs


Use multiple threads for reading a file during hash computation. The actual computation is still done in one thread. Default is 1 (single thread).

New in version 5.4.

Transcoder jobs


Sets the maximum number of transcoder jobs the VSA will process. This is done by setting the maxJob element of the transcoder resource in VidiCore.


Controls whether the VSA can access the input files directly. Note that there are two levels of media access proxying for transcode jobs. If directAccess is set, VidiCore will proxy all access for the VSA that does not match the directAccess filter. The VSA will proxy media access for the VidiCoder for URIs that are not http or file.


How the VSA reaches the transcoder. Should be 8888 unless the transcoder listens to another port.

Storage access


All S3 configurations listed in Storage and file are available as VSA configuration.


Maximum number of entries in FTP connection pool. Default is -1 (unlimited).


Maximum number of entries in FTP connection pool per key (scheme/host/port). Default is -1 (unlimited).


Keep at least this number of connections idle. Default is 0.


Time between when idle connections are checked for closing, in milliseconds. Default is 30000 (30 seconds).


The minimum time a connection is idle before it can be closed, in milliseconds. Default is 60000 (60 seconds).

Direct transfers between VSAs

New in version 5.0.

When Vidispine server copies or moves a file between two agent storages, the default is for Vidispine server to read the file from one agent and then write it to the other agent. In the case where the agents actually are able to reach each other, this is obviously quite inefficient, since the data is streamed through Vidispine server.

To avoid this, Vidispine server can instead send a transfer job to the agent that hosts the source file, which then sends the file directly to the receiving agent. To enable this, configure both agents with the same value for the agent property agentGroup.

The destination URI, where the agent will try to send its file, will as default be the uri of the receiving agent, as seen at GET /vxa/(uuid). For example:

<VXADocument xmlns="http://xml.vidispine.com/schema/vidispine">
    <name>Test agent</name>
    <uri>http://localhost:5678/</uri>
</VXADocument>

However, in many cases that URI might not be one that the first agent can reach, for example if the agent is connected through SSH (in which case the URI typically is something like http://localhost:5678/). To overcome this, the agent can set the agent property externalUri to a URI that it can be reached at. This may be used in conjunction with the properties bindAddressV4 and/or bindAddressV6.


Two agents are on the same network and connect directly to Vidispine server; we then only need to set agentGroup in each agent's configuration file to the same value, for example (the group name is arbitrary):

    agentGroup=group1

One agent is connecting to Vidispine server using SSH; we then need to set the externalUri property for that agent, for example (the host name is a placeholder, 8090 is the default agent port):

    externalUri=http://agent1.example.com:8090

Port forwarding service

New in version 5.1.

It is possible to set up a port forwarding service for the VSA, using the already existing connection to Vidispine. This will create a secure channel using remote forwarding. This is done by specifying an ID for the service and the URL and port that this service will try to reach. The agent is configured as follows: port.forward.<id>=<scheme>://<host>:<port>, where <id> needs to be an integer. It is possible for a single VSA to have multiple port forwarding services enabled.

For example (the host name is a placeholder):

    port.forward.1=ldap://ldap.example.com:389

After the VSA has connected to Vidispine, the vxa resource will report:

GET /vxa HTTP/1.1
<VXAListDocument xmlns="http://xml.vidispine.com/schema/vidispine">
    ...
</VXAListDocument>

The example above would be port forwarding for LDAP authentication.

New in version 21.3.

For an HTTP connection via a VSA, it is recommended to use the VSA as an HTTP proxy instead of forwarding individual ports. For more information, see Proxying HTTP connection via a VSA.
