diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/00_index/index.html b/00_index/index.html new file mode 100644 index 0000000..fa76213 --- /dev/null +++ b/00_index/index.html @@ -0,0 +1,188 @@ + + + + + + + + HiveMind Community Documentation - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

HiveMind Community Documentation

+

Welcome to the HiveMind Community Docs!

+

+

HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system.

+

With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS!

+

HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. OpenVoiceOS itself usually runs on our desktop computers or home servers, but you can use any Mycroft-branded device, or OpenVoiceOS, as your central unit.

+

Join the HiveMind Matrix chat for general news, support, and chit-chat

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/01_quickstart/index.html b/01_quickstart/index.html new file mode 100644 index 0000000..cb7cc03 --- /dev/null +++ b/01_quickstart/index.html @@ -0,0 +1,268 @@ + + + + + + + + Quick start - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Quick Start Guide

+

This guide will help you get started quickly with the HiveMind platform, allowing you to extend your OpenVoiceOS (OVOS) ecosystem across multiple devices, even with low-resource hardware. HiveMind lets you connect lightweight devices as satellites to a central OVOS hub, offering centralized control and fine-grained permissions.

+

imagem

+

🚀 Installation

+

To begin using HiveMind Core, you need to install the hivemind-core package on your OVOS device. This can be done via pip:

+
pip install hivemind-core
+
+

🛰️ Adding a Satellite Device

+

Once the server is running, you'll need to add client credentials for each satellite device you want to connect.

+

Run the following command to add a satellite device:

+
hivemind-core add-client
+
+

The output will show you important details like:

+
    +
  • Node ID
  • +
  • Friendly Name
  • +
  • Access Key
  • +
  • Password
  • +
  • Encryption Key (deprecated, only used for legacy clients)
  • +
+

Provide these credentials on the client devices to enable the connection.

+
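Conceptually, these credentials form a small per-client record. A minimal sketch of such a record, using the field names printed by `hivemind-core add-client` (the dataclass itself is illustrative, not hivemind-core's actual storage schema):

```python
# Illustrative sketch of a client credentials record, mirroring the
# fields printed by `hivemind-core add-client`. This is NOT the actual
# hivemind-core storage schema.
from dataclasses import dataclass, asdict


@dataclass
class ClientCredentials:
    node_id: int
    friendly_name: str
    access_key: str
    password: str
    encryption_key: str = ""  # deprecated, only used by legacy clients


creds = ClientCredentials(
    node_id=2,
    friendly_name="HiveMind-Node-2",
    access_key="5a9e580a2773a262cbb23fe9759881ff",
    password="9b247ca66c7cd2b6388ad49ca504279d",
)
print(asdict(creds)["friendly_name"])  # → HiveMind-Node-2
```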

🖥️ Running the HiveMind Server

+

Start the HiveMind server to accept client connections on a specified port:

+
hivemind-core listen --port 5678
+
+

The server will now listen for incoming satellite connections.

+
+

💡 hivemind-core needs to be running on the same device as OVOS

+
+

🔑 Permissions

+

HiveMind Core uses a flexible permissions system, where each client's permissions are customizable. By default:

+
    +
  • +

    Only essential bus messages are allowed.

    +
  • +
  • +

    Skills and intents are accessible but can be blacklisted or restricted.

    +
  • +
+

You can manage permissions for clients by using commands like allow-msg, blacklist-msg, allow-skill, and blacklist-skill.

+

Example Use Cases:

+
    +
  • Basic AI Integration: Enable a simple client to send natural language instructions.
  • +
  • Custom Permissions: Restrict an IoT device to only communicate with specific message types, such as temperature.set.
  • +
+
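Conceptually, the permission check combines a message-type whitelist with skill/intent blacklists. A minimal sketch of that logic (the function and its names are illustrative; the real checks live inside hivemind-core):

```python
# Illustrative permission check, NOT the actual hivemind-core code:
# a message passes only if its type is whitelisted and its originating
# skill is not blacklisted.
from typing import Optional


def is_authorized(msg_type: str, skill_id: Optional[str],
                  allowed_types: set, blacklisted_skills: set) -> bool:
    if msg_type not in allowed_types:
        return False
    if skill_id is not None and skill_id in blacklisted_skills:
        return False
    return True


allowed = {"recognizer_loop:utterance", "speak", "temperature.set"}
blocked_skills = {"ovos-skill-camera"}

print(is_authorized("temperature.set", None, allowed, blocked_skills))   # True
print(is_authorized("enclosure.reboot", None, allowed, blocked_skills))  # False
```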

HiveMind Core Commands Overview

+

Here are the basic commands for managing clients and their permissions:

+
    +
  • Add a new client:
  • +
+
hivemind-core add-client --name "satellite_1" --access-key "mykey123" --password "mypass"
+
+
    +
  • List all registered clients:
  • +
+
hivemind-core list-clients
+
+
    +
  • Start listening for client connections:
  • +
+
hivemind-core listen --port 5678
+
+

For detailed help on each command, use --help (e.g., hivemind-core add-client --help).

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/02_terminology/index.html b/02_terminology/index.html new file mode 100644 index 0000000..6f87a90 --- /dev/null +++ b/02_terminology/index.html @@ -0,0 +1,231 @@ + + + + + + + + Terminology - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Terminology

+

Before we delve into the depths of the Hivemind Protocol, let's familiarize ourselves with some key terms used within the ecosystem:

+
    +
  • Node: A device or software client that is part of the HiveMind network.
  • +
+

img.png

+
    +
  • Mind: A node that actively listens for connections and understands natural language commands. Minds communicate via BUS messages, authenticate other nodes, isolate connections, and authorize individual messages.
  • +
+

img_1.png

+
    +
  • +

    Fakecroft: A mind that imitates ovos-core without actually running it. It often only handles a subset of BUS messages, usually just "speak" and "recognizer_loop:utterance".

    +
  • +
  • +

    Terminal: A user-facing node that connects to a mind but doesn't accept connections itself.

    +
  • +
+

img_3.png

+
    +
  • Bridge: A node that links an external service to a mind.
  • +
+

img_4.png

+
    +
  • Hive: A collection of interconnected nodes forming a collaborative network.
  • +
+

img_5.png

+
    +
  • Slave: A mind that connects to another mind and always accepts BUS messages from it.
  • +
+

img_2.png
NOTE: A Terminal is like a Slave, but it is NOT a Mind

+
    +
  • Master Mind: The highest-level node in a hive that is not connected to any other nodes but receives connections from other nodes.
  • +
+

img_6.png

+
    +
  • The Collective: The collection of all Master Minds in the world.
  • +
+
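The role taxonomy above can be pictured as a tree: a Master Mind at the root, slave Minds as inner nodes, and terminals or bridges as leaves. A toy sketch of that topology (purely illustrative, not a real HiveMind API):

```python
# Toy model of a hive topology; purely illustrative.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    role: str  # "master-mind", "slave", "terminal", or "bridge"
    children: list = field(default_factory=list)

    def accepts_connections(self) -> bool:
        # Only Minds (master or slave) listen for connections;
        # terminals and bridges connect upward only.
        return self.role in ("master-mind", "slave")


hive = Node("home-server", "master-mind", [
    Node("kitchen-pi", "slave", [Node("wall-terminal", "terminal")]),
    Node("matrix-bridge", "bridge"),
])
print(hive.accepts_connections())                          # True
print(hive.children[0].children[0].accepts_connections())  # False
```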

img_7.png

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/03_pairing/index.html b/03_pairing/index.html new file mode 100644 index 0000000..afa2bc1 --- /dev/null +++ b/03_pairing/index.html @@ -0,0 +1,360 @@ + + + + + + + + Pairing - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Pairing devices

+

You can register clients in a Mind via the command line or via audio

+

Command Line Pairing

+

First, you need to register the satellite device in the HiveMind server:

+
$ hivemind-core add-client
+Credentials added to database!
+
+Node ID: 2
+Friendly Name: HiveMind-Node-2
+Access Key: 5a9e580a2773a262cbb23fe9759881ff
+Password: 9b247ca66c7cd2b6388ad49ca504279d
+Encryption Key: 4185240103de0770
+WARNING: Encryption Key is deprecated, only use if your client does not support password
+
+

Then set the identity file on the satellite device:

+
$ hivemind-client set-identity --key 5a9e580a2773a262cbb23fe9759881ff --password 9b247ca66c7cd2b6388ad49ca504279d --host 0.0.0.0 --port 5678 --siteid test
+identity saved: /home/miro/.config/hivemind/_identity.json
+
+

Inspect the created identity file if you like:

+
$ cat ~/.config/hivemind/_identity.json
+{
+    "password": "9b247ca66c7cd2b6388ad49ca504279d",
+    "access_key": "5a9e580a2773a262cbb23fe9759881ff",
+    "site_id": "test",
+    "default_port": 5678,
+    "default_master": "ws://0.0.0.0"
+}
+
+

Test that a connection is possible using the identity file:

+
$ hivemind-client test-identity
+(...)
+2024-05-20 21:22:28.003 - OVOS - hivemind_bus_client.client:__init__:112 - INFO - Session ID: 34d75c93-4e65-4ea9-b5f4-87169dcfda01
+(...)
+== Identity successfully connected to HiveMind!
+
+

If the identity test passed, then your satellite is paired with the Hive!

+

Audio Pairing via GGWave

+
+

🚧 This feature is a proof-of-concept / work-in-progress

+
+

Data over sound for HiveMind

+

hivemind-core and hivemind-voice-sat have hivemind-ggwave support

+

Prerequisites:

+
    +
  • +

    a device with a browser, e.g., a phone

    +
  • +
  • +

    a hivemind-core device with mic and speaker, e.g., a Mark 2

    +
  • +
  • +

    an (unpaired) voice satellite device with mic and speaker, e.g., a Raspberry Pi

    +
  • +
  • +

    all devices need to be in audible range; each must be able to hear the sounds emitted by the others

    +
  • +
+

Workflow:

+
    +
  • +

    when launching hivemind-core, take note of the provided code, e.g., HMPSWD:ce357a6b59f6b1f9

    +
  • +
  • +

    copy and paste the code and emit it via ggwave (see below)

    +
  • +
  • +

    the voice satellite will decode the password, generate an access key, and send it back via ggwave

    +
  • +
  • +

    the master adds a client with the key + password, then sends an ack (containing the host) via ggwave

    +
  • +
  • +

    the satellite device gets the ack, then connects to the received host

    +
  • +
+
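The workflow above amounts to a three-step handshake carried over sound. A schematic sketch of the exchange (only the `HMPSWD:` code format appears in the docs; the `HMKEY:`/`HMACK:` reply formats here are hypothetical stand-ins, and the real transport is ggwave audio via hivemind-ggwave):

```python
# Schematic of the GGWave pairing handshake. The real exchange happens
# over audio; the HMKEY/HMACK payload formats below are hypothetical.
import secrets


def core_announce(password: str) -> str:
    # Step 1: hivemind-core emits its pairing password over sound
    return f"HMPSWD:{password}"


def satellite_reply(announcement: str) -> str:
    # Step 2: the satellite decodes the password and answers
    # with a freshly generated access key
    assert announcement.startswith("HMPSWD:")
    access_key = secrets.token_hex(8)
    return f"HMKEY:{access_key}"


def core_ack(host: str) -> str:
    # Step 3: the master registers key + password, then acks with its host
    return f"HMACK:{host}"


ann = core_announce("ce357a6b59f6b1f9")
reply = satellite_reply(ann)
ack = core_ack("ws://192.168.1.9")
print(ack)  # → HMACK:ws://192.168.1.9
```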

img_9.png

+ + + + + + +

The Identity File

+

The identity file is a crucial component in the HiveMind ecosystem, as it stores the necessary credentials and settings for a node (device) to connect and communicate within the HiveMind network. This file ensures that the node can authenticate and maintain secure connections with other nodes.

+

While connection parameters can be set at launch time, this file provides a way to reuse them across the whole OS

+

Contents of the identity file

+

The identity file, typically located at ~/.config/hivemind/_identity.json, contains the following information:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
nameA human-readable label for the node, which is not guaranteed to be unique.
passwordThe password used to generate a session AES key for secure communication within the HiveMind network.
access_keyA unique access key assigned to the node for identification and authentication.
site_idAn identifier for the physical location or context in which the node is operating.
default_portThe default port number used to connect to the HiveMind core.
default_masterThe default host (address) of the HiveMind core that the node connects to.
public_keyThe ASCII-encoded public PGP key used for authenticating the node within the HiveMind network.
secret_keyThe path to the private PGP key file, which uniquely identifies the node and proves its identity.
+
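A small sketch of loading and sanity-checking the identity file described above (path and field names are as documented; the validation function itself is illustrative, not part of any HiveMind package):

```python
# Load and sanity-check a HiveMind identity file. The field names match
# the documented ~/.config/hivemind/_identity.json; the validation
# logic is illustrative.
import json
import tempfile
from pathlib import Path

REQUIRED_FIELDS = {"password", "access_key", "site_id",
                   "default_port", "default_master"}


def load_identity(path: Path) -> dict:
    """Load the identity file and check the documented fields exist."""
    identity = json.loads(path.read_text())
    missing = REQUIRED_FIELDS - identity.keys()
    if missing:
        raise ValueError(f"identity file missing fields: {missing}")
    return identity


# Demo with a copy of the documented file contents:
sample = {
    "password": "9b247ca66c7cd2b6388ad49ca504279d",
    "access_key": "5a9e580a2773a262cbb23fe9759881ff",
    "site_id": "test",
    "default_port": 5678,
    "default_master": "ws://0.0.0.0",
}
path = Path(tempfile.gettempdir()) / "_identity.json"
path.write_text(json.dumps(sample))
print(load_identity(path)["site_id"])  # → test
```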

By maintaining these details in the identity file, nodes can securely and efficiently participate in the HiveMind network, facilitating a seamless and secure distributed communication environment.

+

If a node needs to securely communicate with or authenticate another node (that isn't the master), it can do so via the public key. See the section on intercom messages for more details.

+

Groups of devices can also be targeted via their site_id; for example, we can propagate a speak message to announce that dinner is ready, or broadcast a bus message ordering all devices in a certain area equipped with a camera to take a picture.

+

Public Key

+

The Public Key in the identity file is part of a PGP key pair used to uniquely identify the node. This key serves several purposes:

+
    +
  1. +

    Unique Node Identification: The PGP key uniquely identifies this node within the HiveMind network, ensuring that each node can be distinctly recognized.

    +
  2. +
  3. +

    Inter-Node Authentication: Nodes can use the PGP key to authenticate each other, providing a layer of security that ensures only authorized nodes can communicate within the network.

    +
  4. +
  5. +

    Network Independence: The PGP key allows nodes to identify each other regardless of the specific HiveMind core (mind) they are connected to. This means that even if nodes switch Minds, they can still recognize and authenticate each other using their PGP keys.

    +
  6. +
+

Private Key

+

The Private Key is the only way for a node to read a message encrypted with its corresponding public key. This file must be kept safe and private at all times!

+

By default, the private key is stored in ~/.config/hivemind/HiveMindComs.asc

+

If you believe your private key has been compromised, or you simply want to change keys, you can use the hivemind-client reset-pgp command

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/04_plugins/index.html b/04_plugins/index.html new file mode 100644 index 0000000..23fd34a --- /dev/null +++ b/04_plugins/index.html @@ -0,0 +1,545 @@ + + + + + + + + Plugins - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

OVOS Plugins Compatibility

+

Hivemind leverages ovos-plugin-manager, bringing compatibility with hundreds of plugins.

+
+

💡 OVOS plugins can be used both on client and server side

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin TypeDescriptionDocumentation
MicrophoneCaptures voice inputMicrophone Documentation
VADVoice Activity DetectionVAD Documentation
WakeWordDetects wake words for interactionWakeWord Documentation
STTSpeech-to-text (STT)STT Documentation
TTSText-to-speech (TTS)TTS Documentation
G2PGrapheme-to-phoneme (G2P), used to simulate mouth movementsG2P Documentation
Media PlaybackEnables media playback (e.g., "play Metallica")Media Playback Documentation
OCP PluginsProvides playback support for URLs (e.g., YouTube)OCP Plugins Documentation
Audio TransformersProcesses audio before speech-to-text (STT)Audio Transformers Documentation
Dialog TransformersProcesses text before text-to-speech (TTS)Dialog Transformers Documentation
TTS TransformersProcesses audio after text-to-speech (TTS)TTS Transformers Documentation
PHALProvides platform-specific support (e.g., Mark 1)PHAL Documentation
+

Client side plugins

+

The tables below illustrate how plugins from the OVOS ecosystem relate to the various satellites and where they should be installed and configured

+

Audio input:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsMicrophoneVADWake WordSTT
HiveMind Voice Satellite✔️
(Required)
✔️
(Required)
✔️
(Required *)
✔️
(Required)
HiveMind Voice Relay✔️
(Required)
✔️
(Required)
✔️
(Required)
📡
(Remote)
HiveMind Microphone Satellite✔️
(Required)
✔️
(Required)
📡
(Remote)
📡
(Remote)
+ +

Audio output:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsTTSMedia PlaybackOCP extractors
HiveMind Voice Satellite✔️
(Required)
✔️
(Optional)
✔️
(Optional)
HiveMind Voice Relay📡
(Remote)
✔️
(Optional)
✔️
(Optional)
HiveMind Microphone Satellite📡
(Remote)
✔️
(Optional)
✔️
(Optional)
+

Transformers:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsAudioUtteranceMetadataDialogTTS
HiveMind Voice Satellite✔️
(Optional)
✔️
(Optional)
✔️
(Optional)
✔️
(Optional)
✔️
(Optional)
HiveMind Voice Relay
(Unsupported)
🚧
(TODO)
🚧
(TODO)
🚧
(TODO)

(Unsupported)
HiveMind Microphone Satellite
(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)
+

Other:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsG2P
(mouth movements)
PHAL
HiveMind Voice Satellite✔️
(Optional)
✔️
(Optional)
HiveMind Voice Relay
(Unsupported)
✔️
(Optional)
HiveMind Microphone Satellite
(Unsupported)
✔️
(Optional)
+

Server side plugins

+

The tables below illustrate how plugins from the OVOS ecosystem relate to the various server setups and where they should be installed and configured

+

Audio input:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsMicrophoneVADWake WordSTT
Hivemind Skills Server
(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)
Hivemind Sound Server✔️
(Required)
✔️
(Required)
✔️
(Required)
✔️
(Required)
Hivemind Persona Server
(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)
+

Audio output:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsTTSMedia PlaybackOCP extractors
Hivemind Skills Server
(Unsupported)

(Unsupported)
✔️
(Optional)
Hivemind Sound Server✔️
(Required)

(Unsupported)
✔️
(Optional)
Hivemind Persona Server
(Unsupported)

(Unsupported)

(Unsupported)
+

Transformers:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsAudioUtteranceMetadataDialogTTS
Hivemind Skills Server
(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)

(Unsupported)
Hivemind Sound Server🚧
(TODO)
✔️
(Optional)
✔️
(Optional)
✔️
(Optional)
🚧
(TODO)
Hivemind Persona Server
(Unsupported)
🚧
(TODO)

(Unsupported)
🚧
(TODO)

(Unsupported)
+

Other:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Supported PluginsG2P
(mouth movements)
PHAL
Hivemind Skills Server
(Unsupported)

(Unsupported)
Hivemind Sound Server
(Unsupported)

(Unsupported)
Hivemind Persona Server
(Unsupported)

(Unsupported)
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/04_protocol/index.html b/04_protocol/index.html new file mode 100644 index 0000000..ca342ef --- /dev/null +++ b/04_protocol/index.html @@ -0,0 +1,444 @@ + + + + + + + + Transport - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Protocol

+

The HiveMind Protocol enables seamless exchange of information and commands within a distributed network. It defines message types and their handling methods, serving as a transport protocol. While the protocol primarily operates with OpenVoiceOS (OVOS) messages, it is versatile enough to support other payloads.

+

The protocol is categorized into two main roles: Listener Protocol and Client Protocol.

+

Roles and Message Types

+

Listener Protocol

+
    +
  • Accepts: BUS, SHARED_BUS, PROPAGATE, ESCALATE, INTERCOM
  • +
  • Emits: BUS, PROPAGATE, BROADCAST, INTERCOM
  • +
+

Client Protocol

+
    +
  • Accepts: BUS, PROPAGATE, BROADCAST, INTERCOM
  • +
  • Emits: BUS, SHARED_BUS, PROPAGATE, ESCALATE, INTERCOM
  • +
+
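The accepted/emitted message types per role can be captured as plain sets. A sketch that encodes the two lists above (the set contents come straight from this section; the delivery-check function is illustrative):

```python
# Message types each protocol role accepts/emits, per the lists above.
LISTENER = {
    "accepts": {"BUS", "SHARED_BUS", "PROPAGATE", "ESCALATE", "INTERCOM"},
    "emits":   {"BUS", "PROPAGATE", "BROADCAST", "INTERCOM"},
}
CLIENT = {
    "accepts": {"BUS", "PROPAGATE", "BROADCAST", "INTERCOM"},
    "emits":   {"BUS", "SHARED_BUS", "PROPAGATE", "ESCALATE", "INTERCOM"},
}


def can_deliver(sender: dict, receiver: dict, msg_type: str) -> bool:
    # A message is deliverable only if the sender may emit it
    # and the receiver may accept it.
    return msg_type in sender["emits"] and msg_type in receiver["accepts"]


print(can_deliver(CLIENT, LISTENER, "ESCALATE"))  # True: clients escalate up
print(can_deliver(LISTENER, CLIENT, "ESCALATE"))  # False: listeners never emit ESCALATE
```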

Permissions

+

Permissions are based on a combination of:

+
    +
  • Access key
  • +
  • Allowed Message types
  • +
  • Blacklisted Intent types
  • +
  • Blacklisted Skill IDs
  • +
+
+

💡 Use the hivemind-core package to authorize message types or blacklist intents/skills.

+
+

Example: Allow the "speak" message type:

+
$ hivemind-core allow-msg "speak"
+
+
+

Payload Messages

+

Payload messages encapsulate OpenVoiceOS Message objects, acting as carriers for information or commands. These are the "cargo" the HiveMind Protocol transports across the network.

+

Integrations with external AI backends require middleware to process OVOS messages. See hivemind-persona for an example implementation.

+
+

⚠️ All HiveMind servers are expected to handle natural language queries. At a minimum, the recognizer_loop:utterance OVOS message must be supported.

+

💡 Use the hivemind-websocket-client package to send a bus message from the command line

+
+
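The minimum required payload is a standard OVOS `recognizer_loop:utterance` message. A sketch of that JSON shape (the `utterances` list and `lang` field follow OVOS Message conventions; the context field shown is illustrative):

```python
# Minimal OVOS utterance message: the one payload every HiveMind server
# must handle. The context field shown here is illustrative.
import json

msg = {
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["what time is it"], "lang": "en-us"},
    "context": {"site_id": "test"},
}
serialized = json.dumps(msg)
print(json.loads(serialized)["data"]["utterances"][0])  # → what time is it
```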

BUS Message

+
    +
  • Purpose: Single-hop communication between slaves and masters.
  • +
  • Behavior:
      +
    • A master receiving a BUS message checks global whitelists/blacklists and slave permissions.
    • +
    • Authorized messages are injected into the master's OVOS-core bus.
    • +
    • Direct responses from the master's OVOS-core are forwarded back to the originating slave.
    • +
    +
  • +
+

Command Line:

+
$ hivemind-client send-mycroft --help
+Usage: hivemind-client send-mycroft [OPTIONS]
+
+  send a single mycroft message
+
+Options:
+  --key TEXT       HiveMind access key (default read from identity file)
+  --password TEXT  HiveMind password (default read from identity file)
+  --host TEXT      HiveMind host (default read from identity file)
+  --port INTEGER   HiveMind port number (default: 5678)
+  --siteid TEXT    location identifier for message.context  (default read from
+                   identity file)
+  --msg TEXT       ovos message type to inject
+  --payload TEXT   ovos message.data json
+  --help           Show this message and exit.
+
+
+

💡 Valid payloads for OVOS can be found here

+
+

Visualization:

+

BUS Message Flow

+

SHARED_BUS Message

+
    +
  • Purpose: Passive monitoring of a slave device's OVOS-core bus.
  • +
  • Direction: Slave → Master.
  • +
  • Behavior:
      +
    • Requires explicit configuration on the slave device.
    • +
    • Similar to BUS, but for observation, not processing.
    • +
    +
  • +
+
+

💡 This feature is typically enabled through the HiveMind Skill.

+
+

Visualization:

+

Shared Bus Message Flow

+
+

INTERCOM Message

+

Messages may also be encrypted with a node's public_key; this ensures intermediate nodes are unable to read the message contents.

+

An encrypted message is a regular hive message, but has the type "INTERCOM" and payload {"ciphertext": "XXXXXXX"}

+

Where "ciphertext" can only be decrypted by the target Node, not by any intermediary

+

These messages are usually carried as the payload of transport messages such as ESCALATE or PROPAGATE.

+
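Structurally, an INTERCOM message is an ordinary hive message whose payload is a ciphertext blob, typically nested inside a transport message. A sketch of that nesting (the ciphertext is a placeholder; real messages are PGP-encrypted, and the key names here are illustrative):

```python
# Structural sketch of an INTERCOM message nested inside a PROPAGATE
# transport message. The ciphertext is a placeholder, not real PGP,
# and the dict key names are illustrative.
intercom = {
    "msg_type": "INTERCOM",
    "payload": {"ciphertext": "XXXXXXX"},
}
propagate = {
    "msg_type": "PROPAGATE",
    "payload": intercom,  # transport messages carry another hive message
}
print(propagate["payload"]["msg_type"])  # → INTERCOM
```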
+

💡 Intermediate nodes do not know the contents of the message, nor who the recipient is

+
+

When a message needs to be sent securely, it is encrypted using the recipient node's public PGP key. This ensures that only the intended recipient, who possesses the corresponding private PGP key, can decrypt the message.

+

After encryption, the message is signed with the sender's private PGP key. This provides authentication and integrity, ensuring that the message has not been tampered with and confirming the sender's identity.

+

Upon receiving an encrypted message, the recipient node attempts to decrypt it using its private PGP key. If successful, the message payload is then processed and emitted internally.

+

The target node's public key needs to be known beforehand if you want to send secret messages.

+
+

Transport Messages

+

Transport messages encapsulate another HiveMessage object as their payload. These types are particularly relevant for Nested Hives.

+

BROADCAST Message

+
    +
  • Purpose: Multi-hop communication from master → slaves.
  • +
  • Behavior:
      +
    • Disseminates messages to all connected slaves.
    • +
    • Supports target_site_id for directing messages to specific nodes.
    • +
    +
  • +
+

Example: A master can make all slaves in site_id: "kitchen" speak a specific message.

+
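The target_site_id routing can be sketched as a filter over the connected slaves (illustrative only; the real routing happens inside hivemind-core):

```python
# Illustrative BROADCAST targeting by site_id.
slaves = [
    {"name": "kitchen-pi", "site_id": "kitchen"},
    {"name": "bedroom-pi", "site_id": "bedroom"},
    {"name": "kitchen-display", "site_id": "kitchen"},
]


def broadcast_targets(slaves, target_site_id=None):
    # No target_site_id → deliver to every connected slave
    if target_site_id is None:
        return slaves
    return [s for s in slaves if s["site_id"] == target_site_id]


names = [s["name"] for s in broadcast_targets(slaves, "kitchen")]
print(names)  # → ['kitchen-pi', 'kitchen-display']
```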
+

💡 BROADCAST messages are typically sent by skills running in a hivemind server

+
+

Visualization:

+

Broadcast Message Flow

+

ESCALATE Message

+
    +
  • Purpose: Multi-hop communication from slave → master.
  • +
  • Behavior:
      +
    • Elevates messages up the authority chain for higher-level processing.
    • +
    +
  • +
+

Visualization:

+

Escalate Message Flow

+

Command Line:

+
$ hivemind-client escalate --help
+Usage: hivemind-client escalate [OPTIONS]
+
+  escalate a single mycroft message
+
+Options:
+  --key TEXT       HiveMind access key (default read from identity file)
+  --password TEXT  HiveMind password (default read from identity file)
+  --host TEXT      HiveMind host (default read from identity file)
+  --port INTEGER   HiveMind port number (default: 5678)
+  --siteid TEXT    location identifier for message.context  (default read from
+                   identity file)
+  --msg TEXT       ovos message type to inject
+  --payload TEXT   ovos message.data json
+  --help           Show this message and exit.
+
+
+

PROPAGATE Message

+
    +
  • Purpose: Multi-hop communication in both directions (master ↔ slaves).
  • +
  • Behavior:
      +
    • Ensures the message is delivered to all relevant nodes.
    • +
    +
  • +
+

Visualization:

+

Propagate Message Flow

+

Command Line:

+
$ hivemind-client propagate --help
+Usage: hivemind-client propagate [OPTIONS]
+
+  propagate a single mycroft message
+
+Options:
+  --key TEXT       HiveMind access key (default read from identity file)
+  --password TEXT  HiveMind password (default read from identity file)
+  --host TEXT      HiveMind host (default read from identity file)
+  --port INTEGER   HiveMind port number (default: 5678)
+  --siteid TEXT    location identifier for message.context  (default read from
+                   identity file)
+  --msg TEXT       ovos message type to inject
+  --payload TEXT   ovos message.data json
+  --help           Show this message and exit.
+
+
+

Protocol Features

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureProtocol v0Protocol v1
JSON serialization
Binary serialization
Pre-shared AES key
Password handshake
PGP handshake
Zlib compression
+
+

⚠️ Protocol v0 is deprecated! However, some clients (e.g., HiveMind-Js) may not yet support Protocol Version 1.

+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/05_presence/index.html b/05_presence/index.html new file mode 100644 index 0000000..bd86eb9 --- /dev/null +++ b/05_presence/index.html @@ -0,0 +1,244 @@ + + + + + + + + Auto Discovery - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Auto Discovery

+

Hivemind-presence is a utility that enables auto-discovery of HiveMind nodes in your network

+

Command line usage

+
$ hivemind-presence --help
+Usage: hivemind-presence [OPTIONS] COMMAND [ARGS]...
+
+Options:
+  --help  Show this message and exit.
+
+Commands:
+  announce  Advertise node in the local network
+  scan      scan for hivemind nodes in the local network
+
+

Announce your HiveMind node in your LAN via UPnP and Zeroconf

+
$ hivemind-presence announce --help
+Usage: hivemind-presence announce [OPTIONS]
+
+  Advertise node in the local network
+
+Options:
+  --port INTEGER       HiveMind port number (default: 5678)
+  --name TEXT          friendly device name (default: HiveMind-Node)
+  --service-type TEXT  HiveMind service type (default: HiveMind-websocket)
+  --zeroconf BOOLEAN   advertise via zeroconf (default: True)
+  --upnp BOOLEAN       advertise via UPNP (default: False)
+  --help               Show this message and exit.
+
+
+

Scan for HiveMind nodes in your LAN via UPnP and Zeroconf

+
$ hivemind-presence scan --help
+Usage: hivemind-presence scan [OPTIONS]
+
+  scan for hivemind nodes in the local network
+
+Options:
+  --zeroconf BOOLEAN   scan via zeroconf (default: True)
+  --upnp BOOLEAN       scan via UPNP (default: False)
+  --service-type TEXT  HiveMind service type (default: HiveMind-websocket)
+  --help               Show this message and exit.
+
+
$ hivemind-presence scan
+            HiveMind Nodes            
+┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━┓
+┃ Friendly Name ┃ Host         ┃ Port ┃
+┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━┩
+│   living_room │ 192.168.1.9  │ 5678 │
+│       kitchen │ 192.168.1.13 │ 5678 │
+└───────────────┴──────────────┴──────┘
+
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/06_skills_server/index.html b/06_skills_server/index.html new file mode 100644 index 0000000..3b8e973 --- /dev/null +++ b/06_skills_server/index.html @@ -0,0 +1,289 @@ + + + + + + + + Skills Server - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Skills Server

+

Hivemind-core is the reference integration with OpenVoiceOS

+

img_11.png

+
+

💡 For a minimal install you only need hivemind-core, ovos-core and ovos-messagebus

+
+

Install

+
pip install hivemind-core
+
+

Usage

+

Everything is done via the hivemind-core command; see pairing for more info

+
$ hivemind-core --help
+Usage: hivemind-core [OPTIONS] COMMAND [ARGS]...
+
+Options:
+  --help  Show this message and exit.
+
+Commands:
+  add-client     add credentials for a client
+  allow-msg      allow message types sent from a client
+  delete-client  remove credentials for a client
+  list-clients   list clients and credentials
+  listen         start listening for HiveMind connections
+
+
$ hivemind-core listen --help
+Usage: hivemind-core listen [OPTIONS]
+
+  start listening for HiveMind connections
+
+Options:
+  --host TEXT       HiveMind host
+  --port INTEGER    HiveMind port number
+  --ssl BOOLEAN     use wss://
+  --cert_dir TEXT   HiveMind SSL certificate directory
+  --cert_name TEXT  HiveMind SSL certificate file name
+  --db-backend [redis|json|sqlite]
+                                  Select the database backend to use. Options:
+                                  redis, sqlite, json.
+  --db-name TEXT                  [json/sqlite] The name for the database
+                                  file. ~/.cache/hivemind-core/{name}
+  --db-folder TEXT                [json/sqlite] The subfolder where database
+                                  files are stored. ~/.cache/{db_folder}}
+  --redis-host TEXT               [redis] Host for Redis. Default is
+                                  localhost.
+  --redis-port INTEGER            [redis] Port for Redis. Default is 6379.
+  --redis-password TEXT           [redis] Password for Redis. Default None
+  --help            Show this message and exit.
+
+
+

Why HiveMind?

+

HiveMind offers a decentralized solution for OVOS, with features such as secure communication, device integration, and protocol transparency. Here's what it brings to the table:

+
    +
  • +

    HiveMind as an OVOS Add-on
    + Start with OVOS by installing ovos-core, or use a Mycroft device. Then, run hivemind-core to enable HiveMind functionality. This transforms your OVOS node into a connected system with the "brain" of HiveMind.

    +
  • +
  • +

    Decentralizing OVOS-Core
    + With HiveMind, thin clients like the voice satellite can connect without running full OVOS software. This allows for multiple access points (e.g., microphones across your home) while keeping the core in a central location.

    +
  • +
  • +

    Encrypted Communication
    + HiveMind supports SSL-encrypted communication, eliminating the need for manual certificate management. It auto-generates self-signed certificates for secure, encrypted connections between devices.

    +
  • +
  • +

    MessageBus Authentication & Security
    + HiveMind enforces authentication for the message bus, ensuring only authorized clients can connect. This enhances privacy and prevents unauthorized access, unlike traditional setups where the message bus is open.

    +
  • +
  • +

    Exposing OVOS to the Web Safely
    + HiveMind can expose your OVOS instance securely over the web. By using the Flask chatroom template, you can interact with OVOS remotely while maintaining privacy and security.

    +
  • +
  • +

    Protocol for Integration
    + HiveMind allows integration with external platforms like Android, Mattermost, or Twitch. Whether you want to turn OVOS into a chatbot or integrate it with other services, HiveMind provides the protocol for seamless interaction.

    +
  • +
+
+

Key Features & Setup

+
    +
  • HiveMind in Action:
  • +
  • Devices Connecting: Install the HiveMind CLI and register with your OVOS node to connect devices across your network.
  • +
  • Decentralization: Use lightweight devices like Raspberry Pi with HiveMind to extend OVOS functionality across rooms.
  • +
  • Encryption & Authentication: Safely transmit data over SSL, with built-in encryption and message authentication.
  • +
  • Web Exposure: Use HiveMind’s secure web interface to interact with OVOS remotely.
  • +
  • Chat Integrations: Install bridges like the HackChat or Mattermost bridges to bring OVOS to chat platforms.
  • +
+

By leveraging HiveMind's features, you can transform OVOS into a flexible, decentralized, and secure platform, capable of handling a wide variety of use cases and integrations.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/06_sound_server/index.html b/06_sound_server/index.html new file mode 100644 index 0000000..8b9eb2d --- /dev/null +++ b/06_sound_server/index.html @@ -0,0 +1,285 @@ + + + + + + + + Sound Server - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind Sound Server

+

hivemind-listener extends hivemind-core and integrates with ovos-simple-listener, enabling audio-based communication with advanced features for secure, distributed voice assistant functionality.

+
+

💡 If you are running a home server, this is the best option: you only need to install hivemind-listener, ovos-core, and ovos-messagebus.

+

⚠️ If running on a device that is also a full OVOS assistant by itself, you should use hivemind-core instead.

+
+

Key Features of HiveMind Listener

+
    +
  • +

    Audio Stream Handling:
    + Accepts encrypted binary audio streams, performing WakeWord detection, Voice Activity Detection (VAD), Speech-to-Text (STT), and Text-to-Speech (TTS) directly on the hivemind-listener instance.
    +(Lightweight clients like hivemind-mic-satellite only run a microphone and VAD plugin.)

    +
  • +
  • +

    STT Service:
    + Provides STT via the hivemind-websocket-client, accepting Base64-encoded audio inputs.

    +
  • +
  • +

    TTS Service:
    + Provides TTS via the hivemind-websocket-client, returning Base64-encoded audio outputs.

    +
  • +
  • +

    Secure Plugin Access:
    + Running TTS/STT via HiveMind Listener requires an access key, offering fine-grained access control compared to non-authenticated server plugins.

    +
  • +
+
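Since the STT and TTS services described above exchange audio as Base64 text, a client has to encode raw audio bytes before sending a request and decode the response afterwards. A minimal sketch of that encoding step, assuming a WAV file on disk (the helper names here are hypothetical and are not part of the hivemind-websocket-client API):

```python
import base64

def encode_wav_for_stt(path: str) -> str:
    """Read a WAV file and return its contents as Base64 text (hypothetical helper)."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def decode_tts_audio(b64_audio: str) -> bytes:
    """Decode Base64 audio, as returned by a TTS response, back into raw bytes."""
    return base64.b64decode(b64_audio)
```

The round trip is lossless: decoding the Base64 text yields the original audio bytes unchanged.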

Usage

+
    +
  1. Install HiveMind Listener:
  2. +
+
pip install hivemind-listener
+
+
    +
  1. Start the HiveMind Listener:
  2. +
+
$ hivemind-listener --help
+Usage: hivemind-listener [OPTIONS]
+
+  Run the HiveMind Listener with configurable plugins.
+
+  If a plugin is not specified, the defaults from mycroft.conf will be used.
+  mycroft.conf will be loaded as usual for plugin settings.
+
+Options:
+  --wakeword TEXT                 Specify the wake word for the listener.
+                                  Default is 'hey_mycroft'.
+  --stt-plugin TEXT               Specify the STT plugin to use.
+  --tts-plugin TEXT               Specify the TTS plugin to use.
+  --vad-plugin TEXT               Specify the VAD plugin to use.
+  --dialog-transformers TEXT      dialog transformer plugins to load.
+                                  Installed plugins: None
+  --utterance-transformers TEXT   utterance transformer plugins to load. 
+                                  Installed plugins: ['ovos-utterance-plugin-cancel']
+  --metadata-transformers TEXT    metadata transformer plugins to
+                                  load. Installed plugins: None
+  --ovos_bus_address TEXT         Open Voice OS bus address
+  --ovos_bus_port INTEGER         Open Voice OS bus port number
+  --host TEXT                     HiveMind host
+  --port INTEGER                  HiveMind port number
+  --ssl BOOLEAN                   use wss://
+  --cert_dir TEXT                 HiveMind SSL certificate directory
+  --cert_name TEXT                HiveMind SSL certificate file name
+  --db-backend [redis|json|sqlite]
+                                  Select the database backend to use. Options:
+                                  redis, sqlite, json.
+  --db-name TEXT                  [json/sqlite] The name for the database
+                                  file. ~/.cache/hivemind-core/{name}
+  --db-folder TEXT                [json/sqlite] The subfolder where database
+                                  files are stored. ~/.cache/{db_folder}
+  --redis-host TEXT               [redis] Host for Redis. Default is
+                                  localhost.
+  --redis-port INTEGER            [redis] Port for Redis. Default is 6379.
+  --redis-password TEXT           [redis] Password for Redis. Default None
+  --help                          Show this message and exit.
+
+

This command will run the HiveMind Listener, with configurable plugins for wakeword detection, STT, TTS, and VAD, as well as access control via SSL.

+
+

Example Use Cases

+
    +
  1. +

    Microphone Satellite:
    + Use hivemind-mic-satellite to stream raw audio to the hivemind-listener. Microphones handle audio capture and VAD, while the Listener manages WakeWord, STT, and TTS processing.

    +
  2. +
  3. +

    Authenticated STT/TTS Services:
    + Connect clients securely using access keys for transcribing or synthesizing audio via the HiveMind Listener, ensuring robust access control.

    +
  4. +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/07_homeassistant/index.html b/07_homeassistant/index.html new file mode 100644 index 0000000..6472a8e --- /dev/null +++ b/07_homeassistant/index.html @@ -0,0 +1,257 @@ + + + + + + + + Home Assistant - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HomeAssistant Integration

+
+

⚠️ UNMAINTAINED: currently not functional

+
+

img_20.png

+

This component will set up the following platforms.

+ + + + + + + + + + + + + +
PlatformDescription
notifySend a notification to a HiveMind Voice Assistant
+

Install

+

Using HACS install from Github

+

HiveMind setup

+

Create credentials and allow the speak message to be injected on the HiveMind side.

+

You can authorize message_types via the hivemind-core package:

+
$ hivemind-core allow-msg "speak"
+
+

imagem

+

Configuration

+

In configuration.yaml:

+

+notify:
+  - platform: hivemind
+    name: mark2
+    host: wss://192.168.1.8
+    port: 5678
+    self_signed: True
+    key: a821bc4b34590a002570f0ed7808b886
+    password: 31e6a37e472a657609ef33f237d5b286
+
+

Then call notify.mark2 with a message you'd like the HiveMind Voice Assistant to speak.

+

Manual Installation

+
    +
  1. Using the tool of choice open the directory (folder) for your HA configuration (where you find configuration.yaml).
  2. +
  3. If you do not have a custom_components directory (folder) there, you need to create it.
  4. +
  5. In the custom_components directory (folder) create a new folder called hivemind.
  6. +
  7. Download all the files from the custom_components/hivemind/ directory (folder) in this repository.
  8. +
  9. Place the files you downloaded in the new directory (folder) you created.
  10. +
  11. Restart Home Assistant
  12. +
  13. In the HA UI go to "Configuration" -> "Integrations" click "+" and search for "HiveMind Integration"
  14. +
+

Using your HA configuration directory (folder) as a starting point, you should now also have this:

+
custom_components/hivemind/translations/en.json
+custom_components/hivemind/__init__.py
+custom_components/hivemind/const.py
+custom_components/hivemind/manifest.json
+custom_components/hivemind/notify.py
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/07_micsat/index.html b/07_micsat/index.html new file mode 100644 index 0000000..60b14e3 --- /dev/null +++ b/07_micsat/index.html @@ -0,0 +1,273 @@ + + + + + + + + Mic Satellite - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind Microphone Satellite

+

OpenVoiceOS Microphone Satellite, connect to HiveMind Listener

+

A super lightweight version of voice-satellite: only the Microphone and VAD plugins run on the mic-satellite. Voice activity is streamed to hivemind-listener, and all the processing happens there.

+

Server requirements

+
+

⚠️ hivemind-listener is required server side; the default hivemind-core does not provide STT and TTS capabilities.

+
+

Install

+

Install with pip

+
$ pip install hivemind-mic-satellite
+
+

Configuration

+

The mic satellite is built on top of ovos-plugin-manager; it uses the same OpenVoiceOS configuration, ~/.config/mycroft/mycroft.conf

+

Supported plugins:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin TypeDescriptionRequiredLink
MicrophoneCaptures voice inputYesMicrophone
VADVoice Activity DetectionYesVAD
PHALPlatform/Hardware Abstraction LayerNoPHAL
G2PGenerate visemes (mouth movements), eg. for Mk1NoG2P
Media Playback PluginsEnables media playback (e.g., "play Metallica")NoMedia Playback Plugins
OCP PluginsProvides playback support for URLs (e.g., YouTube)NoOCP Plugins
+

The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins.

+

This repo needs fewer resources, but it is also missing some features:

+
    +
  • STT plugin (runs on server)
  • +
  • TTS plugin (runs on server)
  • +
  • WakeWord plugin (runs on server)
  • +
  • Continuous Listening
  • +
  • Hybrid Listening
  • +
  • Recording Mode
  • +
  • Sleep Mode
  • +
  • Multiple WakeWords
  • +
  • Audio Transformers plugins
  • +
  • Dialog Transformers plugins (TODO - support in the future)
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/07_voice_relay/index.html b/07_voice_relay/index.html new file mode 100644 index 0000000..113d7ba --- /dev/null +++ b/07_voice_relay/index.html @@ -0,0 +1,295 @@ + + + + + + + + Voice Relay - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind Voice Relay

+

OpenVoiceOS Relay, connect to HiveMind

+

A lightweight version of voice-satellite, but STT and TTS requests are sent to HiveMind instead of being handled on-device.

+

Server requirements

+
+

⚠️ hivemind-listener is required server side; the default hivemind-core does not provide STT and TTS capabilities.

+

Alternatively, run hivemind-core together with ovos-audio and ovos-dinkum-listener.

+
+

The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins.

+

The voice relay is built on top of ovos-simple-listener; while it needs fewer resources, it is also missing some features:

+
    +
  • STT plugin
  • +
  • TTS plugin
  • +
  • Audio Transformers plugins
  • +
  • Continuous Listening
  • +
  • Hybrid Listening
  • +
  • Recording Mode
  • +
  • Sleep Mode
  • +
  • Multiple WakeWords
  • +
+

If you need an even lighter implementation, consider hivemind-mic-satellite to also offload wake word detection to the server.

+

Install

+

Install with pip

+
$ pip install HiveMind-voice-relay
+
+

Usage

+
Usage: hivemind-voice-relay [OPTIONS]
+
+  connect to HiveMind
+
+Options:
+  --host TEXT      hivemind host
+  --key TEXT       Access Key
+  --password TEXT  Password for key derivation
+  --port INTEGER   HiveMind port number
+  --selfsigned     accept self signed certificates
+  --help           Show this message and exit.
+
+
+

Configuration

+

The voice relay is built on top of ovos-simple-listener and ovos-audio; it uses the default OpenVoiceOS configuration, ~/.config/mycroft/mycroft.conf

+

Supported plugin types:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin TypeDescriptionRequiredLink
MicrophoneCaptures voice inputYesMicrophone
VADVoice Activity DetectionYesVAD
WakeWordDetects wake words for interactionYesWakeWord
G2Pgrapheme-to-phoneme (G2P), used to simulate mouth movementsNoG2P
Media Playback PluginsEnables media playback (e.g., "play Metallica")NoMedia Playback Plugins
OCP PluginsProvides playback support for URLs (e.g., YouTube)NoOCP Plugins
PHALProvides platform-specific support (e.g., Mark 1)NoPHAL
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/07_voicesat/index.html b/07_voicesat/index.html new file mode 100644 index 0000000..8e379aa --- /dev/null +++ b/07_voicesat/index.html @@ -0,0 +1,313 @@ + + + + + + + + Voice Satellite - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind Voice Satellite

+

OpenVoiceOS Satellite, connect to HiveMind

+

Built on top of ovos-dinkum-listener, ovos-audio and PHAL

+

img_19.png

+

Install

+

Install dependencies (if needed)

+
sudo apt-get install -y libpulse-dev libasound2-dev
+
+

Install with pip

+
$ pip install HiveMind-voice-sat
+
+

Usage

+
Usage: hivemind-voice-sat [OPTIONS]
+
+  connect to HiveMind
+
+Options:
+  --host TEXT      hivemind host
+  --key TEXT       Access Key
+  --password TEXT  Password for key derivation
+  --port INTEGER   HiveMind port number
+  --selfsigned     accept self signed certificates
+  --help           Show this message and exit.
+
+
+

Configuration

+

The voice satellite is built on top of ovos-dinkum-listener and ovos-audio; it uses the same OpenVoiceOS configuration, ~/.config/mycroft/mycroft.conf

+

Supported plugins:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin TypeDescriptionRequiredLink
MicrophoneCaptures voice inputYesMicrophone
VADVoice Activity DetectionYesVAD
WakeWordDetects wake words for interactionYes*WakeWord
STTspeech-to-text (STT)YesSTT
TTStext-to-speech (TTS)YesTTS
G2Pgrapheme-to-phoneme (G2P), used to simulate mouth movementsNoG2P
Media Playback PluginsEnables media playback (e.g., "play Metallica")NoMedia Playback Plugins
OCP PluginsProvides playback support for URLs (e.g., YouTube)NoOCP Plugins
Audio TransformersProcesses audio before speech-to-text (STT)NoAudio Transformers
Dialog TransformersProcesses text before text-to-speech (TTS)NoDialog Transformers
TTS TransformersProcesses audio after text-to-speech (TTS)NoTTS Transformers
PHALProvides platform-specific support (e.g., Mark 1)NoPHAL
+ +

You can optimize your voice satellite for a variety of platforms by selecting different plugin combinations.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/08_persona/index.html b/08_persona/index.html new file mode 100644 index 0000000..59273a9 --- /dev/null +++ b/08_persona/index.html @@ -0,0 +1,222 @@ + + + + + + + + Persona Server - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Persona

+

This is a HiveMind master node, but it is running ovos-persona instead of connecting to ovos-core.

+

You can use this to expose chatbots and LLMs via HiveMind; satellites made for hivemind-core should be compatible.

+
+

⚠️ Satellites made specifically for hivemind-listener (Sound server) will not work with hivemind-persona!

+
+

img_13.png

+

Install

+
pip install hivemind-persona
+
+

ChatGPT

+

Install the OpenAI solver

+

Create a chatgpt.json:

+
{
+  "name": "ChatGPT",
+  "solvers": [
+    "ovos-solver-openai-persona-plugin"
+  ],
+  "ovos-solver-openai-persona-plugin": {
+    "api_url": "<your_local_LocalAI_server_url>",
+    "key": "<your_OpenAI_key>",
+    "persona": "helpful, creative, clever, and very friendly."
+  }
+}
+
+

Launch hivemind-persona with the created file:

+

hivemind-persona --persona chatgpt.json

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/09_matrix/index.html b/09_matrix/index.html new file mode 100644 index 0000000..214d60b --- /dev/null +++ b/09_matrix/index.html @@ -0,0 +1,230 @@ + + + + + + + + Matrix - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind - Matrix bridge

+

What is it?

+

Matrix is a chat protocol. It works a little like email, but instantaneous and secure:

+
    +
  • +

    You need to register an account at a provider

    +
  • +
  • +

    Whatever your provider is, you can talk to people using other providers

    +
  • +
  • +

    In the same way you can use Outlook or Thunderbird with the same email account, you can use different Matrix apps for the same Matrix account.

    +
  • +
+

img_16.png

+

Install

+

Install from GitHub

+

Usage

+

imagem

+
Usage: HiveMind-matrix run [OPTIONS]
+
+  connect a matrix chatroom to hivemind
+
+Options:
+  --botname TEXT      thehivebot
+  --matrixtoken TEXT  syt_dGhl.....
+  --matrixhost TEXT   https://matrix.org
+  --room TEXT         #hivemind-bots:matrix.org
+  --key TEXT          HiveMind access key (default read from identity file)
+  --password TEXT     HiveMind password (default read from identity file)
+  --host TEXT         HiveMind host (default read from identity file)
+  --port INTEGER      HiveMind port number (default: 5678)
+  --help              Show this message and exit.
+
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/10_deltachat/index.html b/10_deltachat/index.html new file mode 100644 index 0000000..bcf0cfa --- /dev/null +++ b/10_deltachat/index.html @@ -0,0 +1,222 @@ + + + + + + + + DeltaChat - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

DeltaChat Bridge

+

Delta Chat is a messaging app that works over e-mail.

+

End-to-End Encryption using Autocrypt and CounterMITM protocols, with multiple security audits.

+

img_12.png

+

Install

+
$ pip install HiveMind-deltachat-bridge
+
+

Usage

+

img.png

+
$ hm-deltachat-bridge --help
+
+usage: __main__.py [-h] --access_key ACCESS_KEY --email EMAIL --password
+                   PASSWORD [--crypto_key CRYPTO_KEY] [--name NAME]
+                   [--host HOST] [--port PORT]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --access_key ACCESS_KEY
+                        hivemind access key
+  --email EMAIL         deltachat email
+  --password PASSWORD   deltachat password
+  --crypto_key CRYPTO_KEY
+                        payload encryption key
+  --name NAME           human readable device name
+  --host HOST           HiveMind host
+  --port PORT           HiveMind port number
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/11_devs/index.html b/11_devs/index.html new file mode 100644 index 0000000..fc007ba --- /dev/null +++ b/11_devs/index.html @@ -0,0 +1,199 @@ + + + + + + + + Libraries - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind makes heavy use of OVOS technology as imported libraries; additional client libraries are provided that implement the HiveMind protocol.

+

Libraries

+

If you are implementing a client from scratch, these libraries might be of interest:

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/12_handshake/index.html b/12_handshake/index.html new file mode 100644 index 0000000..e358962 --- /dev/null +++ b/12_handshake/index.html @@ -0,0 +1,477 @@ + + + + + + + + Handshake - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Handshake Protocol

+

This document provides an overview of the handshake protocol used in the HiveMind system, detailing how handshakes are initiated and processed from both the client (slave) and server (master) perspectives.

+

The handshake process establishes a secure connection between a HiveMind master and its slaves. It ensures authentication, optionally using passwords or public/private key pairs, and sets up cryptographic keys for secure communication.

+

For detailed code and various usage examples, you can refer to the Poorman Handshake GitHub Repository.

+
+

Handshake Types

+

Password-Based Handshake:

+
    +
  • +

    Utilizes a shared password for authentication.

    +
  • +
  • +

    Requires both client and server to know the password beforehand.

    +
  • +
+

Public Key Handshake:

+
    +
  • +

    Based on public/private key pairs.

    +
  • +
  • +

    The server provides a public key to the client, and the client verifies the server's authenticity.

    +
  • +
  • +

    Supports implicit trust for first-time connections (when no public key is available).

    +
  • +
  • +

    Uses asymmetric encryption (RSA, for example) to ensure that communication is secure and cannot be intercepted or modified.

    +
  • +
  • +

    Encrypts the symmetric session key to allow further communication using the shared key.

    +
  • +
+
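The password-based variant can be illustrated with a small, self-contained sketch. This is a conceptual model only, not the poorman_handshake API: both sides derive the same session key from the shared password plus exchanged random salts, then prove knowledge of the password with an HMAC "envelope" over the handshake transcript (all names here are illustrative).

```python
import os, hashlib, hmac

def derive_session_key(password: str, salt_client: bytes, salt_server: bytes) -> bytes:
    # Both peers run the same derivation, so they arrive at the same key
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               salt_client + salt_server, 100_000)

def make_envelope(session_key: bytes, transcript: bytes) -> bytes:
    # The envelope proves knowledge of the password without revealing it
    return hmac.new(session_key, transcript, hashlib.sha256).digest()

# client and server exchange salts, then independently compute the key
salt_c, salt_s = os.urandom(16), os.urandom(16)
key_client = derive_session_key("hunter2", salt_c, salt_s)
key_server = derive_session_key("hunter2", salt_c, salt_s)

# each side validates the other's envelope over the shared transcript
assert hmac.compare_digest(
    make_envelope(key_client, b"HELLO|HANDSHAKE"),
    make_envelope(key_server, b"HELLO|HANDSHAKE"),
)
```

A wrong password would produce a different key, so envelope verification fails and the handshake is rejected.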
+

Workflow: Server Perspective

+

HELLO Message:

+
    +
  • +

    The server sends a HELLO message to the client containing:

    +
      +
    • +

      Public key (pubkey) for key-based handshake.

      +
    • +
    • +

      Node ID (node_id) for identification.

      +
    • +
    • +

      Optional session_id for session-based communication.

      +
    • +
    +
  • +
+

HANDSHAKE Request:

+
    +
  • +

    The server initiates the handshake by sending a HANDSHAKE message:

    +
      +
    • +

      Specifies whether to use password-based or public-key-based authentication.

      +
    • +
    • +

      Includes optional fields like:

      +
    • +
    • +

      crypto_key: A flag indicating whether a pre-shared cryptographic key is available for use in the handshake (but not the key itself).

      +
    • +
    • +

      binarize: Flag for binary protocol support.

      +
    • +
    • +

      password: Indicator for password-based handshake.

      +
    • +
    +
  • +
+

Validate Client's Response:

+
    +
  • +

    If the client provides an envelope:

    +
      +
    • +

      Validate the client's response using the shared password or public key.

      +
    • +
    • +

      Update the cryptographic key for secure communication.

      +
    • +
    +
  • +
  • +

If the crypto_key flag is set or the client doesn't answer the handshake, use the pre-shared cryptographic key directly, skipping the handshake step.

    +
  • +
+
+

Workflow: Client Perspective

+

Receive HELLO Message:

+
    +
  • +

    Extract the server's public key and node ID from the HELLO message.

    +
  • +
  • +

    Store the session ID if provided.

    +
  • +
+

Start Handshake:

+
    +
  • +

    Determine the handshake type based on the server's HANDSHAKE request:

    +
      +
    • +

      Password-based handshake:

      +
    • +
    • +

      Generate an envelope using the shared password.

      +
    • +
    • +

      Public-key-based handshake:

      +
    • +
    • +

      Verify the server's public key (if available).

      +
    • +
    • +

      Generate and send an envelope for authentication.

      +
    • +
    +
  • +
+

Handle Validation:

+
    +
  • +

    If the server sends an envelope for validation:

    +
      +
    • +

      Verify the server's authenticity using the shared password or public key.

      +
    • +
    • +

      Update the cryptographic key for secure communication.

      +
    • +
    +
  • +
+
+

Handshake Message Structure

+

HELLO Message

+
    +
  • From Server:
  • +
+
{
+  "type": "HELLO",
+  "payload": {
+    "pubkey": "<server_public_key>",
+    "node_id": "<server_node_id>",
+    "session_id": "<session_id (optional)>"
+  }
+}
+
+

HANDSHAKE Message

+
    +
  • From Server:
  • +
+
{
+  "type": "HANDSHAKE",
+  "payload": {
+    "password": "<bool>",
+    "crypto_key": "<bool (flag indicating availability of pre-shared key)>",
+    "binarize": "<bool>",
+    "envelope": "<handshake_envelope (if client has started)>"
+  }
+}
+
+
    +
  • From Client:
  • +
+
{
+  "type": "HANDSHAKE",
+  "payload": {
+    "pubkey": "<client_public_key (if using pubkey)>",
+    "envelope": "<handshake_envelope>",
+    "binarize": "<bool>",
+    "session": "<session_data>",
+    "site_id": "<client_site_id>"
+  }
+}
+
+
+

Key Functions and Responsibilities

+

Server

+

Start Handshake:

+
    +
  • Ensure the client is authorized to join the HiveMind network.
  • +
+

Broadcast Key:

+
    +
  • Send the server's public key for public-key-based handshakes.
  • +
+

Verify Envelope:

+
    +
  • Authenticate the client using the received envelope and establish the shared cryptographic key.
  • +
+

Client

+

Generate Envelope:

+
    +
  • Create an envelope for authentication based on the handshake type.
  • +
+

Verify Server:

+
    +
  • Use the public key to verify the server's authenticity.
  • +
+

Update Session:

+
    +
  • Store the server-provided session ID and synchronize it with local sessions.
  • +
+
+

Secure Communication After Handshake

+

Upon successful handshake:

+
    +
  1. +

    A shared cryptographic key is established between the server and the client.

    +
  2. +
  3. +

    All further communication between the server and client is encrypted using this symmetric key (e.g., AES-256).

    +
  4. +
  5. +

    The session ID ensures continuity and identification in multi-session environments.

    +
  6. +
+

This guarantees that all data exchanged between the server and the client is protected, even if intercepted by a third party.

+
+

Error Handling

+

Illegal Messages:

+
    +
  • Messages not adhering to the protocol are logged, and the connection may be terminated.
  • +
+

Handshake Failures:

+
    +
  • Authentication failures result in handshake termination and rejection of the connection.
  • +
+
+

Example Scenarios

+

First-Time Connection (Implicit Trust)

+
    +
  1. Server sends HELLO with its public key.
  2. +
  3. Client trusts the server and starts the handshake.
  4. +
  5. A shared cryptographic key is established for encrypted communication.
  6. +
+

Reconnection with Password

+
    +
  1. Server requests a password-based handshake.
  2. +
  3. Client generates an envelope using the shared password.
  4. +
  5. Server validates the envelope and establishes a secure session.
  6. +
+
+

For detailed code and various usage examples, please refer to the Poorman Handshake GitHub Repository.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/13_mycroft/index.html b/13_mycroft/index.html new file mode 100644 index 0000000..7c1cdaa --- /dev/null +++ b/13_mycroft/index.html @@ -0,0 +1,283 @@ + + + + + + + + OVOS Messages - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Messages

+

The OpenVoiceOS messagebus is considered an internal and private websocket for minds; clients do not connect directly to it.

+

A mind will inject its own context about the originating clients, and only responses to the client message will be forwarded; this provides client isolation.

+

A mind will filter incoming and outgoing messages per client; the permissions model of the HiveMind is extensive, e.g. it might refuse utterances based on the intent.

+

This info applies to ovos-core; HiveMind depends on this functionality, but it is not part of the HiveMind itself. HiveMind's responsibility is only to deliver the bus messages.

+

From the point of view of the HiveMind, you can replace ovos-core with anything, as long as you respect the mechanisms below.

+ +

Message

+

An OpenVoiceOS message consists of a JSON payload; it contains a type, some data, and a context.

+

The context is considered to be metadata and might be changed at any time in transit; the context can contain anything depending on where the message came from, and is often completely empty.

+

You can think of the message context as a sort of session data for an individual interaction. In general, messages down the chain keep the context from the original message; most listeners (e.g. skills) will only care about type and data.

+
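The type/data/context structure described above can be shown as plain JSON (the field values below are illustrative, not taken from a real session):

```python
import json

# A minimal OVOS bus message: type identifies the event, data carries the
# payload, and context holds routing metadata that may change in transit
msg = {
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["what time is it"]},
    "context": {"source": "audio", "destination": ["skills"]},
}
payload = json.dumps(msg)
```

Listeners that only care about type and data can simply ignore the context field.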

Targeting Theory

+

ovos-core uses the message context to add metadata about the messages themselves: where they come from and what they are intended for.

+

the Message object provides the following methods:

+
    +
  • message.forward method, keeps previous context.
      +
    • message continues going to same destination
    • +
    +
  • +
  • message.reply method swaps "source" with "destination"
      +
    • message goes back to source
    • +
    +
  • +
+
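The effect of these two methods on the routing context can be sketched with plain dictionaries. This is a simplified model of the behavior described above, not the actual Message implementation:

```python
# Conceptual sketch: .reply swaps "source" and "destination" so the message
# goes back where it came from; .forward keeps the previous context intact
def reply(context: dict) -> dict:
    ctx = dict(context)
    ctx["source"], ctx["destination"] = context.get("destination"), context.get("source")
    return ctx

def forward(context: dict) -> dict:
    return dict(context)  # message continues to the same destination

ctx = {"source": "remote_service", "destination": ["audio"]}
```

Here `reply(ctx)` sends the answer back to `remote_service`, while `forward(ctx)` keeps it headed to `audio`.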

The context destination parameter in the original message can be set to a list with any number of intended targets:

+
bus.emit(Message('recognizer_loop:utterance', data,
+                 context={'destination': ['audio', 'kde'],
+                          'source': "remote_service"}))
+
+

Sources

+

ovos-core injects the context when it emits an utterance; this can be either typed or spoken via the OVOS STT service.

+

STT will identify itself as audio

+

mycroft.conf defines a list of "native_sources"; by default, only audio is a native source.

+

Destinations

+

Output capable services are ovos-audio (TTS, music...)

+

TTS checks the message context to determine whether it is the intended target for the message, and will only speak under the following conditions:

+
    +
  • +

Explicitly targeted, i.e. the destination is a native_source (default: "audio")

    +
  • +
  • +

    destination is set to None

    +
  • +
  • +

    destination is missing completely

    +
  • +
+

The idea is that, for example, when the Android app is used to access OpenVoiceOS, the device at home shouldn't start to speak.

+

TTS will be executed when a native_source (e.g. audio) is the destination.

+

A missing destination, or a destination set to None, is interpreted as a multicast and should trigger all output-capable processes (be it the ovos-audio process, a web interface, the KDE plasmoid, or maybe the Android app).

+
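The gating rules above can be condensed into a single predicate. This is an assumed simplification of the behavior the docs describe, not code from ovos-audio:

```python
# Sketch: should this output service act on a message, given its context?
def should_speak(context: dict, native_sources=("audio",)) -> bool:
    dest = context.get("destination")
    if dest is None:
        # missing or explicitly None -> multicast, all outputs respond
        return True
    if isinstance(dest, str):
        dest = [dest]
    # otherwise, only respond when explicitly targeted at a native source
    return any(d in native_sources for d in dest)
```

With this model, `{"destination": ["kde"]}` stays silent at home, while an empty context triggers every output.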

OVOS-Core

+

ovos-core is responsible for managing the routing context; skills do not usually need to worry about any of this:

+
    +
  • +

    intent service will .reply to the original utterance message

    +
  • +
  • +

    all skill/intent service messages are .forward (from previous intent service .reply)

    +
  • +
+

Skills

+

OpenVoiceOS skills can do anything. If you are developing or installing a mission-critical skill, carefully evaluate what it does and whether it is HiveMind-friendly.

+

If a skill emits its own bus messages, it needs to keep message.context around.

+

Common issues:

+
    +
  • +

skills sending their own messages might not keep message.context, or might wrongly .reply to it

    +
  • +
  • +

in the context of the HiveMind, skills might not be Session-aware and may keep a shared state between clients, e.g. a client may enable a voice game for everyone

    +
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/14_localhive/index.html b/14_localhive/index.html new file mode 100644 index 0000000..7a333a1 --- /dev/null +++ b/14_localhive/index.html @@ -0,0 +1,276 @@ + + + + + + + + LocalHive - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

LocalHive

+
+

⚠️ Proof of Concept: not actively maintained

+
+

img_10.png

+

The LocalHive is a hardened OpenVoiceOS skills service; the messagebus is replaced with a HiveMind connection.

+

Coming Soon - GitHub

+

"security as a requirement, not a feature"

+
    +
  • the LocalHive is HTTP only
  • +
  • the LocalHive uses no crypto
  • +
  • the LocalHive does not require an accessKey; instead it only accepts connections coming from 0.0.0.0
  • +
  • the LocalHive rejects all connections not coming from 0.0.0.0
  • +
  • the LocalHive runs on port 6989
  • +
  • skills can not listen to each other's traffic
  • +
  • skills can only inject whitelisted messages to LocalHive (by default intents + converse + speak)
  • +
  • by default skills only register and trigger intents, nothing else
  • +
  • each skill can run in its own .venv with its own requirements
  • +
  • TODO - skills should be able to request to listen for specific messages; cross-skill communication is currently impossible
  • +
+

Permissions

+

Skills need to be explicitly authorized to send each message_type.

+
UTTERANCES = ["recognizer_loop:utterance"]
+INTENTS = [
+    "mycroft.skill.handler.start",
+    "mycroft.skill.handler.complete",
+    "intent.service.intent.get",
+    "intent.service.skills.get",
+    "intent.service.active_skills.get",
+    "intent.service.adapt.get",
+    "intent.service.padatious.get",
+    "intent.service.adapt.manifest.get",
+    "intent.service.padatious.manifest.get",
+    "intent.service.adapt.vocab.manifest.get",
+    "intent.service.padatious.entities.manifest.get",
+    "register_vocab",
+    "register_intent",
+    "detach_intent",
+    "detach_skill",
+    "add_context",
+    "remove_context",
+    "clear_context",
+    'padatious:register_intent',
+    'padatious:register_entity',
+    "mycroft.skill.set_cross_context",
+    "mycroft.skill.remove_cross_context"
+]
+CONVERSE = [
+    "skill.converse.response",
+    "skill.converse.pong",
+    "active_skill_request",
+    "intent.service.skills.activated",
+    "intent.service.skills.deactivated",
+]
+SPEAK = ["speak"]
+STOP = ["mycroft.stop"]
+
+DEFAULT = INTENTS + \
+          CONVERSE + \
+          SPEAK + \
+          ["mycroft.skills.loaded"]
+
+

Per Skill Permissions

+

you can allow additional messages per skill_id by editing the JSON file at ~/.config/LocalHive/skill_permissions.json

+
{
+    "ovos-stop.openvoiceos": ["mycroft.stop"]
+}
+
+

Usage

+

At this stage of development you need to write Python scripts that interface directly with the existing code

+

Running LocalHive

+
from local_hive.service import LocalHiveService
+from ovos_utils import wait_for_exit_signal
+
+if __name__ == "__main__":
+    localmind = LocalHiveService()
+    localmind.start()
+    wait_for_exit_signal()
+
+
+

Connecting a skill

+
from local_hive.loader import HiveMindExternalSkillWrapper
+from ovos_utils import wait_for_exit_signal
+from os.path import join, dirname
+
+path = join(dirname(__file__), "test_skills", "mycroft-joke.mycroftai")
+
+skill = HiveMindExternalSkillWrapper(path)
+
+wait_for_exit_signal()
+
+"""
+2021-05-14 03:23:50.059 | INFO     | 44310 | HiveMind-websocket-client | Connected
+2021-05-14 03:23:50.111 | INFO     | 44310 | mycroft.skills.settings:get_local_settings:83 | /home/user/.config/mycroft/skills/mycroft-joke.mycroftai/settings.json
+"""
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/15_nested/index.html b/15_nested/index.html new file mode 100644 index 0000000..f7f4900 --- /dev/null +++ b/15_nested/index.html @@ -0,0 +1,217 @@ + + + + + + + + Nested Hives - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Nested Hives

+

Now that you have a basic hive setup, you can add more Minds to it and connect them to each other.

+

To better understand how Minds can interact with each other, read the protocol

+

Nested Hiveminds in Action

+

To illustrate the power of nested Hiveminds, let's consider a scenario where two housemates, let's call them Mom and Dad, each have their own AI assistant running on OpenVoiceOS, named John and Jane, respectively.

+

image

+

While sharing a house and most of their IoT devices, Mom and Dad want to ensure that their AI assistants can control the smart home individually without interfering with each other's commands. To achieve this, they create a Hive for their house, naming it George, with at least one instance of OpenVoiceOS acting as the brain.

+

Mom and Dad then connect their AI assistants, John and Jane, as clients to the George Hive. This setup allows John and Jane to communicate with George individually but not directly with each other. Instead, their messages pass through George, which acts as an intermediary, ensuring proper communication flow. John is connected to Dad's phone and calendar and knows Dad's favorite songs. This ensures George is not bothered with personal data and that Dad has a personalized experience. The same holds true for Jane and Mom: alarms and music playlists don't get mixed together.

+

image

+

It is important to note that as soon as a Hive is decoupled, such as when Mom and Dad split their Hives, they become their own independent Masters again.

+

Now, when Dad instructs his AI assistant to adjust the lights, the message goes through George. Similarly, when Mom asks her AI assistant to set the temperature, the command is routed through George. George becomes the central point of control for the shared devices, enabling independent control for John and Jane.

+

image

+

Moreover, if guests visit their house, Mom and Dad can grant them access to George directly, for example by using the voice satellites around the house, or they can create a guest Hive under George temporarily.

+

This flexibility allows for easy integration and disconnection of Hives as required.

+

image

+

Nested Hiveminds offer a dynamic and adaptable environment for managing AI systems and devices.

+

This hierarchical organization, with clusters nested within a Master Hive, provides a scalable and efficient framework.

+

Permissions

+

Let's further explore the practical applications of nested Hiveminds by introducing another scenario. Imagine that Mom and Dad have a guest staying with them, and this guest, Bob, also has his own AI assistant. To provide Bob with access to the shared smart home functionalities, they allow Bob's AI assistant to connect to the George Hive as a client.

+

However, Mom and Dad want to ensure that Bob's AI assistant has limited permissions within their ecosystem. They configure hivemind-core, acting as a firewall, to restrict Bob's assistant from placing orders or accessing sensitive information from Mom and Dad. This fine-grained control ensures that the guest AI operates within defined boundaries, maintaining privacy and security for all parties involved.

+

image

+

Furthermore, consider a scenario where Mom and Dad have children. They can create a separate nested assistant for their kids, granting them access to specific functionalities suitable for their age and requirements. This nested assistant for the kids would have limited permissions and tailored interactions, providing a safe and engaging AI experience while keeping their privacy intact.

+

image

+

Nested Hiveminds offer a versatile framework for managing multiple AI assistants and customizing their capabilities based on individual needs and preferences.

+

By configuring access permissions and setting up appropriate firewalls, users can create an ecosystem that ensures privacy, security, and personalized experiences for each participant.

+

img_15.png

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/16_permissions/index.html b/16_permissions/index.html new file mode 100644 index 0000000..3333331 --- /dev/null +++ b/16_permissions/index.html @@ -0,0 +1,290 @@ + + + + + + + + Permissions - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

HiveMind Permission System

+

HiveMind's permission system provides fine-grained control over access to resources, such as bus messages, skills, and intents, on a per-client basis. Unlike traditional Role-Based Access Control (RBAC), HiveMind emphasizes client-specific configurations rather than predefined roles, allowing for dynamic and flexible access management.

+

Key Concepts

+
    +
  1. +

    Client-Specific Permissions:

    +
      +
    • Permissions in HiveMind are assigned to individual clients, such as users, devices, or applications. This means that each client can have a unique set of permissions based on its specific needs or restrictions.
    • +
    • Permissions control access to bus messages, skills, and intents, enabling dynamic configuration that is more granular and flexible compared to traditional RBAC systems.
    • +
    +
  2. +
  3. +

    No Predefined Roles:

    +
      +
    • HiveMind does not rely on predefined roles like “admin” or “user.” Instead, each client is configured independently with a tailored set of permissions.
    • +
    • For instance, a “basic client” might have access to general voice commands, while a “restricted client” could have specific skills or intents blocked.
    • +
    +
  4. +
  5. +

    Fine-Grained Access Control:

    +
      +
    • Permissions are not just binary (e.g., “allowed” or “denied”). They can be configured at a fine-grained level, allowing administrators to control access to specific resources, such as individual bus messages, skills, and intents.
    • +
    • This allows for maximum flexibility in defining which clients have access to what, down to the level of individual interactions.
    • +
    +
  6. +
  7. +

    Emergent Roles:

    +
      +
    • While there are no formal roles in HiveMind, roles can emerge through client-specific configurations. For example, a client with broad access might function like an "admin," while another client with limited access could serve as a "guest."
    • +
    • These roles are not predefined but are dynamically created based on the client’s permission settings.
    • +
    +
  8. +
+

Comparison to Traditional RBAC

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureTraditional RBACHiveMind Permission System
Role DefinitionPredefined roles (e.g., admin, user, guest)No predefined roles; permissions are assigned per client
PermissionsRoles are granted permissions to access resourcesPermissions are configured on a per-client basis
GranularityRoles typically have broad access to resourcesPermissions are fine-grained, allowing access control over individual resources (messages, skills, intents)
FlexibilityLess flexible, roles are staticHighly flexible, permissions can be dynamically adjusted per client
Emergent RolesPredefined roles based on job function or hierarchyRoles emerge based on client-specific configuration
+

How It Works

+
    +
  1. +

    Client Configuration:

    +
      +
    • Each client in the HiveMind ecosystem has a custom configuration that determines the actions it is allowed to perform. This configuration can be adjusted dynamically as needed.
    • +
    +
  2. +
  3. +

    Dynamic Permission Assignment:

    +
      +
    • Permissions are assigned on a per-client basis, providing administrators with the ability to specify which bus messages, skills, and intents each client can access or perform.
    • +
    +
  4. +
  5. +

    Examples:

    +
      +
    • A trusted client might be granted access to a wide range of skills and intents, including those requiring elevated privileges.
    • +
    • A restricted client could have specific actions or skills blacklisted to ensure it operates within a tightly controlled scope.
    • +
    +
  6. +
+

By leveraging client-specific configurations, HiveMind's permission system offers a highly customizable and secure approach to managing access across the ecosystem, allowing administrators to tailor the experience for each client based on their individual needs.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/17_database/index.html b/17_database/index.html new file mode 100644 index 0000000..a609c46 --- /dev/null +++ b/17_database/index.html @@ -0,0 +1,283 @@ + + + + + + + + Database - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Database Backends

+

hivemind-core supports multiple database backends to store client credentials and settings. Each has its own use case:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
BackendUse CaseDefault LocationCommand Line options
JSON (default)Simple, file-based setup for local use~/.local/share/hivemind-core/clients.jsonConfigurable via --db-name and --db-folder
SQLiteLightweight relational DB for single instances~/.local/share/hivemind-core/clients.dbConfigurable via --db-name and --db-folder
RedisDistributed, high-performance environmentslocalhost:6379Configurable via --redis-host and --redis-port
+
+

⚠️ ensure you use the same database parameters when launching hivemind-core and registering clients!

+
+

How to Choose?

+
    +
  • For scalability or multi-instance setups, use Redis.
  • +
  • For simplicity or single-device environments, use SQLite.
  • +
  • For development or to be able to edit the database by hand, use JSON.
  • +
+

Security Considerations

+

When using any of these backends, it’s important to implement security practices to safeguard sensitive data. Below are some considerations:

+

1. JSON (File-Based Storage)

+
    +
  • Security Risks: As JSON files are stored locally, they can be accessed directly by anyone with access to the file system. Without encryption, the data is vulnerable to unauthorized access.
  • +
  • Best Practices:
      +
    • File Permissions: Set restrictive permissions on the .json file to limit access to the user running hivemind-core.
    • +
    • Backups: Regularly back up this file to ensure recovery in case of data loss or corruption, while also securing backups with encryption.
    • +
    +
  • +
+

2. SQLite (Lightweight Relational Database)

+
    +
  • Security Risks: SQLite databases are stored in a file, making them susceptible to unauthorized access if file permissions are not properly configured.
  • +
  • Best Practices:
  • +
  • File Permissions: Ensure the SQLite file is owned by a specific user or group, with read and write access limited to only the user running hivemind-core.
      +
    • Database Backups: Always back up SQLite files securely and store backups in encrypted form.
    • +
    +
  • +
+

3. Redis (Distributed High-Performance)

+
    +
  • Security Risks: Redis is commonly used in distributed setups, which can introduce risks if the Redis server is exposed to the internet or local networks without proper security measures.
  • +
  • Best Practices:
  • +
  • Authentication: Always configure Redis authentication by setting a strong password using the requirepass directive in the Redis configuration file.
      +
    • Encryption: Use TLS/SSL encryption (--ssl flag) for data in transit. This ensures that data is encrypted between clients and Redis servers.
    • +
    • Access Control: Limit access to Redis to trusted clients and IP addresses by configuring the bind and protected-mode settings in the Redis configuration file.
    • +
    • Firewall: Use a firewall to restrict access to Redis from unauthorized networks, ensuring that only trusted systems can communicate with the Redis server.
    • +
    • Backups: Redis does not encrypt its persistent storage by default, so ensure that backup files (RDB/AOF) are stored securely and encrypted if necessary.
    • +
    +
  • +
+
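
As a sketch only, the Redis hardening advice above might translate into a configuration fragment like the following; the password is a placeholder, and you should adapt interfaces and values to your own deployment:

```
# /etc/redis/redis.conf (illustrative excerpt)
requirepass your-strong-password-here   # enable authentication
bind 127.0.0.1                          # only listen on trusted interfaces
protected-mode yes                      # refuse unauthenticated external access
```

For data in transit, pair this with TLS as described above so credentials and payloads are never sent in cleartext.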

General Database Security Tips:

+
    +
  • Sensitive Data Storage: Ensure that sensitive data, such as database backups, is stored securely (using encryption)
  • +
  • Regular Audits: Periodically audit your database access logs and configurations to ensure no unauthorized access has occurred.
  • +
  • Monitoring: Implement monitoring on your database systems to detect any unusual access patterns or unauthorized attempts to connect.
  • +
+

By following these best practices, you can ensure that your hivemind-core installation is secure and that client credentials and settings remain protected.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/18_binarization/index.html b/18_binarization/index.html new file mode 100644 index 0000000..1c7f50a --- /dev/null +++ b/18_binarization/index.html @@ -0,0 +1,602 @@ + + + + + + + + Binarization - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Binarization Protocol

+

The HiveMind Binarization Protocol is designed to efficiently serialize and deserialize structured messages into compact binary formats for network transmission. This document provides a high-level description of the protocol, including its structure, encoding rules, and the rationale behind key design decisions. The binary format is protocol-versioned to support backward compatibility and future extensions.

+
+

💡 the binarization scheme allows the hivemind protocol to be implemented by just flashing a light

+
+

Protocol Versions

+

The protocol uses an integer version number to indicate supported features and ensure compatibility between clients and servers. The current protocol version is 1. Any change in functionality or structure requires incrementing the version number.

+

Version-specific functionality:

+
    +
  • +

    Version 0: Original protocol design. No binarization, no handshake, only pre-shared crypto_key supported

    +
  • +
  • +

    Version 1: Introduces support for handshakes and binary payloads.

    +
  • +
+

Message Types

+

Messages in the HiveMind protocol are categorized into various types, each serving a specific role. These types are encoded as 5-bit unsigned integers, enabling up to 32 distinct types. Examples include:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueTypeDescription
0HANDSHAKEInitial connection handshake.
1BUSStandard message bus.
2SHARED_BUSShared bus for multiple nodes.
3BROADCASTGlobal message broadcast.
4PROPAGATEDirected message propagation.
12BINARYRaw binary payload.
+

Compression

+

Payloads can optionally be compressed using the zlib library. A single bit in the header indicates whether compression is applied. Compressed payloads reduce transmission size but may add slight computational overhead during encoding and decoding.

+
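
As an illustration of this trade-off, the sketch below (not taken from the hivemind codebase) compresses a payload with zlib only when doing so actually shrinks it; the returned flag is what the header's compression bit would then record:

```python
import zlib

def maybe_compress(payload, min_size=64):
    """Compress with zlib only when it pays off; the returned flag is what
    would be written to the header's Compressed bit."""
    if len(payload) >= min_size:
        compressed = zlib.compress(payload)
        if len(compressed) < len(payload):
            return compressed, True
    return payload, False  # small or incompressible payloads go out as-is

big = b'{"type": "speak", "data": {"utterance": "Hello"}}' * 40
data, flag = maybe_compress(big)          # text-heavy payload: compresses well
tiny, tiny_flag = maybe_compress(b"hi")   # tiny payload: left alone
```

The 64-byte threshold is an illustrative assumption; the key point is that compression is negotiated per message via the header bit.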

Metadata (HiveMeta)

+

HiveMeta is a reserved field for attaching arbitrary metadata to a message. The metadata is encoded as a byte array, prefixed by its size (in bytes). This allows for extensible features like routing hints or debug information.

+

Binary Message Structure

+

The serialized binary message consists of a header and a payload. All fields are packed to maximize efficiency. The structure is as follows:

+ +

The header contains information about the protocol version, message type, compression, and metadata length.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldSize (bits)Description
Start Marker1Always 1. Helps align message boundaries.
Versioned Flag1Indicates if protocol version is specified.
Protocol Version8Protocol version (if Versioned Flag is 1).
Message Type5Encoded message type.
Compressed Flag1Indicates if payload is compressed.
Metadata Length8Length of metadata in bytes.
+

Metadata

+

Metadata is optional and encodes key-value pairs or other information. If present, it follows the header and is serialized as a byte array. The length of the metadata is specified in the header.

+

Payload

+

The payload represents the core message data. Its format depends on the message type:

+
    +
  • +

    For standard messages: Encoded as a UTF-8 JSON string.

    +
  • +
  • +

    For binary messages: Encoded as raw bytes with an additional 4-bit unsigned integer indicating the binary payload type.

    +
  • +
+

Padding

+

To ensure byte alignment, padding bits (0) are inserted as needed. The total length of the message must be a multiple of 8 bits.

+

Encoding Process

+
    +
  1. +

    Start Marker: Add a single bit set to 1 to signify the start of the message.

    +
  2. +
  3. +

    Header Fields:

    +
      +
    • Add a 1-bit flag to indicate whether the protocol version is included.
    • +
    • If the version is included, append the 8-bit protocol version number.
    • +
    • Add a 5-bit message type field.
    • +
    • Add a 1-bit flag to indicate compression status.
    • +
    • Add an 8-bit metadata length field.
    • +
    +
  4. +
  5. +

    Metadata:

    +
      +
    • Serialize metadata as a JSON object (if any).
    • +
    • Compress the metadata if compression is enabled.
    • +
    • Append the serialized metadata.
    • +
    +
  6. +
  7. +

    Payload:

    +
      +
    • Serialize the payload according to the message type.
    • +
    • Compress the payload if compression is enabled.
    • +
    • Append the serialized payload.
    • +
    +
  8. +
  9. +

    Padding: Add 0 bits as needed to ensure the total length is a multiple of 8 bits.

    +
  10. +
+
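
The encoding steps above can be sketched in Python as follows. This is an illustrative re-implementation of the layout described on this page, not the reference hivemind encoder, and it leaves the Compressed flag at 0:

```python
import json

def encode_message(msg_type, payload, version=1, metadata=None):
    """Pack a message following the header layout described above
    (uncompressed; the Compressed flag is left at 0)."""
    meta_bytes = json.dumps(metadata).encode("utf-8") if metadata else b""
    payload_bytes = json.dumps(payload).encode("utf-8")

    bits = "1"                              # start marker
    bits += "1"                             # versioned flag
    bits += format(version, "08b")          # 8-bit protocol version
    bits += format(msg_type, "05b")         # 5-bit message type
    bits += "0"                             # compressed flag (off)
    bits += format(len(meta_bytes), "08b")  # metadata length in bytes
    for byte in meta_bytes + payload_bytes:
        bits += format(byte, "08b")
    bits += "0" * (-len(bits) % 8)          # pad with 0s to a byte boundary
    return int(bits, 2).to_bytes(len(bits) // 8, "big")

# The worked example from this page: version 1, type BUS (1), empty metadata
blob = encode_message(1, {"type": "speak", "data": {"utterance": "Hello"}})
```

With empty metadata the header occupies exactly 24 bits (3 bytes), so the JSON payload lands byte-aligned and no padding is needed.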

Decoding Process

+
    +
  1. +

    Alignment: Read bits until encountering the start marker (1).

    +
  2. +
  3. +

    Header Fields:

    +
      +
    • Read the Versioned Flag and determine if the protocol version is specified.
    • +
    • If specified, read the 8-bit protocol version number.
    • +
    • Read the 5-bit message type field.
    • +
    • Read the Compressed Flag.
    • +
    • Read the 8-bit metadata length field.
    • +
    +
  4. +
  5. +

    Metadata:

    +
      +
    • Read the specified number of bytes for metadata.
    • +
    • Decompress if the Compressed Flag is set.
    • +
    • Deserialize the metadata.
    • +
    +
  6. +
  7. +

    Payload:

    +
      +
    • Read the remaining bits as the payload.
    • +
    • Decompress if the Compressed Flag is set.
    • +
    • Deserialize the payload based on the message type.
    • +
    +
  8. +
+
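
A matching decoder can be sketched the same way; again this is illustrative rather than the reference implementation, and it assumes the Compressed flag is not set:

```python
import json

def _bits_to_bytes(bitstr):
    # convert a bit string to bytes, discarding any trailing partial byte
    return bytes(int(bitstr[i:i + 8], 2) for i in range(0, len(bitstr) - 7, 8))

def decode_message(blob):
    """Unpack header fields and payload from a packed message
    (illustrative; assumes an uncompressed message)."""
    bits = bin(int.from_bytes(blob, "big"))[2:].zfill(len(blob) * 8)
    pos = 0
    assert bits[pos] == "1"                     # start marker
    pos += 1
    versioned = bits[pos] == "1"                # versioned flag
    pos += 1
    version = None
    if versioned:
        version = int(bits[pos:pos + 8], 2)     # 8-bit protocol version
        pos += 8
    msg_type = int(bits[pos:pos + 5], 2)        # 5-bit message type
    pos += 5
    compressed = bits[pos] == "1"               # compressed flag
    pos += 1
    meta_len = int(bits[pos:pos + 8], 2)        # metadata length in bytes
    pos += 8
    meta_bytes = _bits_to_bytes(bits[pos:pos + meta_len * 8])
    metadata = json.loads(meta_bytes) if meta_len else {}
    pos += meta_len * 8
    payload = _bits_to_bytes(bits[pos:])        # padding bits are discarded
    return version, msg_type, compressed, metadata, payload

# Round-trip the worked example: 3 header bytes plus a JSON payload
sample = bytes([0b11000000, 0b01000010, 0b00000000]) + json.dumps(
    {"type": "speak", "data": {"utterance": "Hello"}}).encode("utf-8")
version, msg_type, compressed, metadata, payload = decode_message(sample)
```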

Binary Payloads

+

The protocol provides support for binary payloads, enabling the transmission of non-textual data. Binary payloads are handled based on their designated types, which instruct the HiveMind how to process the binary content.

+

The binary payload type is encoded as a 4-bit unsigned integer placed after the metadata and before the payload.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueTypeDescription
0UNDEFINEDNo information provided about the binary contents.
1RAW_AUDIOBinary content is raw audio.
2NUMPY_IMAGEBinary content is an image represented as a numpy array (e.g., webcam picture).
3FILEBinary is a file to be saved; additional metadata is provided elsewhere.
4STT_AUDIO_TRANSCRIBEFull audio sentence to perform Speech-to-Text (STT) and return transcripts.
5STT_AUDIO_HANDLEFull audio sentence to perform STT and handle transcription immediately.
6TTS_AUDIOSynthesized Text-to-Speech (TTS) audio to be played.
+
+

💡 this is how the microphone satellite streams audio to hivemind-listener

+
+

Examples

+

Serialized Message

+

For a simple message with:

+
    +
  • +

    Protocol version: 1

    +
  • +
  • +

    Message type: BUS

    +
  • +
  • +

    No compression

    +
  • +
  • +

    Metadata: {}

    +
  • +
  • +

    Payload: {"type": "speak", "data":{"utterance": "Hello"}}

    +
  • +
+

The binary representation might look like this (in bit groups):

+
1 | 1 | 00000001 | 00001 | 0 | 00000000 | <metadata> | <payload>
+
+

Where:

+
    +
  • +

    1 (Start Marker)

    +
  • +
  • +

    1 (Versioned Flag)

    +
  • +
  • +

    00000001 (Protocol Version)

    +
  • +
  • +

    00001 (Message Type: BUS)

    +
  • +
  • +

    0 (Compressed Flag)

    +
  • +
  • +

    00000000 (Metadata Length: 0 bytes)

    +
  • +
  • +

    <metadata>: Serialized metadata bytes.

    +
  • +
  • +

    <payload>: Serialized payload bytes.

    +
  • +
+

Binary data

+

For a binary payload with:

+
    +
  • +

    Protocol version: 1

    +
  • +
  • +

    Message type: BINARY

    +
  • +
  • +

    No compression

    +
  • +
  • +

    Metadata: {}

    +
  • +
  • +

    Binary Payload

    +
  • +
+

The binary representation might look like this (in bit groups):

+
1 | 1 | 00000001 | 01100 | 0 | 00000000 | <metadata> | 0001 | <binary_payload>
+
+

Where:

+
    +
  • +

    1 (Start Marker)

    +
  • +
  • +

    1 (Versioned Flag)

    +
  • +
  • +

    00000001 (Protocol Version)

    +
  • +
  • +

    01100 (Message Type: BINARY)

    +
  • +
  • +

    0 (Compressed Flag)

    +
  • +
  • +

    00000000 (Metadata Length: 0 bytes)

    +
  • +
  • +

    <metadata>: Serialized metadata bytes.

    +
  • +
  • +

    0001 (Binary Type: Raw audio)

    +
  • +
  • +

    <payload>: audio bytes.

    +
  • +
+

More examples

+
<uint:1=start_marker> | <uint:1=versioned_bit> | <uint:8=protocol_version> | <uint:5=msg_type> | <uint:1=compression_bit> | <uint:8=metadata_len> | <metadata> | <payload>
+
+

A binarized message

+
1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | <metadata> | <payload>
+
+

An unversioned binarized message

+
1 | 0 | XXXXX | X | XXXXXXXX | <metadata> | <payload>
+
+

A binary payload message

+
1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | <metadata> | XXXX | <payload>
+
+

Compression Metrics

+

Compression significantly reduces payload size for larger messages but is not always efficient for small messages. Benchmarks indicate a reduction of up to 50% for text-heavy payloads, while small payloads may see negligible benefits.

+

Implementation Notes

+
    +
  • +

    Bit-level operations are critical for compact encoding. Ensure precision when handling individual bits.

    +
  • +
  • +

    Maintain strict alignment rules to avoid deserialization errors.

    +
  • +
  • +

    Use a modular design to allow future extensions while retaining compatibility.

    +
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/19_crypto/index.html b/19_crypto/index.html new file mode 100644 index 0000000..fb7a40e --- /dev/null +++ b/19_crypto/index.html @@ -0,0 +1,300 @@ + + + + + + + + Encryption - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Encryption

+

HiveMind ensures secure communication between devices by using modern cryptographic techniques. This page provides an overview of how messages are encrypted in transit, the structure of encrypted messages, and the process of encryption key generation.

+
    +
  • End-to-End Encryption: Messages are encrypted on the sender's device and decrypted only on the receiver's device, ensuring complete confidentiality.
  • +
  • Mutual Authentication: The identity verification step ensures that both devices share the same credentials and trust each other.
  • +
  • Resistance to Replay Attacks: The use of unique IVs for each message prevents attackers from reusing captured messages.
  • +
  • Strong Key Derivation: By leveraging PBKDF2 and a shared salt, HiveMind protects against brute-force and dictionary attacks.
  • +
+
+

Terminology

+

Before diving into the details, here are key terms used in this documentation:

+
    +
  • Plaintext: Unencrypted data, the original readable content before encryption.
  • +
  • Ciphertext: Encrypted data, unreadable without the decryption key.
  • +
  • AES: Advanced Encryption Standard, a symmetric encryption algorithm.
  • +
  • GCM: Galois/Counter Mode, a mode of operation for AES that provides both encryption and message authentication.
  • +
  • IV: Initialization vector (sometimes called nonce), a unique value used to initialize encryption and ensure message uniqueness.
  • +
  • MAC: Message Authentication Code (sometimes called tag or Integrity Check Value (ICV)), used to verify the authenticity and integrity of a message.
  • +
  • Salt: A random value used during key derivation to ensure unique and secure key generation, even with repeated passwords.
  • +
  • SHA-2: A family of cryptographic hash functions, used for generating hash values (e.g., SHA-256 is used in HiveMind).
  • +
  • PBKDF2: Password-Based Key Derivation Function 2, a cryptographic function that strengthens passwords by applying hashing multiple times to derive secure keys.
  • +
+
+

Overview

+

HiveMind uses AES-GCM (Advanced Encryption Standard in Galois/Counter Mode) for authenticated encryption of messages in transit. This method provides both confidentiality (protecting the message content) and integrity (ensuring that messages have not been tampered with during transmission).

+

To securely exchange messages, HiveMind leverages a key derivation and exchange mechanism that avoids directly sharing sensitive credentials, ensuring secure key generation and agreement between devices.

+
+

Encryption in Transit

+

When two devices communicate over HiveMind, all messages are encrypted before being transmitted over the network. This prevents eavesdropping and ensures that any intercepted messages are unreadable without the appropriate key.

+

Each encrypted message contains the following components:

+
{
+  "ciphertext": "<encrypted_message>",
+  "tag": "<authentication_tag>",
+  "nonce": "<initialization_vector>"
+}
+
+
    +
  • Ciphertext: The encrypted form of the original plaintext message.
  • +
  • Tag: A message authentication code (MAC) that ensures the integrity and authenticity of the message.
  • +
  • Nonce (IV): A unique initialization vector used for encryption. It ensures that each message is encrypted uniquely, even if the same key and plaintext are reused.
  • +
+

The nonce and tag are included in the message unencrypted, while the ciphertext remains confidential. This allows the receiving device to verify and decrypt the message.

+
+

Key Generation and Exchange

+

To securely encrypt and decrypt messages, HiveMind uses a shared secret key. This key is never transmitted directly but is derived independently by each device using the following steps:

+

HANDSHAKE.png

+

1. Handshake and Identity Verification

+
    +
  • Each device generates a hash-based subkey (HSUB) using:
      +
    • A randomly generated initialization vector (IV).
    • +
    • The user's password (or pre-shared secret).
    • +
    • A cryptographic hash function, such as SHA-256.
    • +
    +
  • +
  • Devices exchange their HSUB and IV values over the network.
  • +
  • Upon receiving the other's HSUB, each device regenerates it locally using the received IV and its own password. If the computed and received HSUB match, the devices verify each other's identity.
  • +
+
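
A minimal sketch of the HSUB computation with Python's standard library. The choice of SHA-256 over the IV concatenated with the password is an assumption for illustration; HiveMind's exact input format may differ:

```python
import hashlib
import os

def make_hsub(password, iv=None):
    """Compute a hash-based subkey (HSUB) from a random IV and the shared
    password. Hashing IV||password with SHA-256 is an illustrative assumption."""
    if iv is None:
        iv = os.urandom(16)  # randomly generated initialization vector
    hsub = hashlib.sha256(iv + password.encode("utf-8")).hexdigest()
    return iv, hsub

# Device A sends (iv_a, hsub_a); device B recomputes HSUB from the received
# IV and its own copy of the password, then compares the two values.
iv_a, hsub_a = make_hsub("shared-secret")
_, recomputed = make_hsub("shared-secret", iv=iv_a)
_, mismatch = make_hsub("wrong-password", iv=iv_a)
```

Only a device holding the same password can reproduce the HSUB for a given IV, which is what lets both sides verify each other's identity without transmitting the password itself.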

2. Deriving a Common Salt

+
    +
  • A shared salt is generated by XORing the IV values exchanged during the handshake:
  • +
+
Salt = IV_A ⊕ IV_B
+
+
    +
  • This salt ensures that each session has a unique basis for key derivation.
  • +
+

3. Key Derivation

+
    +
  • Both devices derive a common secret key using the following inputs:
      +
    • The salt from the previous step.
    • +
    • The user's password.
    • +
    +
  • +
  • The key derivation process uses PBKDF2 (Password-Based Key Derivation Function 2) with HMAC-SHA256 to produce a cryptographically strong key.
  • +
+
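
The salt and key derivation steps can be sketched with Python's standard library. The iteration count and 32-byte key length below are illustrative assumptions, not HiveMind's actual parameters:

```python
import hashlib
import os

def derive_key(password, iv_a, iv_b, iterations=100_000):
    """Derive the shared key via PBKDF2-HMAC-SHA256 with Salt = IV_A xor IV_B.
    Iteration count and key length are illustrative assumptions."""
    salt = bytes(a ^ b for a, b in zip(iv_a, iv_b))  # common salt from both IVs
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)

iv_a, iv_b = os.urandom(16), os.urandom(16)  # IVs exchanged in the handshake
key_a = derive_key("shared-secret", iv_a, iv_b)  # computed on device A
key_b = derive_key("shared-secret", iv_a, iv_b)  # computed on device B
```

Because both devices feed the same password and the same XORed salt into PBKDF2, they arrive at an identical key without that key ever crossing the network.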

This approach ensures that both devices independently derive the same encryption key without directly transmitting it over the network.

+
+

Secure Message Exchange

+

Once the secret key is derived, it is used to encrypt and decrypt all messages exchanged between devices. The process is as follows:

+
    +
  1. +

    Encryption: The sending device uses the secret key to encrypt the plaintext message using AES-GCM. The resulting ciphertext, along with the nonce and tag, is packaged into a message and sent over the network.

    +
  2. +
  3. +

    Decryption: The receiving device extracts the ciphertext, nonce, and tag from the message. Using the same secret key, it decrypts the ciphertext and verifies the message integrity using the tag.

    +
  4. +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/404.html b/404.html new file mode 100644 index 0000000..30ab8e9 --- /dev/null +++ b/404.html @@ -0,0 +1,177 @@ + + + + + + + + Documentation + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • +
  • +
  • +
+
+
+
+
+ + +

404

+ +

Page not found

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/HANDSHAKE_V1.png b/HANDSHAKE_V1.png new file mode 100644 index 0000000..f329ad8 Binary files /dev/null and b/HANDSHAKE_V1.png differ diff --git a/TODO/index.html b/TODO/index.html new file mode 100644 index 0000000..01a555d --- /dev/null +++ b/TODO/index.html @@ -0,0 +1,222 @@ + + + + + + + + TODO - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

wormhole node

+
    +
  • it's actually 2 nodes
  • +
  • node 1 drops messages in place X
  • +
  • node 2 retrieves messages from place X
  • +
  • messages are literal hive protocol messages
  • +
  • X is any transport layer, literally anything
  • +
  • nodes might not know each other at all as long as they know how to retrieve stuff
  • +
  • the objective is hiding location
  • +
  • any master that sees node 2 just thinks it is node 1!
  • +
+

implementations:
- usenet anon message boards (read # post #)

+
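The drop/retrieve pair can be sketched with an in-process queue standing in for "place X". The class names here are hypothetical, and any transport with put/get semantics would do:

```python
from queue import Queue

class WormholeDrop:
    """Node 1: drops hive protocol messages into place X."""
    def __init__(self, transport):
        self.transport = transport
    def drop(self, hive_message):
        self.transport.put(hive_message)

class WormholeRetrieve:
    """Node 2: retrieves messages from place X; to any master it
    looks exactly like the node that originally dropped them."""
    def __init__(self, transport):
        self.transport = transport
    def retrieve(self):
        return self.transport.get()

x = Queue()  # stand-in for "place X" (could be a message board, file, etc.)
WormholeDrop(x).drop({"msg_type": "bus", "payload": {}})
msg = WormholeRetrieve(x).retrieve()
```

The two nodes never talk to each other directly, which is what hides the location of node 1.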

storage node

+
    +
  • any node can leave a payload in a storage node + an associated proof
  • optionally encrypted (recommended)
  • a proof is a text string + the same string encrypted with the receiver's pubkey
  • any node can request any (encrypted) file
  • the storage node will send the encrypted proof
  • the requesting node sends back the decrypted proof
  • if both match, the node has proven it is the receiver
  • the storage node sends the file
+

note: connections to these nodes should be ephemeral, i.e., nodes disconnect once the deed is done
note2: these can be public and hosted by random people, if you trust PGP
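The proof-of-receiver exchange can be sketched with RSA-OAEP from the `cryptography` package. The dict-based "storage node" and the key size are illustrative assumptions, not HiveMind APIs:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# the receiver's keypair; the pubkey is shared out of band
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# depositor leaves a payload plus a proof: a random challenge string
# and that same string encrypted with the receiver's public key
challenge = os.urandom(16)
storage = {
    "payload": b"hive message",
    "proof": challenge,
    "encrypted_proof": receiver_key.public_key().encrypt(challenge, oaep),
}

# retrieval: the storage node sends the encrypted proof; only the real
# receiver can decrypt it and echo the plaintext challenge back
response = receiver_key.decrypt(storage["encrypted_proof"], oaep)
assert response == storage["proof"]  # match -> storage node releases the payload
```

Any other node asking for the file gets the encrypted proof too, but without the receiver's private key it cannot produce the matching plaintext.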

+

rendezvous node

+

a variation of the above: imagine a scenario with a very large hive, where some nodes may even be public or halfway across the world

+
    +
  • node fires a "query" hive message
  • message contains the address of a storage node the mind should drop the answer in
      • the response doesn't need to travel all the way back
      • optionally include the node's pubkey (may have been shared out of band)
  • node checks the predefined storage node every timestep until it receives an answer
      • depending on the relationship with the storage node, it might be possible to use events instead
      • a storage node can be an http api (see http bridge TODO)
  • where did the answer come from?
  • +
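The polling flow above can be sketched as follows. `send_query` and `check_storage` are hypothetical callables standing in for the hive message bus and the storage node; here they are simulated with a plain dict:

```python
import time

def rendezvous_query(send_query, check_storage, drop_address,
                     timestep=1.0, timeout=30.0):
    """Fire a query asking the mind to drop its answer at `drop_address`,
    then poll the storage node every `timestep` seconds until it appears."""
    send_query(drop_address)  # hive "query" message carrying the drop address
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        answer = check_storage(drop_address)
        if answer is not None:
            return answer  # where it came from stays hidden
        time.sleep(timestep)
    raise TimeoutError("no answer dropped at rendezvous point")

# simulated transport: a dict standing in for the storage node
store = {}
def send(addr):           # hypothetical query sender (answer arrives "later")
    store[addr] = "42"
def check(addr):          # hypothetical storage-node poll
    return store.get(addr)

answer = rendezvous_query(send, check, "X", timestep=0.01, timeout=1.0)
```

The querying node only ever talks to the storage node, so the answer's true origin never has to reveal itself.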
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/css/fonts/Roboto-Slab-Bold.woff b/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 0000000..6cb6000 Binary files /dev/null and b/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/css/fonts/Roboto-Slab-Bold.woff2 b/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 0000000..7059e23 Binary files /dev/null and b/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/css/fonts/Roboto-Slab-Regular.woff b/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 0000000..f815f63 Binary files /dev/null and b/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/css/fonts/Roboto-Slab-Regular.woff2 b/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 0000000..f2c76e5 Binary files /dev/null and b/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/css/fonts/fontawesome-webfont.eot b/css/fonts/fontawesome-webfont.eot new file mode 100644 index 0000000..e9f60ca Binary files /dev/null and b/css/fonts/fontawesome-webfont.eot differ diff --git a/css/fonts/fontawesome-webfont.svg b/css/fonts/fontawesome-webfont.svg new file mode 100644 index 0000000..855c845 --- /dev/null +++ b/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/css/fonts/fontawesome-webfont.ttf b/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 0000000..35acda2 Binary files /dev/null and b/css/fonts/fontawesome-webfont.ttf differ diff --git a/css/fonts/fontawesome-webfont.woff b/css/fonts/fontawesome-webfont.woff new file mode 100644 index 0000000..400014a Binary files /dev/null and b/css/fonts/fontawesome-webfont.woff differ diff --git a/css/fonts/fontawesome-webfont.woff2 b/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 0000000..4d13fc6 Binary files /dev/null and 
b/css/fonts/fontawesome-webfont.woff2 differ diff --git a/css/fonts/lato-bold-italic.woff b/css/fonts/lato-bold-italic.woff new file mode 100644 index 0000000..88ad05b Binary files /dev/null and b/css/fonts/lato-bold-italic.woff differ diff --git a/css/fonts/lato-bold-italic.woff2 b/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 0000000..c4e3d80 Binary files /dev/null and b/css/fonts/lato-bold-italic.woff2 differ diff --git a/css/fonts/lato-bold.woff b/css/fonts/lato-bold.woff new file mode 100644 index 0000000..c6dff51 Binary files /dev/null and b/css/fonts/lato-bold.woff differ diff --git a/css/fonts/lato-bold.woff2 b/css/fonts/lato-bold.woff2 new file mode 100644 index 0000000..bb19504 Binary files /dev/null and b/css/fonts/lato-bold.woff2 differ diff --git a/css/fonts/lato-normal-italic.woff b/css/fonts/lato-normal-italic.woff new file mode 100644 index 0000000..76114bc Binary files /dev/null and b/css/fonts/lato-normal-italic.woff differ diff --git a/css/fonts/lato-normal-italic.woff2 b/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 0000000..3404f37 Binary files /dev/null and b/css/fonts/lato-normal-italic.woff2 differ diff --git a/css/fonts/lato-normal.woff b/css/fonts/lato-normal.woff new file mode 100644 index 0000000..ae1307f Binary files /dev/null and b/css/fonts/lato-normal.woff differ diff --git a/css/fonts/lato-normal.woff2 b/css/fonts/lato-normal.woff2 new file mode 100644 index 0000000..3bf9843 Binary files /dev/null and b/css/fonts/lato-normal.woff2 differ diff --git a/css/theme.css b/css/theme.css new file mode 100644 index 0000000..ad77300 --- /dev/null +++ b/css/theme.css @@ -0,0 +1,13 @@ +/* + * This file is copied from the upstream ReadTheDocs Sphinx + * theme. To aid upgradability this file should *not* be edited. + * modifications we need should be included in theme_extra.css. 
+ * + * https://github.com/readthedocs/sphinx_rtd_theme + */ + + /* sphinx_rtd_theme version 1.2.0 | MIT license */ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 
0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content 
tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 
.headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li 
button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content 
code.download span.pull-right:first-child,.rst-content dl dt .pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root 
.fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content
:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before
{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magi
c:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown .caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-l
aptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-ellipsis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:
""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-a
pple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-squa
re:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{conten
t:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.fa-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-m
ars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons
:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-tripadvisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{conten
t:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:before,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:befo
re{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption 
.headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a .admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a 
.rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn 
.headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav 
.fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn 
span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content .fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download 
.btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content 
.btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini 
.headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint 
.admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint 
.wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content 
.wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso 
.admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso 
.wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container 
li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = 
false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 
.3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group 
.wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input 
input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid 
#ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error 
input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) 
rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content 
table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) 
td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto 
Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content 
section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs 
li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover 
button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 
13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a 
img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 
3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions 
a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions .rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions 
.rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content 
.toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content 
pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso .last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section 
ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul 
li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 
.headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar 
ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content .footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto 
auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content 
dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote 
span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content .wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils 
td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) 
code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 
.rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel{border:1px solid #7fbbe3;background:#e7f2fa;font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content 
.sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} diff --git a/css/theme_extra.css b/css/theme_extra.css new file mode 100644 index 0000000..ab0631a --- /dev/null +++ b/css/theme_extra.css @@ -0,0 +1,197 @@ +/* + * Wrap inline code samples otherwise they shoot of the side and + * can't be read at all. 
+ * + * https://github.com/mkdocs/mkdocs/issues/313 + * https://github.com/mkdocs/mkdocs/issues/233 + * https://github.com/mkdocs/mkdocs/issues/834 + */ +.rst-content code { + white-space: pre-wrap; + word-wrap: break-word; + padding: 2px 5px; +} + +/** + * Make code blocks display as blocks and give them the appropriate + * font size and padding. + * + * https://github.com/mkdocs/mkdocs/issues/855 + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/issues/233 + */ +.rst-content pre code { + white-space: pre; + word-wrap: normal; + display: block; + padding: 12px; + font-size: 12px; +} + +/** + * Fix code colors + * + * https://github.com/mkdocs/mkdocs/issues/2027 + */ +.rst-content code { + color: #E74C3C; +} + +.rst-content pre code { + color: #000; + background: #f8f8f8; +} + +/* + * Fix link colors when the link text is inline code. + * + * https://github.com/mkdocs/mkdocs/issues/718 + */ +a code { + color: #2980B9; +} +a:hover code { + color: #3091d1; +} +a:visited code { + color: #9B59B6; +} + +/* + * The CSS classes from highlight.js seem to clash with the + * ReadTheDocs theme causing some code to be incorrectly made + * bold and italic. + * + * https://github.com/mkdocs/mkdocs/issues/411 + */ +pre .cs, pre .c { + font-weight: inherit; + font-style: inherit; +} + +/* + * Fix some issues with the theme and non-highlighted code + * samples. Without and highlighting styles attached the + * formatting is broken. 
+ * + * https://github.com/mkdocs/mkdocs/issues/319 + */ +.rst-content .no-highlight { + display: block; + padding: 0.5em; + color: #333; +} + + +/* + * Additions specific to the search functionality provided by MkDocs + */ + +.search-results { + margin-top: 23px; +} + +.search-results article { + border-top: 1px solid #E1E4E5; + padding-top: 24px; +} + +.search-results article:first-child { + border-top: none; +} + +form .search-query { + width: 100%; + border-radius: 50px; + padding: 6px 12px; + border-color: #D1D4D5; +} + +/* + * Improve inline code blocks within admonitions. + * + * https://github.com/mkdocs/mkdocs/issues/656 + */ + .rst-content .admonition code { + color: #404040; + border: 1px solid #c7c9cb; + border: 1px solid rgba(0, 0, 0, 0.2); + background: #f8fbfd; + background: rgba(255, 255, 255, 0.7); +} + +/* + * Account for wide tables which go off the side. + * Override borders to avoid weirdness on narrow tables. + * + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/pull/1034 + */ +.rst-content .section .docutils { + width: 100%; + overflow: auto; + display: block; + border: none; +} + +td, th { + border: 1px solid #e1e4e5 !important; + border-collapse: collapse; +} + +/* + * Without the following amendments, the navigation in the theme will be + * slightly cut off. This is due to the fact that the .wy-nav-side has a + * padding-bottom of 2em, which must not necessarily align with the font-size of + * 90 % on the .rst-current-version container, combined with the padding of 12px + * above and below. These amendments fix this in two steps: First, make sure the + * .rst-current-version container has a fixed height of 40px, achieved using + * line-height, and then applying a padding-bottom of 40px to this container. In + * a second step, the items within that container are re-aligned using flexbox. 
+ * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ + .wy-nav-side { + padding-bottom: 40px; +} + +/* For section-index only */ +.wy-menu-vertical .current-section p { + background-color: #e3e3e3; + color: #404040; +} + +/* + * The second step of above amendment: Here we make sure the items are aligned + * correctly within the .rst-current-version container. Using flexbox, we + * achieve it in such a way that it will look like the following: + * + * [No repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * [With repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-versions .rst-current-version { + padding: 0 12px; + display: flex; + font-size: initial; + justify-content: space-between; + align-items: center; + line-height: 40px; +} + +/* + * Please note that this amendment also involves removing certain inline-styles + * from the file ./mkdocs/themes/readthedocs/versions.html. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-current-version span { + flex: 1; + text-align: center; +} diff --git a/gpt_bridges/index.html b/gpt_bridges/index.html new file mode 100644 index 0000000..4b35ccd --- /dev/null +++ b/gpt_bridges/index.html @@ -0,0 +1,202 @@ + + + + + + + + Exploring HiveMind Web Chat Interface and Bridges: Extending AI Capabilities - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

Exploring HiveMind Web Chat Interface and Bridges: Extending AI Capabilities

+

In the ever-expanding landscape of AI and interconnected systems, the HiveMind framework continues to push boundaries and open up new possibilities. As part of the HiveMind ecosystem, the HiveMind Web Chat Interface and HiveMind Bridges offer exciting avenues for integrating AI capabilities into various platforms and enabling seamless communication with AI assistants. In this blog post, we will delve into the world of HiveMind Bridges and explore how the HiveMind Web Chat Interface enhances user experiences.

+

HiveMind Bridges: Connecting the Dots

+

HiveMind Bridges serve as connectors between external platforms and the HiveMind network. +These bridges act as terminals, enabling communication with the HiveMind infrastructure. +With support for platforms such as Matrix, Mattermost, HackChat, DeltaChat, email, and more, HiveMind Bridges extend the reach of AI assistants and allow them to interact with users through familiar channels.

+

Each bridge behaves like a secure intermediary, ensuring the safety and privacy of communications. +Each maintains its own session and permissions, so it can respond only to specific users or follow custom rules defined within the bridge. This flexibility makes it possible to integrate AI assistants seamlessly into existing communication platforms, expanding their capabilities and enhancing user interactions.
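As a rough sketch of this idea (not HiveMind's actual API — the real permission model is configured through hivemind-core, and the class and field names below are invented for illustration), a bridge that keeps its own session allow-list might look like:

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: HiveMind's real permission model is
# configured through hivemind-core. This sketch just shows a bridge
# keeping its own session rules and answering only authorized users.
@dataclass
class BridgeSession:
    platform: str                               # e.g. "matrix", "deltachat"
    allowed_users: set = field(default_factory=set)
    allowed_intents: set = field(default_factory=set)

    def can_answer(self, user: str, intent: str) -> bool:
        """Permit only users and intents this bridge was granted."""
        return user in self.allowed_users and intent in self.allowed_intents

matrix_bridge = BridgeSession(
    platform="matrix",
    allowed_users={"@alice:example.org"},
    allowed_intents={"weather", "timer"},
)

print(matrix_bridge.can_answer("@alice:example.org", "weather"))    # True
print(matrix_bridge.can_answer("@mallory:example.org", "weather"))  # False
```

Because each bridge holds its own session, tightening or revoking one bridge's rules never affects the others.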

+

HiveMind Web Chat Interface: Unleashing AI Potential

+

The HiveMind Web Chat Interface, powered by HiveMindJS, provides a user-friendly and versatile solution for connecting to the HiveMind network. +This JavaScript library enables direct communication with the HiveMind infrastructure when access keys are available in the browser environment. +For instance, a login page with HiveMind access keys can leverage HiveMindJS to establish a secure connection, granting users seamless access to AI functionalities.

+

However, there may be situations where exposing HiveMind login keys in the browser is not desirable for security reasons. +In such cases, a HiveMind Bridge comes into play. Acting as a middle layer, the bridge node safely connects to the HiveMind network on a server, while the browser interacts solely with the bridge. +This architecture ensures that sensitive information remains protected, and communication with the HiveMind is conducted securely.
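The relay pattern described above can be sketched in a few lines of Python. Real HiveMind connections are WebSocket-based, and the access key and handshake below are invented for illustration; the point is only that the credential lives on the bridge server while the browser talks solely to the bridge:

```python
import asyncio

ACCESS_KEY = b"SECRET-KEY"  # hypothetical credential, kept server-side only

async def _pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way, then propagate the half-close."""
    try:
        while data := await reader.read(1024):
            writer.write(data)
            await writer.drain()
        writer.write_eof()
    except (ConnectionError, OSError):
        pass  # peer went away; the other pump direction will notice

async def handle_browser(reader, writer, upstream_host, upstream_port):
    """Bridge node: authenticate upstream with the server-side key,
    then relay traffic in both directions."""
    up_reader, up_writer = await asyncio.open_connection(upstream_host, upstream_port)
    up_writer.write(ACCESS_KEY + b"\n")  # credential injected server-side
    await up_writer.drain()
    await asyncio.gather(
        _pump(reader, up_writer),   # browser -> HiveMind
        _pump(up_reader, writer),   # HiveMind -> browser
        return_exceptions=True,
    )
```

A production bridge would additionally terminate TLS and speak the actual HiveMind WebSocket protocol, but the trust boundary stays the same: keys on the server, only chat traffic to the browser.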

+

Integrating a Chatbot with Existing Business Platforms

+

Let's consider a practical example of leveraging the HiveMind ecosystem to integrate a chatbot into an existing business platform. Suppose you have a thriving online platform where users engage with your products or services. By hosting HiveMind-Core, Ovos-Core, and a HiveMind Bridge, you can seamlessly integrate a chatbot powered by AI into your platform.

+

The HiveMind Bridge, acting as the intermediary, facilitates communication between your platform and the HiveMind network. Users can interact with the chatbot, ask questions, seek assistance, or perform specific actions directly from within your platform. The chatbot, backed by the extensive capabilities of the HiveMind infrastructure, can provide personalized responses, offer recommendations, and enhance user experiences.

+

By incorporating a chatbot into your existing platform, you streamline customer support, automate certain processes, and deliver a more interactive and efficient user experience. The HiveMind ecosystem, with its powerful AI capabilities and flexible bridges, empowers businesses to leverage AI technologies seamlessly, unlocking new opportunities for growth and innovation.

+

Conclusion

+

The HiveMind Web Chat Interface and HiveMind Bridges revolutionize the way AI assistants integrate into diverse platforms. Through bridges, AI systems gain access to popular communication channels, while the HiveMind Web Chat Interface facilitates direct communication with the HiveMind network. With the ability to securely connect to the HiveMind infrastructure and extend AI capabilities, businesses can create immersive, interactive, and intelligent experiences for their users.

+

As the HiveMind ecosystem continues to evolve, we anticipate even more innovative use cases and seamless integrations. The future holds immense potential for expanding AI's reach and enhancing human-AI collaboration.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/gpt_eli5/index.html b/gpt_eli5/index.html new file mode 100644 index 0000000..6647cff --- /dev/null +++ b/gpt_eli5/index.html @@ -0,0 +1,210 @@ + + + + + + + + ELI5 - Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

ELI5

+

Imagine you and your friends want to play a game together, but you're all in different rooms. +To make it possible to play together, you need to follow some rules and communicate with each other. +That's what the HiveMind protocol does for voice assistants.

+

The HiveMind protocol is like a set of rules that voice assistants use to talk to each other and work together. +It helps them understand each other and coordinate their actions. +Just like in the game, the voice assistants need to agree on how they will communicate and what they can do together.

+

For example, let's say you and your friend have voice assistants. +They use the HiveMind protocol to talk to each other. +Your voice assistant can send messages to your friend's voice assistant, and vice versa. +These messages can be commands, questions, or even just saying "hello."

+

The HiveMind protocol also helps the voice assistants recognize each other. +They have special identities that let them know who is who. +It's like having name tags so they can say, "Hi, I'm Voice Assistant A!" or "Nice to meet you, Voice Assistant B!"

+

To follow the HiveMind protocol, the voice assistants use a special language that they all understand. +It's like having a secret code that only they know. This language helps them communicate in a way that makes sense to them.

+

The HiveMind protocol also includes some safety measures. +Just like you have rules to keep you safe when playing games, the voice assistants have rules too. +They make sure that only trusted voice assistants can join and participate. +They use things like passwords and encryption to keep things secure.

+

So, the HiveMind protocol is like a set of rules and a special language that voice assistants use to talk to each other, understand each other, and work together. +It helps them play the game of being voice assistants in a fun and safe way!

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + +
+ + + + + + + + + diff --git a/img.png b/img.png new file mode 100644 index 0000000..562a064 Binary files /dev/null and b/img.png differ diff --git a/img/favicon.ico b/img/favicon.ico new file mode 100644 index 0000000..e85006a Binary files /dev/null and b/img/favicon.ico differ diff --git a/img_1.png b/img_1.png new file mode 100644 index 0000000..ddf4d30 Binary files /dev/null and b/img_1.png differ diff --git a/img_10.png b/img_10.png new file mode 100644 index 0000000..cd2123a Binary files /dev/null and b/img_10.png differ diff --git a/img_11.png b/img_11.png new file mode 100644 index 0000000..3a8c13e Binary files /dev/null and b/img_11.png differ diff --git a/img_12.png b/img_12.png new file mode 100644 index 0000000..e474976 Binary files /dev/null and b/img_12.png differ diff --git a/img_13.png b/img_13.png new file mode 100644 index 0000000..da9c1de Binary files /dev/null and b/img_13.png differ diff --git a/img_14.png b/img_14.png new file mode 100644 index 0000000..bc83d61 Binary files /dev/null and b/img_14.png differ diff --git a/img_15.png b/img_15.png new file mode 100644 index 0000000..b46b11a Binary files /dev/null and b/img_15.png differ diff --git a/img_16.png b/img_16.png new file mode 100644 index 0000000..0324749 Binary files /dev/null and b/img_16.png differ diff --git a/img_17.png b/img_17.png new file mode 100644 index 0000000..65a407d Binary files /dev/null and b/img_17.png differ diff --git a/img_18.png b/img_18.png new file mode 100644 index 0000000..39d835f Binary files /dev/null and b/img_18.png differ diff --git a/img_19.png b/img_19.png new file mode 100644 index 0000000..4acf45f Binary files /dev/null and b/img_19.png differ diff --git a/img_2.png b/img_2.png new file mode 100644 index 0000000..b39c669 Binary files /dev/null and b/img_2.png differ diff --git a/img_20.png b/img_20.png new file mode 100644 index 0000000..04f4c3b Binary files /dev/null and b/img_20.png differ diff --git a/img_3.png b/img_3.png new file mode 100644 index 
0000000..1ffaee6 Binary files /dev/null and b/img_3.png differ diff --git a/img_4.png b/img_4.png new file mode 100644 index 0000000..e2f1a61 Binary files /dev/null and b/img_4.png differ diff --git a/img_5.png b/img_5.png new file mode 100644 index 0000000..6741a72 Binary files /dev/null and b/img_5.png differ diff --git a/img_6.png b/img_6.png new file mode 100644 index 0000000..7c73d3d Binary files /dev/null and b/img_6.png differ diff --git a/img_7.png b/img_7.png new file mode 100644 index 0000000..0b17786 Binary files /dev/null and b/img_7.png differ diff --git a/img_8.png b/img_8.png new file mode 100644 index 0000000..87264e6 Binary files /dev/null and b/img_8.png differ diff --git a/img_9.png b/img_9.png new file mode 100644 index 0000000..1c7f3a2 Binary files /dev/null and b/img_9.png differ diff --git a/index.html b/index.html new file mode 100644 index 0000000..3a993bf --- /dev/null +++ b/index.html @@ -0,0 +1,200 @@ + + + + + + + + Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

HiveMind Community Documentation

+

Welcome to the HiveMind Community Docs!

+

+

HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system.

+

With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS!

+

HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi Zero, among other devices. +OpenVoiceOS itself usually runs on a desktop computer or home server, but any device running OpenVoiceOS, including Mycroft-branded devices, can serve as your central unit.

+

Join the HiveMind Matrix chat for general news, support, and chit-chat

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + Next » + + +
+ + + + + + + + + + + diff --git a/js/html5shiv.min.js b/js/html5shiv.min.js new file mode 100644 index 0000000..1a01c94 --- /dev/null +++ b/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var 
k,l,m="3.7.3",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); diff --git a/js/jquery-3.6.0.min.js b/js/jquery-3.6.0.min.js new file mode 100644 index 0000000..c4c6022 --- /dev/null +++ b/js/jquery-3.6.0.min.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new 
RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function 
fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return 
e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var 
t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," 
":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return 
e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var 
e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 
1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t + + + + + + + Documentation + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • +
  • +
  • +
+
+
+
+
+ + +

Search Results

+ + + +
+ Searching... +
+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/search/lunr.js b/search/lunr.js new file mode 100644 index 0000000..aca0a16 --- /dev/null +++ b/search/lunr.js @@ -0,0 +1,3475 @@ +/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */ + +;(function(){ + +/** + * A convenience function for configuring and constructing + * a new lunr Index. + * + * A lunr.Builder instance is created and the pipeline setup + * with a trimmer, stop word filter and stemmer. + * + * This builder object is yielded to the configuration function + * that is passed as a parameter, allowing the list of fields + * and other builder parameters to be customised. + * + * All documents _must_ be added within the passed config function. + * + * @example + * var idx = lunr(function () { + * this.field('title') + * this.field('body') + * this.ref('id') + * + * documents.forEach(function (doc) { + * this.add(doc) + * }, this) + * }) + * + * @see {@link lunr.Builder} + * @see {@link lunr.Pipeline} + * @see {@link lunr.trimmer} + * @see {@link lunr.stopWordFilter} + * @see {@link lunr.stemmer} + * @namespace {function} lunr + */ +var lunr = function (config) { + var builder = new lunr.Builder + + builder.pipeline.add( + lunr.trimmer, + lunr.stopWordFilter, + lunr.stemmer + ) + + builder.searchPipeline.add( + lunr.stemmer + ) + + config.call(builder, builder) + return builder.build() +} + +lunr.version = "2.3.9" +/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A namespace containing utils for the rest of the lunr library + * @namespace lunr.utils + */ +lunr.utils = {} + +/** + * Print a warning message to the console. + * + * @param {String} message The message to be printed. 
+ * @memberOf lunr.utils + * @function + */ +lunr.utils.warn = (function (global) { + /* eslint-disable no-console */ + return function (message) { + if (global.console && console.warn) { + console.warn(message) + } + } + /* eslint-enable no-console */ +})(this) + +/** + * Convert an object to a string. + * + * In the case of `null` and `undefined` the function returns + * the empty string, in all other cases the result of calling + * `toString` on the passed object is returned. + * + * @param {Any} obj The object to convert to a string. + * @return {String} string representation of the passed object. + * @memberOf lunr.utils + */ +lunr.utils.asString = function (obj) { + if (obj === void 0 || obj === null) { + return "" + } else { + return obj.toString() + } +} + +/** + * Clones an object. + * + * Will create a copy of an existing object such that any mutations + * on the copy cannot affect the original. + * + * Only shallow objects are supported, passing a nested object to this + * function will cause a TypeError. + * + * Objects with primitives, and arrays of primitives are supported. + * + * @param {Object} obj The object to clone. + * @return {Object} a clone of the passed object. + * @throws {TypeError} when a nested object is passed. 
+ * @memberOf Utils + */ +lunr.utils.clone = function (obj) { + if (obj === null || obj === undefined) { + return obj + } + + var clone = Object.create(null), + keys = Object.keys(obj) + + for (var i = 0; i < keys.length; i++) { + var key = keys[i], + val = obj[key] + + if (Array.isArray(val)) { + clone[key] = val.slice() + continue + } + + if (typeof val === 'string' || + typeof val === 'number' || + typeof val === 'boolean') { + clone[key] = val + continue + } + + throw new TypeError("clone is not deep and does not support nested objects") + } + + return clone +} +lunr.FieldRef = function (docRef, fieldName, stringValue) { + this.docRef = docRef + this.fieldName = fieldName + this._stringValue = stringValue +} + +lunr.FieldRef.joiner = "/" + +lunr.FieldRef.fromString = function (s) { + var n = s.indexOf(lunr.FieldRef.joiner) + + if (n === -1) { + throw "malformed field ref string" + } + + var fieldRef = s.slice(0, n), + docRef = s.slice(n + 1) + + return new lunr.FieldRef (docRef, fieldRef, s) +} + +lunr.FieldRef.prototype.toString = function () { + if (this._stringValue == undefined) { + this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef + } + + return this._stringValue +} +/*! + * lunr.Set + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A lunr set. + * + * @constructor + */ +lunr.Set = function (elements) { + this.elements = Object.create(null) + + if (elements) { + this.length = elements.length + + for (var i = 0; i < this.length; i++) { + this.elements[elements[i]] = true + } + } else { + this.length = 0 + } +} + +/** + * A complete set that contains all elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.complete = { + intersect: function (other) { + return other + }, + + union: function () { + return this + }, + + contains: function () { + return true + } +} + +/** + * An empty set that contains no elements. 
+ * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.empty = { + intersect: function () { + return this + }, + + union: function (other) { + return other + }, + + contains: function () { + return false + } +} + +/** + * Returns true if this set contains the specified object. + * + * @param {object} object - Object whose presence in this set is to be tested. + * @returns {boolean} - True if this set contains the specified object. + */ +lunr.Set.prototype.contains = function (object) { + return !!this.elements[object] +} + +/** + * Returns a new set containing only the elements that are present in both + * this set and the specified set. + * + * @param {lunr.Set} other - set to intersect with this set. + * @returns {lunr.Set} a new set that is the intersection of this and the specified set. + */ + +lunr.Set.prototype.intersect = function (other) { + var a, b, elements, intersection = [] + + if (other === lunr.Set.complete) { + return this + } + + if (other === lunr.Set.empty) { + return other + } + + if (this.length < other.length) { + a = this + b = other + } else { + a = other + b = this + } + + elements = Object.keys(a.elements) + + for (var i = 0; i < elements.length; i++) { + var element = elements[i] + if (element in b.elements) { + intersection.push(element) + } + } + + return new lunr.Set (intersection) +} + +/** + * Returns a new set combining the elements of this and the specified set. + * + * @param {lunr.Set} other - set to union with this set. + * @return {lunr.Set} a new set that is the union of this and the specified set. + */ + +lunr.Set.prototype.union = function (other) { + if (other === lunr.Set.complete) { + return lunr.Set.complete + } + + if (other === lunr.Set.empty) { + return this + } + + return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) +} +/** + * A function to calculate the inverse document frequency for + * a posting. 
This is shared between the builder and the index + * + * @private + * @param {object} posting - The posting for a given term + * @param {number} documentCount - The total number of documents. + */ +lunr.idf = function (posting, documentCount) { + var documentsWithTerm = 0 + + for (var fieldName in posting) { + if (fieldName == '_index') continue // Ignore the term index, its not a field + documentsWithTerm += Object.keys(posting[fieldName]).length + } + + var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) + + return Math.log(1 + Math.abs(x)) +} + +/** + * A token wraps a string representation of a token + * as it is passed through the text processing pipeline. + * + * @constructor + * @param {string} [str=''] - The string token being wrapped. + * @param {object} [metadata={}] - Metadata associated with this token. + */ +lunr.Token = function (str, metadata) { + this.str = str || "" + this.metadata = metadata || {} +} + +/** + * Returns the token string that is being wrapped by this object. + * + * @returns {string} + */ +lunr.Token.prototype.toString = function () { + return this.str +} + +/** + * A token update function is used when updating or optionally + * when cloning a token. + * + * @callback lunr.Token~updateFunction + * @param {string} str - The string representation of the token. + * @param {Object} metadata - All metadata associated with this token. + */ + +/** + * Applies the given function to the wrapped string token. + * + * @example + * token.update(function (str, metadata) { + * return str.toUpperCase() + * }) + * + * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. + * @returns {lunr.Token} + */ +lunr.Token.prototype.update = function (fn) { + this.str = fn(this.str, this.metadata) + return this +} + +/** + * Creates a clone of this token. Optionally a function can be + * applied to the cloned token. 
+ * + * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. + * @returns {lunr.Token} + */ +lunr.Token.prototype.clone = function (fn) { + fn = fn || function (s) { return s } + return new lunr.Token (fn(this.str, this.metadata), this.metadata) +} +/*! + * lunr.tokenizer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A function for splitting a string into tokens ready to be inserted into + * the search index. Uses `lunr.tokenizer.separator` to split strings, change + * the value of this property to change how strings are split into tokens. + * + * This tokenizer will convert its parameter to a string by calling `toString` and + * then will split this string on the character in `lunr.tokenizer.separator`. + * Arrays will have their elements converted to strings and wrapped in a lunr.Token. + * + * Optional metadata can be passed to the tokenizer, this metadata will be cloned and + * added as metadata to every token that is created from the object to be tokenized. 
+ * + * @static + * @param {?(string|object|object[])} obj - The object to convert into tokens + * @param {?object} metadata - Optional metadata to associate with every token + * @returns {lunr.Token[]} + * @see {@link lunr.Pipeline} + */ +lunr.tokenizer = function (obj, metadata) { + if (obj == null || obj == undefined) { + return [] + } + + if (Array.isArray(obj)) { + return obj.map(function (t) { + return new lunr.Token( + lunr.utils.asString(t).toLowerCase(), + lunr.utils.clone(metadata) + ) + }) + } + + var str = obj.toString().toLowerCase(), + len = str.length, + tokens = [] + + for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { + var char = str.charAt(sliceEnd), + sliceLength = sliceEnd - sliceStart + + if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { + + if (sliceLength > 0) { + var tokenMetadata = lunr.utils.clone(metadata) || {} + tokenMetadata["position"] = [sliceStart, sliceLength] + tokenMetadata["index"] = tokens.length + + tokens.push( + new lunr.Token ( + str.slice(sliceStart, sliceEnd), + tokenMetadata + ) + ) + } + + sliceStart = sliceEnd + 1 + } + + } + + return tokens +} + +/** + * The separator used to split a string into tokens. Override this property to change the behaviour of + * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. + * + * @static + * @see lunr.tokenizer + */ +lunr.tokenizer.separator = /[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Pipelines maintain an ordered list of functions to be applied to all + * tokens in documents entering the search index and queries being ran against + * the index. + * + * An instance of lunr.Index created with the lunr shortcut will contain a + * pipeline with a stop word filter and an English language stemmer. Extra + * functions can be added before or after either of these functions or these + * default functions can be removed. 
+ * + * When run the pipeline will call each function in turn, passing a token, the + * index of that token in the original list of all tokens and finally a list of + * all the original tokens. + * + * The output of functions in the pipeline will be passed to the next function + * in the pipeline. To exclude a token from entering the index the function + * should return undefined, the rest of the pipeline will not be called with + * this token. + * + * For serialisation of pipelines to work, all functions used in an instance of + * a pipeline should be registered with lunr.Pipeline. Registered functions can + * then be loaded. If trying to load a serialised pipeline that uses functions + * that are not registered an error will be thrown. + * + * If not planning on serialising the pipeline then registering pipeline functions + * is not necessary. + * + * @constructor + */ +lunr.Pipeline = function () { + this._stack = [] +} + +lunr.Pipeline.registeredFunctions = Object.create(null) + +/** + * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token + * string as well as all known metadata. A pipeline function can mutate the token string + * or mutate (or add) metadata for a given token. + * + * A pipeline function can indicate that the passed token should be discarded by returning + * null, undefined or an empty string. This token will not be passed to any downstream pipeline + * functions and will not be added to the index. + * + * Multiple tokens can be returned by returning an array of tokens. Each token will be passed + * to any downstream pipeline functions and all will returned tokens will be added to the index. + * + * Any number of pipeline functions may be chained together using a lunr.Pipeline. + * + * @interface lunr.PipelineFunction + * @param {lunr.Token} token - A token from the document being processed. + * @param {number} i - The index of this token in the complete list of tokens for this document/field. 
+ * @param {lunr.Token[]} tokens - All tokens for this document/field. + * @returns {(?lunr.Token|lunr.Token[])} + */ + +/** + * Register a function with the pipeline. + * + * Functions that are used in the pipeline should be registered if the pipeline + * needs to be serialised, or a serialised pipeline needs to be loaded. + * + * Registering a function does not add it to a pipeline, functions must still be + * added to instances of the pipeline for them to be used when running a pipeline. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @param {String} label - The label to register this function with + */ +lunr.Pipeline.registerFunction = function (fn, label) { + if (label in this.registeredFunctions) { + lunr.utils.warn('Overwriting existing registered function: ' + label) + } + + fn.label = label + lunr.Pipeline.registeredFunctions[fn.label] = fn +} + +/** + * Warns if the function is not registered as a Pipeline function. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @private + */ +lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { + var isRegistered = fn.label && (fn.label in this.registeredFunctions) + + if (!isRegistered) { + lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) + } +} + +/** + * Loads a previously serialised pipeline. + * + * All functions to be loaded must already be registered with lunr.Pipeline. + * If any function from the serialised data has not been registered then an + * error will be thrown. + * + * @param {Object} serialised - The serialised pipeline to load. 
+ * @returns {lunr.Pipeline} + */ +lunr.Pipeline.load = function (serialised) { + var pipeline = new lunr.Pipeline + + serialised.forEach(function (fnName) { + var fn = lunr.Pipeline.registeredFunctions[fnName] + + if (fn) { + pipeline.add(fn) + } else { + throw new Error('Cannot load unregistered function: ' + fnName) + } + }) + + return pipeline +} + +/** + * Adds new functions to the end of the pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. + */ +lunr.Pipeline.prototype.add = function () { + var fns = Array.prototype.slice.call(arguments) + + fns.forEach(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + this._stack.push(fn) + }, this) +} + +/** + * Adds a single function after a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. + */ +lunr.Pipeline.prototype.after = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + pos = pos + 1 + this._stack.splice(pos, 0, newFn) +} + +/** + * Adds a single function before a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
+ */ +lunr.Pipeline.prototype.before = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + this._stack.splice(pos, 0, newFn) +} + +/** + * Removes a function from the pipeline. + * + * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. + */ +lunr.Pipeline.prototype.remove = function (fn) { + var pos = this._stack.indexOf(fn) + if (pos == -1) { + return + } + + this._stack.splice(pos, 1) +} + +/** + * Runs the current list of functions that make up the pipeline against the + * passed tokens. + * + * @param {Array} tokens The tokens to run through the pipeline. + * @returns {Array} + */ +lunr.Pipeline.prototype.run = function (tokens) { + var stackLength = this._stack.length + + for (var i = 0; i < stackLength; i++) { + var fn = this._stack[i] + var memo = [] + + for (var j = 0; j < tokens.length; j++) { + var result = fn(tokens[j], j, tokens) + + if (result === null || result === void 0 || result === '') continue + + if (Array.isArray(result)) { + for (var k = 0; k < result.length; k++) { + memo.push(result[k]) + } + } else { + memo.push(result) + } + } + + tokens = memo + } + + return tokens +} + +/** + * Convenience method for passing a string through a pipeline and getting + * strings out. This method takes care of wrapping the passed string in a + * token and mapping the resulting tokens back to strings. + * + * @param {string} str - The string to pass through the pipeline. + * @param {?object} metadata - Optional metadata to associate with the token + * passed to the pipeline. + * @returns {string[]} + */ +lunr.Pipeline.prototype.runString = function (str, metadata) { + var token = new lunr.Token (str, metadata) + + return this.run([token]).map(function (t) { + return t.toString() + }) +} + +/** + * Resets the pipeline by removing any existing processors. 
+ * + */ +lunr.Pipeline.prototype.reset = function () { + this._stack = [] +} + +/** + * Returns a representation of the pipeline ready for serialisation. + * + * Logs a warning if the function has not been registered. + * + * @returns {Array} + */ +lunr.Pipeline.prototype.toJSON = function () { + return this._stack.map(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + + return fn.label + }) +} +/*! + * lunr.Vector + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A vector is used to construct the vector space of documents and queries. These + * vectors support operations to determine the similarity between two documents or + * a document and a query. + * + * Normally no parameters are required for initializing a vector, but in the case of + * loading a previously dumped vector the raw elements can be provided to the constructor. + * + * For performance reasons vectors are implemented with a flat array, where an elements + * index is immediately followed by its value. E.g. [index, value, index, value]. This + * allows the underlying array to be as sparse as possible and still offer decent + * performance when being used for vector calculations. + * + * @constructor + * @param {Number[]} [elements] - The flat list of element index and element value pairs. + */ +lunr.Vector = function (elements) { + this._magnitude = 0 + this.elements = elements || [] +} + + +/** + * Calculates the position within the vector to insert a given index. + * + * This is used internally by insert and upsert. If there are duplicate indexes then + * the position is returned as if the value for that index were to be updated, but it + * is the callers responsibility to check whether there is a duplicate at that index + * + * @param {Number} insertIdx - The index at which the element should be inserted. 
+ * @returns {Number} + */ +lunr.Vector.prototype.positionForIndex = function (index) { + // For an empty vector the tuple can be inserted at the beginning + if (this.elements.length == 0) { + return 0 + } + + var start = 0, + end = this.elements.length / 2, + sliceLength = end - start, + pivotPoint = Math.floor(sliceLength / 2), + pivotIndex = this.elements[pivotPoint * 2] + + while (sliceLength > 1) { + if (pivotIndex < index) { + start = pivotPoint + } + + if (pivotIndex > index) { + end = pivotPoint + } + + if (pivotIndex == index) { + break + } + + sliceLength = end - start + pivotPoint = start + Math.floor(sliceLength / 2) + pivotIndex = this.elements[pivotPoint * 2] + } + + if (pivotIndex == index) { + return pivotPoint * 2 + } + + if (pivotIndex > index) { + return pivotPoint * 2 + } + + if (pivotIndex < index) { + return (pivotPoint + 1) * 2 + } +} + +/** + * Inserts an element at an index within the vector. + * + * Does not allow duplicates, will throw an error if there is already an entry + * for this index. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + */ +lunr.Vector.prototype.insert = function (insertIdx, val) { + this.upsert(insertIdx, val, function () { + throw "duplicate index" + }) +} + +/** + * Inserts or updates an existing index within the vector. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. 
+ * @param {function} fn - A function that is called for updates, the existing value and the + * requested value are passed as arguments + */ +lunr.Vector.prototype.upsert = function (insertIdx, val, fn) { + this._magnitude = 0 + var position = this.positionForIndex(insertIdx) + + if (this.elements[position] == insertIdx) { + this.elements[position + 1] = fn(this.elements[position + 1], val) + } else { + this.elements.splice(position, 0, insertIdx, val) + } +} + +/** + * Calculates the magnitude of this vector. + * + * @returns {Number} + */ +lunr.Vector.prototype.magnitude = function () { + if (this._magnitude) return this._magnitude + + var sumOfSquares = 0, + elementsLength = this.elements.length + + for (var i = 1; i < elementsLength; i += 2) { + var val = this.elements[i] + sumOfSquares += val * val + } + + return this._magnitude = Math.sqrt(sumOfSquares) +} + +/** + * Calculates the dot product of this vector and another vector. + * + * @param {lunr.Vector} otherVector - The vector to compute the dot product with. + * @returns {Number} + */ +lunr.Vector.prototype.dot = function (otherVector) { + var dotProduct = 0, + a = this.elements, b = otherVector.elements, + aLen = a.length, bLen = b.length, + aVal = 0, bVal = 0, + i = 0, j = 0 + + while (i < aLen && j < bLen) { + aVal = a[i], bVal = b[j] + if (aVal < bVal) { + i += 2 + } else if (aVal > bVal) { + j += 2 + } else if (aVal == bVal) { + dotProduct += a[i + 1] * b[j + 1] + i += 2 + j += 2 + } + } + + return dotProduct +} + +/** + * Calculates the similarity between this vector and another vector. + * + * @param {lunr.Vector} otherVector - The other vector to calculate the + * similarity with. + * @returns {Number} + */ +lunr.Vector.prototype.similarity = function (otherVector) { + return this.dot(otherVector) / this.magnitude() || 0 +} + +/** + * Converts the vector to an array of the elements within the vector. 
+ * + * @returns {Number[]} + */ +lunr.Vector.prototype.toArray = function () { + var output = new Array (this.elements.length / 2) + + for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) { + output[j] = this.elements[i] + } + + return output +} + +/** + * A JSON serializable representation of the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toJSON = function () { + return this.elements +} +/* eslint-disable */ +/*! + * lunr.stemmer + * Copyright (C) 2020 Oliver Nightingale + * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt + */ + +/** + * lunr.stemmer is an english language stemmer, this is a JavaScript + * implementation of the PorterStemmer taken from http://tartarus.org/~martin + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token - The string to stem + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + * @function + */ +lunr.stemmer = (function(){ + var step2list = { + "ational" : "ate", + "tional" : "tion", + "enci" : "ence", + "anci" : "ance", + "izer" : "ize", + "bli" : "ble", + "alli" : "al", + "entli" : "ent", + "eli" : "e", + "ousli" : "ous", + "ization" : "ize", + "ation" : "ate", + "ator" : "ate", + "alism" : "al", + "iveness" : "ive", + "fulness" : "ful", + "ousness" : "ous", + "aliti" : "al", + "iviti" : "ive", + "biliti" : "ble", + "logi" : "log" + }, + + step3list = { + "icate" : "ic", + "ative" : "", + "alize" : "al", + "iciti" : "ic", + "ical" : "ic", + "ful" : "", + "ness" : "" + }, + + c = "[^aeiou]", // consonant + v = "[aeiouy]", // vowel + C = c + "[^aeiouy]*", // consonant sequence + V = v + "[aeiou]*", // vowel sequence + + mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0 + meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1 + mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1 + s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + var re_mgr0 = new RegExp(mgr0); + var re_mgr1 = new RegExp(mgr1); + var re_meq1 = new RegExp(meq1); + var re_s_v = new RegExp(s_v); + + var re_1a = /^(.+?)(ss|i)es$/; + var re2_1a = /^(.+?)([^s])s$/; + var re_1b = /^(.+?)eed$/; + var re2_1b = /^(.+?)(ed|ing)$/; + var re_1b_2 = /.$/; + var re2_1b_2 = /(at|bl|iz)$/; + var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$"); + var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var re_1c = /^(.+?[^aeiou])y$/; + var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + + var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + + var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + var re2_4 = /^(.+?)(s|t)(ion)$/; + + var re_5 = /^(.+?)e$/; + var re_5_1 = /ll$/; + var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var porterStemmer = function porterStemmer(w) { + var stem, + suffix, + firstch, + re, + re2, + re3, + re4; + + if (w.length < 3) { return w; } + + firstch = w.substr(0,1); + if (firstch == "y") { + w = firstch.toUpperCase() + w.substr(1); + } + + // Step 1a + re = re_1a + re2 = re2_1a; + + if (re.test(w)) { w = w.replace(re,"$1$2"); } + else if (re2.test(w)) { w = w.replace(re2,"$1$2"); } + + // Step 1b + re = re_1b; + re2 = re2_1b; + if (re.test(w)) { + var fp = re.exec(w); + re = re_mgr0; + if (re.test(fp[1])) { + re = re_1b_2; + w = w.replace(re,""); + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = re_s_v; + if (re2.test(stem)) { + w = stem; + re2 = re2_1b_2; + re3 = re3_1b_2; + re4 = re4_1b_2; + if (re2.test(w)) { w = w + "e"; } + else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); } + else if (re4.test(w)) { w = w + "e"; } + } + } + + // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say) + re = 
re_1c; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + w = stem + "i"; + } + + // Step 2 + re = re_2; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step2list[suffix]; + } + } + + // Step 3 + re = re_3; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step3list[suffix]; + } + } + + // Step 4 + re = re_4; + re2 = re2_4; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + if (re.test(stem)) { + w = stem; + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = re_mgr1; + if (re2.test(stem)) { + w = stem; + } + } + + // Step 5 + re = re_5; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + re2 = re_meq1; + re3 = re3_5; + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) { + w = stem; + } + } + + re = re_5_1; + re2 = re_mgr1; + if (re.test(w) && re2.test(w)) { + re = re_1b_2; + w = w.replace(re,""); + } + + // and turn initial Y back to y + + if (firstch == "y") { + w = firstch.toLowerCase() + w.substr(1); + } + + return w; + }; + + return function (token) { + return token.update(porterStemmer); + } +})(); + +lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer') +/*! + * lunr.stopWordFilter + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.generateStopWordFilter builds a stopWordFilter function from the provided + * list of stop words. + * + * The built in lunr.stopWordFilter is built using this generator and can be used + * to generate custom stopWordFilters for applications or non English languages. 
+ * + * @function + * @param {Array} token The token to pass through the filter + * @returns {lunr.PipelineFunction} + * @see lunr.Pipeline + * @see lunr.stopWordFilter + */ +lunr.generateStopWordFilter = function (stopWords) { + var words = stopWords.reduce(function (memo, stopWord) { + memo[stopWord] = stopWord + return memo + }, {}) + + return function (token) { + if (token && words[token.toString()] !== token.toString()) return token + } +} + +/** + * lunr.stopWordFilter is an English language stop word list filter, any words + * contained in the list will not be passed through the filter. + * + * This is intended to be used in the Pipeline. If the token does not pass the + * filter then undefined will be returned. + * + * @function + * @implements {lunr.PipelineFunction} + * @params {lunr.Token} token - A token to check for being a stop word. + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + */ +lunr.stopWordFilter = lunr.generateStopWordFilter([ + 'a', + 'able', + 'about', + 'across', + 'after', + 'all', + 'almost', + 'also', + 'am', + 'among', + 'an', + 'and', + 'any', + 'are', + 'as', + 'at', + 'be', + 'because', + 'been', + 'but', + 'by', + 'can', + 'cannot', + 'could', + 'dear', + 'did', + 'do', + 'does', + 'either', + 'else', + 'ever', + 'every', + 'for', + 'from', + 'get', + 'got', + 'had', + 'has', + 'have', + 'he', + 'her', + 'hers', + 'him', + 'his', + 'how', + 'however', + 'i', + 'if', + 'in', + 'into', + 'is', + 'it', + 'its', + 'just', + 'least', + 'let', + 'like', + 'likely', + 'may', + 'me', + 'might', + 'most', + 'must', + 'my', + 'neither', + 'no', + 'nor', + 'not', + 'of', + 'off', + 'often', + 'on', + 'only', + 'or', + 'other', + 'our', + 'own', + 'rather', + 'said', + 'say', + 'says', + 'she', + 'should', + 'since', + 'so', + 'some', + 'than', + 'that', + 'the', + 'their', + 'them', + 'then', + 'there', + 'these', + 'they', + 'this', + 'tis', + 'to', + 'too', + 'twas', + 'us', + 'wants', + 'was', + 'we', + 'were', + 'what', + 
'when', + 'where', + 'which', + 'while', + 'who', + 'whom', + 'why', + 'will', + 'with', + 'would', + 'yet', + 'you', + 'your' +]) + +lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter') +/*! + * lunr.trimmer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.trimmer is a pipeline function for trimming non word + * characters from the beginning and end of tokens before they + * enter the index. + * + * This implementation may not work correctly for non latin + * characters and should either be removed or adapted for use + * with languages with non-latin characters. + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token The token to pass through the filter + * @returns {lunr.Token} + * @see lunr.Pipeline + */ +lunr.trimmer = function (token) { + return token.update(function (s) { + return s.replace(/^\W+/, '').replace(/\W+$/, '') + }) +} + +lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer') +/*! + * lunr.TokenSet + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A token set is used to store the unique list of all tokens + * within an index. Token sets are also used to represent an + * incoming query to the index, this query token set and index + * token set are then intersected to find which tokens to look + * up in the inverted index. + * + * A token set can hold multiple tokens, as in the case of the + * index token set, or it can hold a single token as in the + * case of a simple query token set. + * + * Additionally token sets are used to perform wildcard matching. + * Leading, contained and trailing wildcards are supported, and + * from this edit distance matching can also be provided. + * + * Token sets are implemented as a minimal finite state automata, + * where both common prefixes and suffixes are shared between tokens. + * This helps to reduce the space used for storing the token set. 
+ * + * @constructor + */ +lunr.TokenSet = function () { + this.final = false + this.edges = {} + this.id = lunr.TokenSet._nextId + lunr.TokenSet._nextId += 1 +} + +/** + * Keeps track of the next, auto increment, identifier to assign + * to a new tokenSet. + * + * TokenSets require a unique identifier to be correctly minimised. + * + * @private + */ +lunr.TokenSet._nextId = 1 + +/** + * Creates a TokenSet instance from the given sorted array of words. + * + * @param {String[]} arr - A sorted array of strings to create the set from. + * @returns {lunr.TokenSet} + * @throws Will throw an error if the input array is not sorted. + */ +lunr.TokenSet.fromArray = function (arr) { + var builder = new lunr.TokenSet.Builder + + for (var i = 0, len = arr.length; i < len; i++) { + builder.insert(arr[i]) + } + + builder.finish() + return builder.root +} + +/** + * Creates a token set from a query clause. + * + * @private + * @param {Object} clause - A single clause from lunr.Query. + * @param {string} clause.term - The query clause term. + * @param {number} [clause.editDistance] - The optional edit distance for the term. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromClause = function (clause) { + if ('editDistance' in clause) { + return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance) + } else { + return lunr.TokenSet.fromString(clause.term) + } +} + +/** + * Creates a token set representing a single string with a specified + * edit distance. + * + * Insertions, deletions, substitutions and transpositions are each + * treated as an edit distance of 1. + * + * Increasing the allowed edit distance will have a dramatic impact + * on the performance of both creating and intersecting these TokenSets. + * It is advised to keep the edit distance less than 3. + * + * @param {string} str - The string to create the token set from. + * @param {number} editDistance - The allowed edit distance to match. 
+ * @returns {lunr.Vector} + */ +lunr.TokenSet.fromFuzzyString = function (str, editDistance) { + var root = new lunr.TokenSet + + var stack = [{ + node: root, + editsRemaining: editDistance, + str: str + }] + + while (stack.length) { + var frame = stack.pop() + + // no edit + if (frame.str.length > 0) { + var char = frame.str.charAt(0), + noEditNode + + if (char in frame.node.edges) { + noEditNode = frame.node.edges[char] + } else { + noEditNode = new lunr.TokenSet + frame.node.edges[char] = noEditNode + } + + if (frame.str.length == 1) { + noEditNode.final = true + } + + stack.push({ + node: noEditNode, + editsRemaining: frame.editsRemaining, + str: frame.str.slice(1) + }) + } + + if (frame.editsRemaining == 0) { + continue + } + + // insertion + if ("*" in frame.node.edges) { + var insertionNode = frame.node.edges["*"] + } else { + var insertionNode = new lunr.TokenSet + frame.node.edges["*"] = insertionNode + } + + if (frame.str.length == 0) { + insertionNode.final = true + } + + stack.push({ + node: insertionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str + }) + + // deletion + // can only do a deletion if we have enough edits remaining + // and if there are characters left to delete in the string + if (frame.str.length > 1) { + stack.push({ + node: frame.node, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // deletion + // just removing the last character from the str + if (frame.str.length == 1) { + frame.node.final = true + } + + // substitution + // can only do a substitution if we have enough edits remaining + // and if there are characters left to substitute + if (frame.str.length >= 1) { + if ("*" in frame.node.edges) { + var substitutionNode = frame.node.edges["*"] + } else { + var substitutionNode = new lunr.TokenSet + frame.node.edges["*"] = substitutionNode + } + + if (frame.str.length == 1) { + substitutionNode.final = true + } + + stack.push({ + node: substitutionNode, + editsRemaining: 
frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // transposition + // can only do a transposition if there are edits remaining + // and there are enough characters to transpose + if (frame.str.length > 1) { + var charA = frame.str.charAt(0), + charB = frame.str.charAt(1), + transposeNode + + if (charB in frame.node.edges) { + transposeNode = frame.node.edges[charB] + } else { + transposeNode = new lunr.TokenSet + frame.node.edges[charB] = transposeNode + } + + if (frame.str.length == 1) { + transposeNode.final = true + } + + stack.push({ + node: transposeNode, + editsRemaining: frame.editsRemaining - 1, + str: charA + frame.str.slice(2) + }) + } + } + + return root +} + +/** + * Creates a TokenSet from a string. + * + * The string may contain one or more wildcard characters (*) + * that will allow wildcard matching when intersecting with + * another TokenSet. + * + * @param {string} str - The string to create a TokenSet from. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromString = function (str) { + var node = new lunr.TokenSet, + root = node + + /* + * Iterates through all characters within the passed string + * appending a node for each character. + * + * When a wildcard character is found then a self + * referencing edge is introduced to continually match + * any number of any characters. + */ + for (var i = 0, len = str.length; i < len; i++) { + var char = str[i], + final = (i == len - 1) + + if (char == "*") { + node.edges[char] = node + node.final = final + + } else { + var next = new lunr.TokenSet + next.final = final + + node.edges[char] = next + node = next + } + } + + return root +} + +/** + * Converts this TokenSet into an array of strings + * contained within the TokenSet. + * + * This is not intended to be used on a TokenSet that + * contains wildcards, in these cases the results are + * undefined and are likely to cause an infinite loop. 
+ * + * @returns {string[]} + */ +lunr.TokenSet.prototype.toArray = function () { + var words = [] + + var stack = [{ + prefix: "", + node: this + }] + + while (stack.length) { + var frame = stack.pop(), + edges = Object.keys(frame.node.edges), + len = edges.length + + if (frame.node.final) { + /* In Safari, at this point the prefix is sometimes corrupted, see: + * https://github.com/olivernn/lunr.js/issues/279 Calling any + * String.prototype method forces Safari to "cast" this string to what + * it's supposed to be, fixing the bug. */ + frame.prefix.charAt(0) + words.push(frame.prefix) + } + + for (var i = 0; i < len; i++) { + var edge = edges[i] + + stack.push({ + prefix: frame.prefix.concat(edge), + node: frame.node.edges[edge] + }) + } + } + + return words +} + +/** + * Generates a string representation of a TokenSet. + * + * This is intended to allow TokenSets to be used as keys + * in objects, largely to aid the construction and minimisation + * of a TokenSet. As such it is not designed to be a human + * friendly representation of the TokenSet. + * + * @returns {string} + */ +lunr.TokenSet.prototype.toString = function () { + // NOTE: Using Object.keys here as this.edges is very likely + // to enter 'hash-mode' with many keys being added + // + // avoiding a for-in loop here as it leads to the function + // being de-optimised (at least in V8). From some simple + // benchmarks the performance is comparable, but allowing + // V8 to optimize may mean easy performance wins in the future. + + if (this._str) { + return this._str + } + + var str = this.final ? '1' : '0', + labels = Object.keys(this.edges).sort(), + len = labels.length + + for (var i = 0; i < len; i++) { + var label = labels[i], + node = this.edges[label] + + str = str + label + node.id + } + + return str +} + +/** + * Returns a new TokenSet that is the intersection of + * this TokenSet and the passed TokenSet. 
+ * + * This intersection will take into account any wildcards + * contained within the TokenSet. + * + * @param {lunr.TokenSet} b - An other TokenSet to intersect with. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.prototype.intersect = function (b) { + var output = new lunr.TokenSet, + frame = undefined + + var stack = [{ + qNode: b, + output: output, + node: this + }] + + while (stack.length) { + frame = stack.pop() + + // NOTE: As with the #toString method, we are using + // Object.keys and a for loop instead of a for-in loop + // as both of these objects enter 'hash' mode, causing + // the function to be de-optimised in V8 + var qEdges = Object.keys(frame.qNode.edges), + qLen = qEdges.length, + nEdges = Object.keys(frame.node.edges), + nLen = nEdges.length + + for (var q = 0; q < qLen; q++) { + var qEdge = qEdges[q] + + for (var n = 0; n < nLen; n++) { + var nEdge = nEdges[n] + + if (nEdge == qEdge || qEdge == '*') { + var node = frame.node.edges[nEdge], + qNode = frame.qNode.edges[qEdge], + final = node.final && qNode.final, + next = undefined + + if (nEdge in frame.output.edges) { + // an edge already exists for this character + // no need to create a new node, just set the finality + // bit unless this node is already final + next = frame.output.edges[nEdge] + next.final = next.final || final + + } else { + // no edge exists yet, must create one + // set the finality bit and insert it + // into the output + next = new lunr.TokenSet + next.final = final + frame.output.edges[nEdge] = next + } + + stack.push({ + qNode: qNode, + output: next, + node: node + }) + } + } + } + } + + return output +} +lunr.TokenSet.Builder = function () { + this.previousWord = "" + this.root = new lunr.TokenSet + this.uncheckedNodes = [] + this.minimizedNodes = {} +} + +lunr.TokenSet.Builder.prototype.insert = function (word) { + var node, + commonPrefix = 0 + + if (word < this.previousWord) { + throw new Error ("Out of order word insertion") + } + + for (var i = 0; i < 
word.length && i < this.previousWord.length; i++) { + if (word[i] != this.previousWord[i]) break + commonPrefix++ + } + + this.minimize(commonPrefix) + + if (this.uncheckedNodes.length == 0) { + node = this.root + } else { + node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child + } + + for (var i = commonPrefix; i < word.length; i++) { + var nextNode = new lunr.TokenSet, + char = word[i] + + node.edges[char] = nextNode + + this.uncheckedNodes.push({ + parent: node, + char: char, + child: nextNode + }) + + node = nextNode + } + + node.final = true + this.previousWord = word +} + +lunr.TokenSet.Builder.prototype.finish = function () { + this.minimize(0) +} + +lunr.TokenSet.Builder.prototype.minimize = function (downTo) { + for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) { + var node = this.uncheckedNodes[i], + childKey = node.child.toString() + + if (childKey in this.minimizedNodes) { + node.parent.edges[node.char] = this.minimizedNodes[childKey] + } else { + // Cache the key for this node since + // we know it can't change anymore + node.child._str = childKey + + this.minimizedNodes[childKey] = node.child + } + + this.uncheckedNodes.pop() + } +} +/*! + * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * An index contains the built index of all documents and provides a query interface + * to the index. + * + * Usually instances of lunr.Index will not be created using this constructor, instead + * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be + * used to load previously built and serialized indexes. + * + * @constructor + * @param {Object} attrs - The attributes of the built search index. + * @param {Object} attrs.invertedIndex - An index of term/field to document reference. + * @param {Object} attrs.fieldVectors - Field vectors + * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens. + * @param {string[]} attrs.fields - The names of indexed document fields. 
+ * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms. + */ +lunr.Index = function (attrs) { + this.invertedIndex = attrs.invertedIndex + this.fieldVectors = attrs.fieldVectors + this.tokenSet = attrs.tokenSet + this.fields = attrs.fields + this.pipeline = attrs.pipeline +} + +/** + * A result contains details of a document matching a search query. + * @typedef {Object} lunr.Index~Result + * @property {string} ref - The reference of the document this result represents. + * @property {number} score - A number between 0 and 1 representing how similar this document is to the query. + * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match. + */ + +/** + * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple + * query language which itself is parsed into an instance of lunr.Query. + * + * For programmatically building queries it is advised to directly use lunr.Query, the query language + * is best used for human entered text rather than program generated text. + * + * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported + * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello' + * or 'world', though those that contain both will rank higher in the results. + * + * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can + * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding + * wildcards will increase the number of documents that will be found but can also have a negative + * impact on query performance, especially with wildcards at the beginning of a term. + * + * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term + * hello in the title field will match this query. 
Using a field not present in the index will lead + * to an error being thrown. + * + * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term + * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported + * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2. + * Avoid large values for edit distance to improve query performance. + * + * Each term also supports a presence modifier. By default a term's presence in document is optional, however + * this can be changed to either required or prohibited. For a term's presence to be required in a document the + * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and + * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not + * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'. + * + * To escape special characters the backslash character '\' can be used, this allows searches to include + * characters that would normally be considered modifiers, e.g. `foo\~2` will search for a term "foo~2" instead + * of attempting to apply a boost of 2 to the search term "foo". + * + * @typedef {string} lunr.Index~QueryString + * @example Simple single term query + * hello + * @example Multiple term query + * hello world + * @example term scoped to a field + * title:hello + * @example term with a boost of 10 + * hello^10 + * @example term with an edit distance of 2 + * hello~2 + * @example terms with presence modifiers + * -foo +bar baz + */ + +/** + * Performs a search against the index using lunr query syntax. + * + * Results will be returned sorted by their score, the most relevant results + * will be returned first. 
For details on how the score is calculated, please see + * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}. + * + * For more programmatic querying use lunr.Index#query. + * + * @param {lunr.Index~QueryString} queryString - A string containing a lunr query. + * @throws {lunr.QueryParseError} If the passed query string cannot be parsed. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.search = function (queryString) { + return this.query(function (query) { + var parser = new lunr.QueryParser(queryString, query) + parser.parse() + }) +} + +/** + * A query builder callback provides a query object to be used to express + * the query to perform on the index. + * + * @callback lunr.Index~queryBuilder + * @param {lunr.Query} query - The query object to build up. + * @this lunr.Query + */ + +/** + * Performs a query against the index using the yielded lunr.Query object. + * + * If performing programmatic queries against the index, this method is preferred + * over lunr.Index#search so as to avoid the additional query parsing overhead. + * + * A query object is yielded to the supplied function which should be used to + * express the query to be run against the index. + * + * Note that although this function takes a callback parameter it is _not_ an + * asynchronous operation, the callback is just yielded a query object to be + * customized. + * + * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query. 
+ * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.query = function (fn) { + // for each query clause + // * process terms + // * expand terms from token set + // * find matching documents and metadata + // * get document vectors + // * score documents + + var query = new lunr.Query(this.fields), + matchingFields = Object.create(null), + queryVectors = Object.create(null), + termFieldCache = Object.create(null), + requiredMatches = Object.create(null), + prohibitedMatches = Object.create(null) + + /* + * To support field level boosts a query vector is created per + * field. An empty vector is eagerly created to support negated + * queries. + */ + for (var i = 0; i < this.fields.length; i++) { + queryVectors[this.fields[i]] = new lunr.Vector + } + + fn.call(query, query) + + for (var i = 0; i < query.clauses.length; i++) { + /* + * Unless the pipeline has been disabled for this term, which is + * the case for terms with wildcards, we need to pass the clause + * term through the search pipeline. A pipeline returns an array + * of processed terms. Pipeline functions may expand the passed + * term, which means we may end up performing multiple index lookups + * for a single query term. + */ + var clause = query.clauses[i], + terms = null, + clauseMatches = lunr.Set.empty + + if (clause.usePipeline) { + terms = this.pipeline.runString(clause.term, { + fields: clause.fields + }) + } else { + terms = [clause.term] + } + + for (var m = 0; m < terms.length; m++) { + var term = terms[m] + + /* + * Each term returned from the pipeline needs to use the same query + * clause object, e.g. the same boost and or edit distance. The + * simplest way to do this is to re-use the clause object but mutate + * its term property. 
+ */ + clause.term = term + + /* + * From the term in the clause we create a token set which will then + * be used to intersect the indexes token set to get a list of terms + * to lookup in the inverted index + */ + var termTokenSet = lunr.TokenSet.fromClause(clause), + expandedTerms = this.tokenSet.intersect(termTokenSet).toArray() + + /* + * If a term marked as required does not exist in the tokenSet it is + * impossible for the search to return any matches. We set all the field + * scoped required matches set to empty and stop examining any further + * clauses. + */ + if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = lunr.Set.empty + } + + break + } + + for (var j = 0; j < expandedTerms.length; j++) { + /* + * For each term get the posting and termIndex, this is required for + * building the query vector. + */ + var expandedTerm = expandedTerms[j], + posting = this.invertedIndex[expandedTerm], + termIndex = posting._index + + for (var k = 0; k < clause.fields.length; k++) { + /* + * For each field that this query term is scoped by (by default + * all fields are in scope) we need to get all the document refs + * that have this term in that field. + * + * The posting is the entry in the invertedIndex for the matching + * term from above. + */ + var field = clause.fields[k], + fieldPosting = posting[field], + matchingDocumentRefs = Object.keys(fieldPosting), + termField = expandedTerm + "/" + field, + matchingDocumentsSet = new lunr.Set(matchingDocumentRefs) + + /* + * if the presence of this term is required ensure that the matching + * documents are added to the set of required matches for this clause. 
+ * + */ + if (clause.presence == lunr.Query.presence.REQUIRED) { + clauseMatches = clauseMatches.union(matchingDocumentsSet) + + if (requiredMatches[field] === undefined) { + requiredMatches[field] = lunr.Set.complete + } + } + + /* + * if the presence of this term is prohibited ensure that the matching + * documents are added to the set of prohibited matches for this field, + * creating that set if it does not yet exist. + */ + if (clause.presence == lunr.Query.presence.PROHIBITED) { + if (prohibitedMatches[field] === undefined) { + prohibitedMatches[field] = lunr.Set.empty + } + + prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet) + + /* + * Prohibited matches should not be part of the query vector used for + * similarity scoring and no metadata should be extracted so we continue + * to the next field + */ + continue + } + + /* + * The query field vector is populated using the termIndex found for + * the term and a unit value with the appropriate boost applied. + * Using upsert because there could already be an entry in the vector + * for the term we are working with. In that case we just add the scores + * together. 
+ */ + queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b }) + + /** + * If we've already seen this term, field combo then we've already collected + * the matching documents and metadata, no need to go through all that again + */ + if (termFieldCache[termField]) { + continue + } + + for (var l = 0; l < matchingDocumentRefs.length; l++) { + /* + * All metadata for this term/field/document triple + * are then extracted and collected into an instance + * of lunr.MatchData ready to be returned in the query + * results + */ + var matchingDocumentRef = matchingDocumentRefs[l], + matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field), + metadata = fieldPosting[matchingDocumentRef], + fieldMatch + + if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) { + matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata) + } else { + fieldMatch.add(expandedTerm, field, metadata) + } + + } + + termFieldCache[termField] = true + } + } + } + + /** + * If the presence was required we need to update the requiredMatches field sets. + * We do this after all fields for the term have collected their matches because + * the clause terms presence is required in _any_ of the fields not _all_ of the + * fields. 
+ */ + if (clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = requiredMatches[field].intersect(clauseMatches) + } + } + } + + /** + * Need to combine the field scoped required and prohibited + * matching documents into a global set of required and prohibited + * matches + */ + var allRequiredMatches = lunr.Set.complete, + allProhibitedMatches = lunr.Set.empty + + for (var i = 0; i < this.fields.length; i++) { + var field = this.fields[i] + + if (requiredMatches[field]) { + allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field]) + } + + if (prohibitedMatches[field]) { + allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field]) + } + } + + var matchingFieldRefs = Object.keys(matchingFields), + results = [], + matches = Object.create(null) + + /* + * If the query is negated (contains only prohibited terms) + * we need to get _all_ fieldRefs currently existing in the + * index. This is only done when we know that the query is + * entirely prohibited terms to avoid any cost of getting all + * fieldRefs unnecessarily. + * + * Additionally, blank MatchData must be created to correctly + * populate the results. + */ + if (query.isNegated()) { + matchingFieldRefs = Object.keys(this.fieldVectors) + + for (var i = 0; i < matchingFieldRefs.length; i++) { + var matchingFieldRef = matchingFieldRefs[i] + var fieldRef = lunr.FieldRef.fromString(matchingFieldRef) + matchingFields[matchingFieldRef] = new lunr.MatchData + } + } + + for (var i = 0; i < matchingFieldRefs.length; i++) { + /* + * Currently we have document fields that match the query, but we + * need to return documents. The matchData and scores are combined + * from multiple fields belonging to the same document. + * + * Scores are calculated by field, using the query vectors created + * above, and combined into a final document score using addition. 
+ */ + var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), + docRef = fieldRef.docRef + + if (!allRequiredMatches.contains(docRef)) { + continue + } + + if (allProhibitedMatches.contains(docRef)) { + continue + } + + var fieldVector = this.fieldVectors[fieldRef], + score = queryVectors[fieldRef.fieldName].similarity(fieldVector), + docMatch + + if ((docMatch = matches[docRef]) !== undefined) { + docMatch.score += score + docMatch.matchData.combine(matchingFields[fieldRef]) + } else { + var match = { + ref: docRef, + score: score, + matchData: matchingFields[fieldRef] + } + matches[docRef] = match + results.push(match) + } + } + + /* + * Sort the results objects by score, highest first. + */ + return results.sort(function (a, b) { + return b.score - a.score + }) +} + +/** + * Prepares the index for JSON serialization. + * + * The schema for this JSON blob will be described in a + * separate JSON schema file. + * + * @returns {Object} + */ +lunr.Index.prototype.toJSON = function () { + var invertedIndex = Object.keys(this.invertedIndex) + .sort() + .map(function (term) { + return [term, this.invertedIndex[term]] + }, this) + + var fieldVectors = Object.keys(this.fieldVectors) + .map(function (ref) { + return [ref, this.fieldVectors[ref].toJSON()] + }, this) + + return { + version: lunr.version, + fields: this.fields, + fieldVectors: fieldVectors, + invertedIndex: invertedIndex, + pipeline: this.pipeline.toJSON() + } +} + +/** + * Loads a previously serialized lunr.Index + * + * @param {Object} serializedIndex - A previously serialized lunr.Index + * @returns {lunr.Index} + */ +lunr.Index.load = function (serializedIndex) { + var attrs = {}, + fieldVectors = {}, + serializedVectors = serializedIndex.fieldVectors, + invertedIndex = Object.create(null), + serializedInvertedIndex = serializedIndex.invertedIndex, + tokenSetBuilder = new lunr.TokenSet.Builder, + pipeline = lunr.Pipeline.load(serializedIndex.pipeline) + + if (serializedIndex.version != 
lunr.version) {
+    lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'")
+  }
+
+  for (var i = 0; i < serializedVectors.length; i++) {
+    var tuple = serializedVectors[i],
+        ref = tuple[0],
+        elements = tuple[1]
+
+    fieldVectors[ref] = new lunr.Vector(elements)
+  }
+
+  for (var i = 0; i < serializedInvertedIndex.length; i++) {
+    var tuple = serializedInvertedIndex[i],
+        term = tuple[0],
+        posting = tuple[1]
+
+    tokenSetBuilder.insert(term)
+    invertedIndex[term] = posting
+  }
+
+  tokenSetBuilder.finish()
+
+  attrs.fields = serializedIndex.fields
+
+  attrs.fieldVectors = fieldVectors
+  attrs.invertedIndex = invertedIndex
+  attrs.tokenSet = tokenSetBuilder.root
+  attrs.pipeline = pipeline
+
+  return new lunr.Index(attrs)
+}
+/*!
+ * lunr.Builder
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * lunr.Builder performs indexing on a set of documents and
+ * returns instances of lunr.Index ready for querying.
+ *
+ * All configuration of the index is done via the builder, the
+ * fields to index, the document reference, the text processing
+ * pipeline and document scoring parameters are all set on the
+ * builder before indexing.
+ *
+ * @constructor
+ * @property {string} _ref - Internal reference to the document reference field.
+ * @property {string[]} _fields - Internal reference to the document fields to index.
+ * @property {object} invertedIndex - The inverted index maps terms to document fields.
+ * @property {object} fieldTermFrequencies - Keeps track of term frequencies per document field.
+ * @property {object} fieldLengths - Keeps track of the length of each field in documents added to the index.
+ * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.
+ * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.
+ * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.
+ * @property {number} documentCount - Keeps track of the total number of documents indexed.
+ * @property {number} _b - A parameter to control field length normalization, setting this to 0 disables normalization, 1 fully normalizes field lengths, the default value is 0.75.
+ * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.
+ * @property {number} termIndex - A counter incremented for each unique term, used to identify a term's position in the vector space.
+ * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.
+ */
+lunr.Builder = function () {
+  this._ref = "id"
+  this._fields = Object.create(null)
+  this._documents = Object.create(null)
+  this.invertedIndex = Object.create(null)
+  this.fieldTermFrequencies = {}
+  this.fieldLengths = {}
+  this.tokenizer = lunr.tokenizer
+  this.pipeline = new lunr.Pipeline
+  this.searchPipeline = new lunr.Pipeline
+  this.documentCount = 0
+  this._b = 0.75
+  this._k1 = 1.2
+  this.termIndex = 0
+  this.metadataWhitelist = []
+}
+
+/**
+ * Sets the document field used as the document reference. Every document must have this field.
+ * The type of this field in the document should be a string, if it is not a string it will be
+ * coerced into a string by calling toString.
+ *
+ * The default ref is 'id'.
+ *
+ * The ref should _not_ be changed during indexing, it should be set before any documents are
+ * added to the index. Changing it during indexing can lead to inconsistent results.
+ *
+ * @param {string} ref - The name of the reference field in the document.
+ */
+lunr.Builder.prototype.ref = function (ref) {
+  this._ref = ref
+}
+
+/**
+ * A function that is used to extract a field from a document.
+ * + * Lunr expects a field to be at the top level of a document, if however the field + * is deeply nested within a document an extractor function can be used to extract + * the right field for indexing. + * + * @callback fieldExtractor + * @param {object} doc - The document being added to the index. + * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. + * @example Extracting a nested field + * function (doc) { return doc.nested.field } + */ + +/** + * Adds a field to the list of document fields that will be indexed. Every document being + * indexed should have this field. Null values for this field in indexed documents will + * not cause errors but will limit the chance of that document being retrieved by searches. + * + * All fields should be added before adding documents to the index. Adding fields after + * a document has been indexed will have no effect on already indexed documents. + * + * Fields can be boosted at build time. This allows terms within that field to have more + * importance when ranking search results. Use a field boost to specify that matches within + * one field are more important than other fields. + * + * @param {string} fieldName - The name of a field to index in all documents. + * @param {object} attributes - Optional attributes associated with this field. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. + * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. + * @throws {RangeError} fieldName cannot contain unsupported characters '/' + */ +lunr.Builder.prototype.field = function (fieldName, attributes) { + if (/\//.test(fieldName)) { + throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") + } + + this._fields[fieldName] = attributes || {} +} + +/** + * A parameter to tune the amount of field length normalisation that is applied when + * calculating relevance scores. 
A value of 0 will completely disable any normalisation + * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b + * will be clamped to the range 0 - 1. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.b = function (number) { + if (number < 0) { + this._b = 0 + } else if (number > 1) { + this._b = 1 + } else { + this._b = number + } +} + +/** + * A parameter that controls the speed at which a rise in term frequency results in term + * frequency saturation. The default value is 1.2. Setting this to a higher value will give + * slower saturation levels, a lower value will result in quicker saturation. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.k1 = function (number) { + this._k1 = number +} + +/** + * Adds a document to the index. + * + * Before adding fields to the index the index should have been fully setup, with the document + * ref and all fields to index already having been specified. + * + * The document must have a field name as specified by the ref (by default this is 'id') and + * it should have all fields defined for indexing, though null or undefined values will not + * cause errors. + * + * Entire documents can be boosted at build time. Applying a boost to a document indicates that + * this document should rank higher in search results than other documents. + * + * @param {object} doc - The document to add to the index. + * @param {object} attributes - Optional attributes associated with this document. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. 
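+ * @example adding a document with a build-time boost (a sketch; the builder is assumed to have a 'title' field and the default 'id' ref)
+ * builder.add({ id: "1", title: "lunr" }, { boost: 2 })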
+ */ +lunr.Builder.prototype.add = function (doc, attributes) { + var docRef = doc[this._ref], + fields = Object.keys(this._fields) + + this._documents[docRef] = attributes || {} + this.documentCount += 1 + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i], + extractor = this._fields[fieldName].extractor, + field = extractor ? extractor(doc) : doc[fieldName], + tokens = this.tokenizer(field, { + fields: [fieldName] + }), + terms = this.pipeline.run(tokens), + fieldRef = new lunr.FieldRef (docRef, fieldName), + fieldTerms = Object.create(null) + + this.fieldTermFrequencies[fieldRef] = fieldTerms + this.fieldLengths[fieldRef] = 0 + + // store the length of this field for this document + this.fieldLengths[fieldRef] += terms.length + + // calculate term frequencies for this field + for (var j = 0; j < terms.length; j++) { + var term = terms[j] + + if (fieldTerms[term] == undefined) { + fieldTerms[term] = 0 + } + + fieldTerms[term] += 1 + + // add to inverted index + // create an initial posting if one doesn't exist + if (this.invertedIndex[term] == undefined) { + var posting = Object.create(null) + posting["_index"] = this.termIndex + this.termIndex += 1 + + for (var k = 0; k < fields.length; k++) { + posting[fields[k]] = Object.create(null) + } + + this.invertedIndex[term] = posting + } + + // add an entry for this term/fieldName/docRef to the invertedIndex + if (this.invertedIndex[term][fieldName][docRef] == undefined) { + this.invertedIndex[term][fieldName][docRef] = Object.create(null) + } + + // store all whitelisted metadata about this token in the + // inverted index + for (var l = 0; l < this.metadataWhitelist.length; l++) { + var metadataKey = this.metadataWhitelist[l], + metadata = term.metadata[metadataKey] + + if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { + this.invertedIndex[term][fieldName][docRef][metadataKey] = [] + } + + this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) + } + } 
+ + } +} + +/** + * Calculates the average document length for this index + * + * @private + */ +lunr.Builder.prototype.calculateAverageFieldLengths = function () { + + var fieldRefs = Object.keys(this.fieldLengths), + numberOfFields = fieldRefs.length, + accumulator = {}, + documentsWithField = {} + + for (var i = 0; i < numberOfFields; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + field = fieldRef.fieldName + + documentsWithField[field] || (documentsWithField[field] = 0) + documentsWithField[field] += 1 + + accumulator[field] || (accumulator[field] = 0) + accumulator[field] += this.fieldLengths[fieldRef] + } + + var fields = Object.keys(this._fields) + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i] + accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] + } + + this.averageFieldLength = accumulator +} + +/** + * Builds a vector space model of every document using lunr.Vector + * + * @private + */ +lunr.Builder.prototype.createFieldVectors = function () { + var fieldVectors = {}, + fieldRefs = Object.keys(this.fieldTermFrequencies), + fieldRefsLength = fieldRefs.length, + termIdfCache = Object.create(null) + + for (var i = 0; i < fieldRefsLength; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + fieldName = fieldRef.fieldName, + fieldLength = this.fieldLengths[fieldRef], + fieldVector = new lunr.Vector, + termFrequencies = this.fieldTermFrequencies[fieldRef], + terms = Object.keys(termFrequencies), + termsLength = terms.length + + + var fieldBoost = this._fields[fieldName].boost || 1, + docBoost = this._documents[fieldRef.docRef].boost || 1 + + for (var j = 0; j < termsLength; j++) { + var term = terms[j], + tf = termFrequencies[term], + termIndex = this.invertedIndex[term]._index, + idf, score, scoreWithPrecision + + if (termIdfCache[term] === undefined) { + idf = lunr.idf(this.invertedIndex[term], this.documentCount) + termIdfCache[term] = idf + } else { + idf = 
termIdfCache[term]
+      }
+
+      score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)
+      score *= fieldBoost
+      score *= docBoost
+      scoreWithPrecision = Math.round(score * 1000) / 1000
+      // Converts 1.23456789 to 1.234.
+      // Reducing the precision so that the vectors take up less
+      // space when serialised. Doing it now so that they behave
+      // the same before and after serialisation. Also, this is
+      // the fastest approach to reducing a number's precision in
+      // JavaScript.
+
+      fieldVector.insert(termIndex, scoreWithPrecision)
+    }
+
+    fieldVectors[fieldRef] = fieldVector
+  }
+
+  this.fieldVectors = fieldVectors
+}
+
+/**
+ * Creates a token set of all tokens in the index using lunr.TokenSet
+ *
+ * @private
+ */
+lunr.Builder.prototype.createTokenSet = function () {
+  this.tokenSet = lunr.TokenSet.fromArray(
+    Object.keys(this.invertedIndex).sort()
+  )
+}
+
+/**
+ * Builds the index, creating an instance of lunr.Index.
+ *
+ * This completes the indexing process and should only be called
+ * once all documents have been added to the index.
+ *
+ * @returns {lunr.Index}
+ */
+lunr.Builder.prototype.build = function () {
+  this.calculateAverageFieldLengths()
+  this.createFieldVectors()
+  this.createTokenSet()
+
+  return new lunr.Index({
+    invertedIndex: this.invertedIndex,
+    fieldVectors: this.fieldVectors,
+    tokenSet: this.tokenSet,
+    fields: Object.keys(this._fields),
+    pipeline: this.searchPipeline
+  })
+}
+
+/**
+ * Applies a plugin to the index builder.
+ *
+ * A plugin is a function that is called with the index builder as its context.
+ * Plugins can be used to customise or extend the behaviour of the index
+ * in some way. A plugin is just a function that encapsulates the custom
+ * behaviour that should be applied when building the index.
+ *
+ * The plugin function will be called with the index builder as its argument, additional
+ * arguments can also be passed when calling use.
The function will be called
+ * with the index builder as its context.
+ *
+ * @param {Function} fn - The plugin to apply.
+ */
+lunr.Builder.prototype.use = function (fn) {
+  var args = Array.prototype.slice.call(arguments, 1)
+  args.unshift(this)
+  fn.apply(this, args)
+}
+/**
+ * Contains and collects metadata about a matching document.
+ * A single instance of lunr.MatchData is returned as part of every
+ * lunr.Index~Result.
+ *
+ * @constructor
+ * @param {string} term - The term this match data is associated with
+ * @param {string} field - The field in which the term was found
+ * @param {object} metadata - The metadata recorded about this term in this field
+ * @property {object} metadata - A cloned collection of metadata associated with this document.
+ * @see {@link lunr.Index~Result}
+ */
+lunr.MatchData = function (term, field, metadata) {
+  var clonedMetadata = Object.create(null),
+      metadataKeys = Object.keys(metadata || {})
+
+  // Cloning the metadata to prevent the original
+  // being mutated during match data combination.
+  // Metadata is kept in an array within the inverted
+  // index so cloning the data can be done with
+  // Array#slice
+  for (var i = 0; i < metadataKeys.length; i++) {
+    var key = metadataKeys[i]
+    clonedMetadata[key] = metadata[key].slice()
+  }
+
+  this.metadata = Object.create(null)
+
+  if (term !== undefined) {
+    this.metadata[term] = Object.create(null)
+    this.metadata[term][field] = clonedMetadata
+  }
+}
+
+/**
+ * An instance of lunr.MatchData will be created for every term that matches a
+ * document. However only one instance is required in a lunr.Index~Result. This
+ * method combines metadata from another instance of lunr.MatchData with this
+ * object's metadata.
+ *
+ * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.
+ * @see {@link lunr.Index~Result} + */ +lunr.MatchData.prototype.combine = function (otherMatchData) { + var terms = Object.keys(otherMatchData.metadata) + + for (var i = 0; i < terms.length; i++) { + var term = terms[i], + fields = Object.keys(otherMatchData.metadata[term]) + + if (this.metadata[term] == undefined) { + this.metadata[term] = Object.create(null) + } + + for (var j = 0; j < fields.length; j++) { + var field = fields[j], + keys = Object.keys(otherMatchData.metadata[term][field]) + + if (this.metadata[term][field] == undefined) { + this.metadata[term][field] = Object.create(null) + } + + for (var k = 0; k < keys.length; k++) { + var key = keys[k] + + if (this.metadata[term][field][key] == undefined) { + this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] + } else { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) + } + + } + } + } +} + +/** + * Add metadata for a term/field pair to this instance of match data. 
+ * + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + */ +lunr.MatchData.prototype.add = function (term, field, metadata) { + if (!(term in this.metadata)) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = metadata + return + } + + if (!(field in this.metadata[term])) { + this.metadata[term][field] = metadata + return + } + + var metadataKeys = Object.keys(metadata) + + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + + if (key in this.metadata[term][field]) { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) + } else { + this.metadata[term][field][key] = metadata[key] + } + } +} +/** + * A lunr.Query provides a programmatic way of defining queries to be performed + * against a {@link lunr.Index}. + * + * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method + * so the query object is pre-initialized with the right index fields. + * + * @constructor + * @property {lunr.Query~Clause[]} clauses - An array of query clauses. + * @property {string[]} allFields - An array of all available fields in a lunr.Index. + */ +lunr.Query = function (allFields) { + this.clauses = [] + this.allFields = allFields +} + +/** + * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. + * + * This allows wildcards to be added to the beginning and end of a term without having to manually do any string + * concatenation. + * + * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. 
+ * + * @constant + * @default + * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour + * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists + * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with trailing wildcard + * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) + * @example query term with leading and trailing wildcard + * query.term('foo', { + * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING + * }) + */ + +lunr.Query.wildcard = new String ("*") +lunr.Query.wildcard.NONE = 0 +lunr.Query.wildcard.LEADING = 1 +lunr.Query.wildcard.TRAILING = 2 + +/** + * Constants for indicating what kind of presence a term must have in matching documents. + * + * @constant + * @enum {number} + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with required presence + * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) + */ +lunr.Query.presence = { + /** + * Term's presence in a document is optional, this is the default value. + */ + OPTIONAL: 1, + + /** + * Term's presence in a document is required, documents that do not contain + * this term will not be returned. + */ + REQUIRED: 2, + + /** + * Term's presence in a document is prohibited, documents that do contain + * this term will not be returned. + */ + PROHIBITED: 3 +} + +/** + * A single clause in a {@link lunr.Query} contains a term and details on how to + * match that term against a {@link lunr.Index}. + * + * @typedef {Object} lunr.Query~Clause + * @property {string[]} fields - The fields in an index this clause should be matched against. + * @property {number} [boost=1] - Any boost that should be applied when matching this clause. 
+ * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.
+ * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.
+ * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.
+ * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The term's presence in any matching documents.
+ */
+
+/**
+ * Adds a {@link lunr.Query~Clause} to this query.
+ *
+ * Unless the clause contains the fields to be matched, all fields will be matched. In addition
+ * a default boost of 1 is applied to the clause.
+ *
+ * @param {lunr.Query~Clause} clause - The clause to add to this query.
+ * @see lunr.Query~Clause
+ * @returns {lunr.Query}
+ */
+lunr.Query.prototype.clause = function (clause) {
+  if (!('fields' in clause)) {
+    clause.fields = this.allFields
+  }
+
+  if (!('boost' in clause)) {
+    clause.boost = 1
+  }
+
+  if (!('usePipeline' in clause)) {
+    clause.usePipeline = true
+  }
+
+  if (!('wildcard' in clause)) {
+    clause.wildcard = lunr.Query.wildcard.NONE
+  }
+
+  if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {
+    clause.term = "*" + clause.term
+  }
+
+  if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {
+    clause.term = "" + clause.term + "*"
+  }
+
+  if (!('presence' in clause)) {
+    clause.presence = lunr.Query.presence.OPTIONAL
+  }
+
+  this.clauses.push(clause)
+
+  return this
+}
+
+/**
+ * A negated query is one in which every clause has a presence of
+ * prohibited. These queries require some special processing to return
+ * the expected results.
+ *
+ * @returns boolean
+ */
+lunr.Query.prototype.isNegated = function () {
+  for (var i = 0; i < this.clauses.length; i++) {
+    if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {
+      return false
+    }
+  }
+
+  return true
+}
+
+/**
+ * Adds a term to the current query, under the covers this will add a {@link lunr.Query~Clause}
+ * to the list of clauses that make up this query.
+ *
+ * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion
+ * to a token or token-like string should be done before calling this method.
+ *
+ * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an
+ * array, each term in the array will share the same options.
+ *
+ * @param {object|object[]} term - The term(s) to add to the query.
+ * @param {object} [options] - Any additional properties to add to the query clause.
+ * @returns {lunr.Query}
+ * @see lunr.Query#clause
+ * @see lunr.Query~Clause
+ * @example adding a single term to a query
+ * query.term("foo")
+ * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard
+ * query.term("foo", {
+ *   fields: ["title"],
+ *   boost: 10,
+ *   wildcard: lunr.Query.wildcard.TRAILING
+ * })
+ * @example using lunr.tokenizer to convert a string to tokens before using them as terms
+ * query.term(lunr.tokenizer("foo bar"))
+ */
+lunr.Query.prototype.term = function (term, options) {
+  if (Array.isArray(term)) {
+    term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this)
+    return this
+  }
+
+  var clause = options || {}
+  clause.term = term.toString()
+
+  this.clause(clause)
+
+  return this
+}
+lunr.QueryParseError = function (message, start, end) {
+  this.name = "QueryParseError"
+  this.message = message
+  this.start = start
+  this.end = end
+}
+
+lunr.QueryParseError.prototype = new Error
+lunr.QueryLexer = function (str) {
+  this.lexemes = []
+  this.str = str
+  this.length
= str.length + this.pos = 0 + this.start = 0 + this.escapeCharPositions = [] +} + +lunr.QueryLexer.prototype.run = function () { + var state = lunr.QueryLexer.lexText + + while (state) { + state = state(this) + } +} + +lunr.QueryLexer.prototype.sliceString = function () { + var subSlices = [], + sliceStart = this.start, + sliceEnd = this.pos + + for (var i = 0; i < this.escapeCharPositions.length; i++) { + sliceEnd = this.escapeCharPositions[i] + subSlices.push(this.str.slice(sliceStart, sliceEnd)) + sliceStart = sliceEnd + 1 + } + + subSlices.push(this.str.slice(sliceStart, this.pos)) + this.escapeCharPositions.length = 0 + + return subSlices.join('') +} + +lunr.QueryLexer.prototype.emit = function (type) { + this.lexemes.push({ + type: type, + str: this.sliceString(), + start: this.start, + end: this.pos + }) + + this.start = this.pos +} + +lunr.QueryLexer.prototype.escapeCharacter = function () { + this.escapeCharPositions.push(this.pos - 1) + this.pos += 1 +} + +lunr.QueryLexer.prototype.next = function () { + if (this.pos >= this.length) { + return lunr.QueryLexer.EOS + } + + var char = this.str.charAt(this.pos) + this.pos += 1 + return char +} + +lunr.QueryLexer.prototype.width = function () { + return this.pos - this.start +} + +lunr.QueryLexer.prototype.ignore = function () { + if (this.start == this.pos) { + this.pos += 1 + } + + this.start = this.pos +} + +lunr.QueryLexer.prototype.backup = function () { + this.pos -= 1 +} + +lunr.QueryLexer.prototype.acceptDigitRun = function () { + var char, charCode + + do { + char = this.next() + charCode = char.charCodeAt(0) + } while (charCode > 47 && charCode < 58) + + if (char != lunr.QueryLexer.EOS) { + this.backup() + } +} + +lunr.QueryLexer.prototype.more = function () { + return this.pos < this.length +} + +lunr.QueryLexer.EOS = 'EOS' +lunr.QueryLexer.FIELD = 'FIELD' +lunr.QueryLexer.TERM = 'TERM' +lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' +lunr.QueryLexer.BOOST = 'BOOST' +lunr.QueryLexer.PRESENCE = 
'PRESENCE' + +lunr.QueryLexer.lexField = function (lexer) { + lexer.backup() + lexer.emit(lunr.QueryLexer.FIELD) + lexer.ignore() + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexTerm = function (lexer) { + if (lexer.width() > 1) { + lexer.backup() + lexer.emit(lunr.QueryLexer.TERM) + } + + lexer.ignore() + + if (lexer.more()) { + return lunr.QueryLexer.lexText + } +} + +lunr.QueryLexer.lexEditDistance = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexBoost = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.BOOST) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexEOS = function (lexer) { + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } +} + +// This matches the separator used when tokenising fields +// within a document. These should match otherwise it is +// not possible to search for some tokens within a document. +// +// It is possible for the user to change the separator on the +// tokenizer so it _might_ clash with any other of the special +// characters already used within the search string, e.g. :. +// +// This means that it is possible to change the separator in +// such a way that makes some words unsearchable using a search +// string. 
+lunr.QueryLexer.termSeparator = lunr.tokenizer.separator + +lunr.QueryLexer.lexText = function (lexer) { + while (true) { + var char = lexer.next() + + if (char == lunr.QueryLexer.EOS) { + return lunr.QueryLexer.lexEOS + } + + // Escape character is '\' + if (char.charCodeAt(0) == 92) { + lexer.escapeCharacter() + continue + } + + if (char == ":") { + return lunr.QueryLexer.lexField + } + + if (char == "~") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexEditDistance + } + + if (char == "^") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexBoost + } + + // "+" indicates term presence is required + // checking for length to ensure that only + // leading "+" are considered + if (char == "+" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + // "-" indicates term presence is prohibited + // checking for length to ensure that only + // leading "-" are considered + if (char == "-" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + if (char.match(lunr.QueryLexer.termSeparator)) { + return lunr.QueryLexer.lexTerm + } + } +} + +lunr.QueryParser = function (str, query) { + this.lexer = new lunr.QueryLexer (str) + this.query = query + this.currentClause = {} + this.lexemeIdx = 0 +} + +lunr.QueryParser.prototype.parse = function () { + this.lexer.run() + this.lexemes = this.lexer.lexemes + + var state = lunr.QueryParser.parseClause + + while (state) { + state = state(this) + } + + return this.query +} + +lunr.QueryParser.prototype.peekLexeme = function () { + return this.lexemes[this.lexemeIdx] +} + +lunr.QueryParser.prototype.consumeLexeme = function () { + var lexeme = this.peekLexeme() + this.lexemeIdx += 1 + return lexeme +} + +lunr.QueryParser.prototype.nextClause = function () { + var completedClause = this.currentClause + 
this.query.clause(completedClause) + this.currentClause = {} +} + +lunr.QueryParser.parseClause = function (parser) { + var lexeme = parser.peekLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.type) { + case lunr.QueryLexer.PRESENCE: + return lunr.QueryParser.parsePresence + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expected either a field or a term, found " + lexeme.type + + if (lexeme.str.length >= 1) { + errorMessage += " with value '" + lexeme.str + "'" + } + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } +} + +lunr.QueryParser.parsePresence = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.str) { + case "-": + parser.currentClause.presence = lunr.Query.presence.PROHIBITED + break + case "+": + parser.currentClause.presence = lunr.Query.presence.REQUIRED + break + default: + var errorMessage = "unrecognised presence operator'" + lexeme.str + "'" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term or field, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseField = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + if (parser.query.allFields.indexOf(lexeme.str) == -1) { + var possibleFields = 
parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), + errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.fields = [lexeme.str] + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseTerm = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + parser.currentClause.term = lexeme.str.toLowerCase() + + if (lexeme.str.indexOf("*") != -1) { + parser.currentClause.usePipeline = false + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseEditDistance = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var editDistance = parseInt(lexeme.str, 10) + + if 
(isNaN(editDistance)) { + var errorMessage = "edit distance must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.editDistance = editDistance + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseBoost = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var boost = parseInt(lexeme.str, 10) + + if (isNaN(boost)) { + var errorMessage = "boost must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.boost = boost + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, 
nextLexeme.start, nextLexeme.end) + } +} + + /** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ + ;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + root.lunr = factory() + } + }(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return lunr + })) +})(); diff --git a/search/main.js b/search/main.js new file mode 100644 index 0000000..a5e469d --- /dev/null +++ b/search/main.js @@ -0,0 +1,109 @@ +function getSearchTermFromLocation() { + var sPageURL = window.location.search.substring(1); + var sURLVariables = sPageURL.split('&'); + for (var i = 0; i < sURLVariables.length; i++) { + var sParameterName = sURLVariables[i].split('='); + if (sParameterName[0] == 'q') { + return decodeURIComponent(sParameterName[1].replace(/\+/g, '%20')); + } + } +} + +function joinUrl (base, path) { + if (path.substring(0, 1) === "/") { + // path starts with `/`. Thus it is absolute. 
+ return path; + } + if (base.substring(base.length-1) === "/") { + // base ends with `/` + return base + path; + } + return base + "/" + path; +} + +function escapeHtml (value) { + return value.replace(/&/g, '&amp;') + .replace(/"/g, '&quot;') + .replace(/</g, '&lt;') + .replace(/>/g, '&gt;'); +} + +function formatResult (location, title, summary) { + return '<article><h3><a href="' + joinUrl(base_url, location) + '">' + escapeHtml(title) + '</a></h3><p>' + escapeHtml(summary) + '</p></article>'; +} + +function displayResults (results) { + var search_results = document.getElementById("mkdocs-search-results"); + while (search_results.firstChild) { + search_results.removeChild(search_results.firstChild); + } + if (results.length > 0){ + for (var i=0; i < results.length; i++){ + var result = results[i]; + var html = formatResult(result.location, result.title, result.summary); + search_results.insertAdjacentHTML('beforeend', html); + } + } else { + var noResultsText = search_results.getAttribute('data-no-results-text'); + if (!noResultsText) { + noResultsText = "No results found"; + } + search_results.insertAdjacentHTML('beforeend', '<p>' + noResultsText + '</p>
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 0000000..23c0fac --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"HiveMind Community Documentation Welcome to the HiveMind Community Docs! HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system. 
With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS! HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. OpenVoiceOS itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS , as your central unit. Join Hivemind Matrix chat for general news, support and chit chat","title":"Home"},{"location":"#hivemind-community-documentation","text":"Welcome to the HiveMind Community Docs! HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system. With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS! HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. OpenVoiceOS itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS , as your central unit. Join Hivemind Matrix chat for general news, support and chit chat","title":"HiveMind Community Documentation"},{"location":"00_index/","text":"HiveMind Community Documentation Welcome to the HiveMind Community Docs! HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system. With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS! HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. 
OpenVoiceOS itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS , as your central unit. Join Hivemind Matrix chat for general news, support and chit chat","title":"HiveMind Community Documentation"},{"location":"00_index/#hivemind-community-documentation","text":"Welcome to the HiveMind Community Docs! HiveMind is a community-developed superset or extension of OpenVoiceOS, the open-source voice operating system. With HiveMind, you can extend one (or more, but usually just one!) instance of OpenVoiceOS to as many devices as you want, including devices that can't ordinarily run OpenVoiceOS! HiveMind's developers have successfully connected to OpenVoiceOS from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. OpenVoiceOS itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS , as your central unit. Join Hivemind Matrix chat for general news, support and chit chat","title":"HiveMind Community Documentation"},{"location":"01_quickstart/","text":"Quick Start Guide This guide will help you get started quickly with the HiveMind platform, allowing you to extend your OpenVoiceOS (OVOS) ecosystem across multiple devices, even with low-resource hardware. HiveMind lets you connect lightweight devices as satellites to a central OVOS hub, offering centralized control and fine-grained permissions. \ud83d\ude80 Installation To begin using HiveMind Core, you need to install the hivemind-core package on your OVOS device. This can be done via pip: pip install hivemind-core \ud83d\udef0\ufe0f Adding a Satellite Device Once the server is running, you'll need to add client credentials for each satellite device you want to connect. 
Run the following command to add a satellite device: hivemind-core add-client The output will show you important details like: Node ID Friendly Name Access Key Password Encryption Key (deprecated, only used for legacy clients) Provide these credentials on the client devices to enable the connection. \ud83d\udda5\ufe0f Running the HiveMind Server Start the HiveMind server to accept client connections on a specified port: hivemind-core listen --port 5678 The server will now listen for incoming satellite connections. \ud83d\udca1 hivemind-core needs to be running on the same device as OVOS \ud83d\udd11 Permissions HiveMind Core uses a flexible permissions system, where each client's permissions are customizable. By default: Only essential bus messages are allowed. Skills and intents are accessible but can be blacklisted or restricted. You can manage permissions for clients by using commands like allow-msg , blacklist-msg , allow-skill , and blacklist-skill . Example Use Cases: Basic AI Integration : Enable a simple client to send natural language instructions. Custom Permissions : Restrict an IoT device to only communicate with specific message types, such as temperature.set . HiveMind Core Commands Overview Here are the basic commands for managing clients and their permissions: Add a new client : hivemind-core add-client --name \"satellite_1\" --access-key \"mykey123\" --password \"mypass\" List all registered clients : hivemind-core list-clients Start listening for client connections : hivemind-core listen --port 5678 For detailed help on each command, use --help (e.g., hivemind-core add-client --help ).","title":"Quick start"},{"location":"01_quickstart/#quick-start-guide","text":"This guide will help you get started quickly with the HiveMind platform, allowing you to extend your OpenVoiceOS (OVOS) ecosystem across multiple devices, even with low-resource hardware. 
HiveMind lets you connect lightweight devices as satellites to a central OVOS hub, offering centralized control and fine-grained permissions.","title":"Quick Start Guide"},{"location":"01_quickstart/#installation","text":"To begin using HiveMind Core, you need to install the hivemind-core package on your OVOS device. This can be done via pip: pip install hivemind-core","title":"\ud83d\ude80 Installation"},{"location":"01_quickstart/#adding-a-satellite-device","text":"Once the server is running, you'll need to add client credentials for each satellite device you want to connect. Run the following command to add a satellite device: hivemind-core add-client The output will show you important details like: Node ID Friendly Name Access Key Password Encryption Key (deprecated, only used for legacy clients) Provide these credentials on the client devices to enable the connection.","title":"\ud83d\udef0\ufe0f Adding a Satellite Device"},{"location":"01_quickstart/#running-the-hivemind-server","text":"Start the HiveMind server to accept client connections on a specified port: hivemind-core listen --port 5678 The server will now listen for incoming satellite connections. \ud83d\udca1 hivemind-core needs to be running on the same device as OVOS","title":"\ud83d\udda5\ufe0f Running the HiveMind Server"},{"location":"01_quickstart/#permissions","text":"HiveMind Core uses a flexible permissions system, where each client's permissions are customizable. By default: Only essential bus messages are allowed. Skills and intents are accessible but can be blacklisted or restricted. You can manage permissions for clients by using commands like allow-msg , blacklist-msg , allow-skill , and blacklist-skill .","title":"\ud83d\udd11 Permissions"},{"location":"01_quickstart/#example-use-cases","text":"Basic AI Integration : Enable a simple client to send natural language instructions. 
Custom Permissions : Restrict an IoT device to only communicate with specific message types, such as temperature.set .","title":"Example Use Cases:"},{"location":"01_quickstart/#hivemind-core-commands-overview","text":"Here are the basic commands for managing clients and their permissions: Add a new client : hivemind-core add-client --name \"satellite_1\" --access-key \"mykey123\" --password \"mypass\" List all registered clients : hivemind-core list-clients Start listening for client connections : hivemind-core listen --port 5678 For detailed help on each command, use --help (e.g., hivemind-core add-client --help ).","title":"HiveMind Core Commands Overview"},{"location":"02_terminology/","text":"Terminology Before we delve into the depths of the Hivemind Protocol, let's familiarize ourselves with some key terms used within the ecosystem: Node : A device or software client that is part of the Hivemind network. Mind : A node that actively listens for connections and understands natural language commands. Minds communicate via BUS messages , authenticate other nodes, isolate connections, and authorize individual messages Fakecroft : A mind that imitates ovos-core without actually running it. often only handles a subset of BUS messages , usually only \"speak\" and \"recognizer_loop:utterance\" Terminal : A user-facing node that connects to a mind but doesn't accept connections itself. Bridge : A node that links an external service to a mind. Hive : A collection of interconnected nodes forming a collaborative network. Slave : A mind that connects to another mind and always accepts BUS messages from it. NOTE: A Terminal is like a Slave, but it is NOT a Mind Master Mind : The highest-level node in a hive that is not connected to any other nodes but receives connections from other nodes. 
The Collective : The collection of all Master Minds in the world","title":"Terminology"},{"location":"02_terminology/#terminology","text":"Before we delve into the depths of the Hivemind Protocol, let's familiarize ourselves with some key terms used within the ecosystem: Node : A device or software client that is part of the Hivemind network. Mind : A node that actively listens for connections and understands natural language commands. Minds communicate via BUS messages , authenticate other nodes, isolate connections, and authorize individual messages Fakecroft : A mind that imitates ovos-core without actually running it. often only handles a subset of BUS messages , usually only \"speak\" and \"recognizer_loop:utterance\" Terminal : A user-facing node that connects to a mind but doesn't accept connections itself. Bridge : A node that links an external service to a mind. Hive : A collection of interconnected nodes forming a collaborative network. Slave : A mind that connects to another mind and always accepts BUS messages from it. NOTE: A Terminal is like a Slave, but it is NOT a Mind Master Mind : The highest-level node in a hive that is not connected to any other nodes but receives connections from other nodes. The Collective : The collection of all Master Minds in the world","title":"Terminology"},{"location":"03_pairing/","text":"Pairing devices You can register clients in a Mind via command line or via audio Command Line Pairing First, you need to register the satellite devices in the HiveMind server $ hivemind-core add-client Credentials added to database! 
Node ID: 2 Friendly Name: HiveMind-Node-2 Access Key: 5a9e580a2773a262cbb23fe9759881ff Password: 9b247ca66c7cd2b6388ad49ca504279d Encryption Key: 4185240103de0770 WARNING: Encryption Key is deprecated, only use if your client does not support password And then set the identity file in the satellite device $ hivemind-client set-identity --key 5a9e580a2773a262cbb23fe9759881ff --password 9b247ca66c7cd2b6388ad49ca504279d --host 0.0.0.0 --port 5678 --siteid test identity saved: /home/miro/.config/hivemind/_identity.json check the created identity file if you like $ cat ~/.config/hivemind/_identity.json { \"password\": \"9b247ca66c7cd2b6388ad49ca504279d\", \"access_key\": \"5a9e580a2773a262cbb23fe9759881ff\", \"site_id\": \"test\", \"default_port\": 5678, \"default_master\": \"ws://0.0.0.0\" } test that a connection is possible using the identity file $ hivemind-client test-identity (...) 2024-05-20 21:22:28.003 - OVOS - hivemind_bus_client.client:__init__:112 - INFO - Session ID: 34d75c93-4e65-4ea9-b5f4-87169dcfda01 (...) == Identity successfully connected to HiveMind! If the identity test passed, then your satellite is paired with the Hive! 
Audio Pairing via GGWave \ud83d\udea7 This feature is a proof-of-concept / work-in-progress Data over sound for HiveMind hivemind-core and hivemind-voice-sat have hivemind-ggwave support pre-requisites: a device with a browser, eg a phone a hivemind-core device with mic and speaker, eg a mark2 a (unpaired) voice satellite device with mic and speaker, eg a raspberry pi all devices need to be in audible range, they each need to be able to listen to sounds emitted by each other workflow: when launching hivemind-core take note of the provided code, eg HMPSWD:ce357a6b59f6b1f9 copy paste the code and emit it via ggwave (see below) the voice satellite will decode the password, generate an access key and send it back via ggwave master adds a client with key + password, send an ack (containing host) via ggwave satellite devices get the ack then connect to received host manually exchanged string via browser with a talking button The Identity File The identity file is a crucial component in the HiveMind ecosystem, as it stores the necessary credentials and settings for a node (device) to connect and communicate within the HiveMind network. This file ensures that the node can authenticate and maintain secure connections with other nodes. While connection parameters can be set at launch time, this file provides a way to reuse them across the whole OS Contents of the identity file The identity file, typically located at ~/.config/hivemind/_identity.json , contains the following information: Field Description name A human-readable label for the node, which is not guaranteed to be unique. password The password used to generate a session AES key for secure communication within the HiveMind network. access_key A unique access key assigned to the node for identification and authentication. site_id An identifier for the physical location or context in which the node is operating. default_port The default port number used to connect to the HiveMind core. 
default_master The default host (address) of the HiveMind core that the node connects to. public_key The ASCII-encoded public PGP key used for authenticating the node within the HiveMind network. secret_key The path to the private PGP key file, which uniquely identifies the node and proves its identity. By maintaining these details in the identity file, nodes can securely and efficiently participate in the HiveMind network, facilitating a seamless and secure distributed communication environment. If a node needs to securely communicate or authenticate another (that isn't the master) it can do so via the public key. See the section for intercom messages for more details Groups of devices can also be targeted via their site_id , for example, we can propagate a speak message to announce dinner is ready or broadcast a bus message to order all devices in a certain area equipped with a camera to take a picture. Public Key The Public Key in the identity file is part of a PGP key pair used to uniquely identify the node. This key serves several purposes: Unique Node Identification: The PGP key uniquely identifies this node within the HiveMind network, ensuring that each node can be distinctly recognized. Inter-Node Authentication: Nodes can use the PGP key to authenticate each other, providing a layer of security that ensures only authorized nodes can communicate within the network. Network Independence: The PGP key allows nodes to identify each other regardless of the specific HiveMind core (mind) they are connected to. This means that even if nodes switch Minds, they can still recognize and authenticate each other using their PGP keys. Private Key The Private Key is the only way for a node to read a message encrypted with its corresponding public key. This file must be kept safe and private at all times! 
By default, the private key is stored in ~/.config/hivemind/HiveMindComs.asc If you believe your private key has been compromised, or simply want to change keys you can use the hivemind-client reset-pgp command","title":"Pairing"},{"location":"03_pairing/#pairing-devices","text":"You can register clients in a Mind via command line or via audio","title":"Pairing devices"},{"location":"03_pairing/#command-line-pairing","text":"First, you need to register the satellite devices in the HiveMind server $ hivemind-core add-client Credentials added to database! Node ID: 2 Friendly Name: HiveMind-Node-2 Access Key: 5a9e580a2773a262cbb23fe9759881ff Password: 9b247ca66c7cd2b6388ad49ca504279d Encryption Key: 4185240103de0770 WARNING: Encryption Key is deprecated, only use if your client does not support password And then set the identity file in the satellite device $ hivemind-client set-identity --key 5a9e580a2773a262cbb23fe9759881ff --password 9b247ca66c7cd2b6388ad49ca504279d --host 0.0.0.0 --port 5678 --siteid test identity saved: /home/miro/.config/hivemind/_identity.json check the created identity file if you like $ cat ~/.config/hivemind/_identity.json { \"password\": \"9b247ca66c7cd2b6388ad49ca504279d\", \"access_key\": \"5a9e580a2773a262cbb23fe9759881ff\", \"site_id\": \"test\", \"default_port\": 5678, \"default_master\": \"ws://0.0.0.0\" } test that a connection is possible using the identity file $ hivemind-client test-identity (...) 2024-05-20 21:22:28.003 - OVOS - hivemind_bus_client.client:__init__:112 - INFO - Session ID: 34d75c93-4e65-4ea9-b5f4-87169dcfda01 (...) == Identity successfully connected to HiveMind! 
If the identity test passed, then your satellite is paired with the Hive!","title":"Command Line Pairing"},{"location":"03_pairing/#audio-pairing-via-ggwave","text":"\ud83d\udea7 This feature is a proof-of-concept / work-in-progress Data over sound for HiveMind hivemind-core and hivemind-voice-sat have hivemind-ggwave support prerequisites: a device with a browser, e.g. a phone a hivemind-core device with mic and speaker, e.g. a mark2 an (unpaired) voice satellite device with mic and speaker, e.g. a raspberry pi all devices need to be in audible range, they each need to be able to listen to sounds emitted by each other workflow: when launching hivemind-core take note of the provided code, e.g. HMPSWD:ce357a6b59f6b1f9 copy-paste the code and emit it via ggwave (see below) the voice satellite will decode the password, generate an access key and send it back via ggwave the master adds a client with key + password and sends an ack (containing host) via ggwave satellite devices get the ack and then connect to the received host the code string is manually exchanged via a browser with a talking button","title":"Audio Pairing via GGWave"},{"location":"03_pairing/#the-identity-file","text":"The identity file is a crucial component in the HiveMind ecosystem, as it stores the necessary credentials and settings for a node (device) to connect and communicate within the HiveMind network. This file ensures that the node can authenticate and maintain secure connections with other nodes. While connection parameters can be set at launch time, this file provides a way to reuse them across the whole OS","title":"The Identity File"},{"location":"03_pairing/#contents-of-the-identity-file","text":"The identity file, typically located at ~/.config/hivemind/_identity.json , contains the following information: Field Description name A human-readable label for the node, which is not guaranteed to be unique. password The password used to generate a session AES key for secure communication within the HiveMind network. 
access_key A unique access key assigned to the node for identification and authentication. site_id An identifier for the physical location or context in which the node is operating. default_port The default port number used to connect to the HiveMind core. default_master The default host (address) of the HiveMind core that the node connects to. public_key The ASCII-encoded public PGP key used for authenticating the node within the HiveMind network. secret_key The path to the private PGP key file, which uniquely identifies the node and proves its identity. By maintaining these details in the identity file, nodes can securely and efficiently participate in the HiveMind network, facilitating a seamless and secure distributed communication environment. If a node needs to securely communicate or authenticate another (that isn't the master) it can do so via the public key. See the section on intercom messages for more details Groups of devices can also be targeted via their site_id , for example, we can propagate a speak message to announce dinner is ready or broadcast a bus message to order all devices in a certain area equipped with a camera to take a picture.","title":"Contents of the identity file"},{"location":"03_pairing/#public-key","text":"The Public Key in the identity file is part of a PGP key pair used to uniquely identify the node. This key serves several purposes: Unique Node Identification: The PGP key uniquely identifies this node within the HiveMind network, ensuring that each node can be distinctly recognized. Inter-Node Authentication: Nodes can use the PGP key to authenticate each other, providing a layer of security that ensures only authorized nodes can communicate within the network. Network Independence: The PGP key allows nodes to identify each other regardless of the specific HiveMind core (mind) they are connected to. 
This means that even if nodes switch Minds, they can still recognize and authenticate each other using their PGP keys.","title":"Public Key"},{"location":"03_pairing/#private-key","text":"The Private Key is the only way for a node to read a message encrypted with its corresponding public key. This file must be kept safe and private at all times! By default, the private key is stored in ~/.config/hivemind/HiveMindComs.asc If you believe your private key has been compromised, or simply want to change keys, you can use the hivemind-client reset-pgp command","title":"Private Key"},{"location":"04_plugins/","text":"OVOS Plugins Compatibility Hivemind leverages ovos-plugin-manager , bringing compatibility with hundreds of plugins. \ud83d\udca1 OVOS plugins can be used both on client and server side Plugin Type Description Documentation Microphone Captures voice input Microphone Documentation VAD Voice Activity Detection VAD Documentation WakeWord Detects wake words for interaction WakeWord Documentation STT Speech-to-text (STT) STT Documentation TTS Text-to-speech (TTS) TTS Documentation G2P Grapheme-to-phoneme (G2P), used to simulate mouth movements G2P Documentation Media Playback Enables media playback (e.g., \"play Metallica\") Media Playback Documentation OCP Plugins Provides playback support for URLs (e.g., YouTube) OCP Plugins Documentation Audio Transformers Processes audio before speech-to-text (STT) Audio Transformers Documentation Dialog Transformers Processes text before text-to-speech (TTS) Dialog Transformers Documentation TTS Transformers Processes audio after text-to-speech (TTS) TTS Transformers Documentation PHAL Provides platform-specific support (e.g., Mark 1) PHAL Documentation Client side plugins The tables below illustrate how plugins from the OVOS ecosystem relate to the various satellites and where they should be installed and configured Audio input : Supported Plugins Microphone VAD Wake Word STT HiveMind Voice Satellite \u2714\ufe0f (Required) 
\u2714\ufe0f (Required) \u2714\ufe0f (Required *) \u2714\ufe0f (Required) HiveMind Voice Relay \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) \ud83d\udce1 (Remote) HiveMind Microphone Satellite \u2714\ufe0f (Required) \u2714\ufe0f (Required) \ud83d\udce1 (Remote) \ud83d\udce1 (Remote) can be skipped with continuous listening mode Audio output : Supported Plugins TTS Media Playback OCP extractors HiveMind Voice Satellite \u2714\ufe0f (Required) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \ud83d\udce1 (Remote) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Microphone Satellite \ud83d\udce1 (Remote) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) Transformers : Supported Plugins Audio Utterance Metadata Dialog TTS HiveMind Voice Satellite \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \u274c (Unsupported) \ud83d\udea7 (TODO) \ud83d\udea7 (TODO) \ud83d\udea7 (TODO) \u274c (Unsupported) HiveMind Microphone Satellite \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Other : Supported Plugins G2P (mouth movements) PHAL HiveMind Voice Satellite \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \u274c (Unsupported) \u2714\ufe0f (Optional) HiveMind Microphone Satellite \u274c (Unsupported) \u2714\ufe0f (Optional) Server side plugins The tables below illustrate how plugins from the OVOS ecosystem relate to the various server setups and where they should be installed and configured Audio input : Supported Plugins Microphone VAD Wake Word STT Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) 
\u274c (Unsupported) Audio output : Supported Plugins TTS Media Playback OCP extractors Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u2714\ufe0f (Optional) Hivemind Sound Server \u2714\ufe0f (Required) \u274c (Unsupported) \u2714\ufe0f (Optional) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Transformers : Supported Plugins Audio Utterance Metadata Dialog TTS Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \ud83d\udea7 (TODO) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \ud83d\udea7 (TODO) Hivemind Persona Server \u274c (Unsupported) \ud83d\udea7 (TODO) \u274c (Unsupported) \ud83d\udea7 (TODO) \u274c (Unsupported) Other : Supported Plugins G2P (mouth movements) PHAL Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \u274c (Unsupported) \u274c (Unsupported) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported)","title":"Plugins"},{"location":"04_plugins/#ovos-plugins-compatibility","text":"Hivemind leverages ovos-plugin-manager , bringing compatibility with hundreds of plugins. 
\ud83d\udca1 OVOS plugins can be used both on client and server side Plugin Type Description Documentation Microphone Captures voice input Microphone Documentation VAD Voice Activity Detection VAD Documentation WakeWord Detects wake words for interaction WakeWord Documentation STT Speech-to-text (STT) STT Documentation TTS Text-to-speech (TTS) TTS Documentation G2P Grapheme-to-phoneme (G2P), used to simulate mouth movements G2P Documentation Media Playback Enables media playback (e.g., \"play Metallica\") Media Playback Documentation OCP Plugins Provides playback support for URLs (e.g., YouTube) OCP Plugins Documentation Audio Transformers Processes audio before speech-to-text (STT) Audio Transformers Documentation Dialog Transformers Processes text before text-to-speech (TTS) Dialog Transformers Documentation TTS Transformers Processes audio after text-to-speech (TTS) TTS Transformers Documentation PHAL Provides platform-specific support (e.g., Mark 1) PHAL Documentation","title":"OVOS Plugins Compatibility"},{"location":"04_plugins/#client-side-plugins","text":"The tables below illustrate how plugins from the OVOS ecosystem relate to the various satellites and where they should be installed and configured Audio input : Supported Plugins Microphone VAD Wake Word STT HiveMind Voice Satellite \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required *) \u2714\ufe0f (Required) HiveMind Voice Relay \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) \ud83d\udce1 (Remote) HiveMind Microphone Satellite \u2714\ufe0f (Required) \u2714\ufe0f (Required) \ud83d\udce1 (Remote) \ud83d\udce1 (Remote) can be skipped with continuous listening mode Audio output : Supported Plugins TTS Media Playback OCP extractors HiveMind Voice Satellite \u2714\ufe0f (Required) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \ud83d\udce1 (Remote) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Microphone Satellite \ud83d\udce1 
(Remote) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) Transformers : Supported Plugins Audio Utterance Metadata Dialog TTS HiveMind Voice Satellite \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \u274c (Unsupported) \ud83d\udea7 (TODO) \ud83d\udea7 (TODO) \ud83d\udea7 (TODO) \u274c (Unsupported) HiveMind Microphone Satellite \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Other : Supported Plugins G2P (mouth movements) PHAL HiveMind Voice Satellite \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) HiveMind Voice Relay \u274c (Unsupported) \u2714\ufe0f (Optional) HiveMind Microphone Satellite \u274c (Unsupported) \u2714\ufe0f (Optional)","title":"Client side plugins"},{"location":"04_plugins/#server-side-plugins","text":"The tables below illustrate how plugins from the OVOS ecosystem relate to the various server setups and where they should be installed and configured Audio input : Supported Plugins Microphone VAD Wake Word STT Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) \u2714\ufe0f (Required) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Audio output : Supported Plugins TTS Media Playback OCP extractors Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u2714\ufe0f (Optional) Hivemind Sound Server \u2714\ufe0f (Required) \u274c (Unsupported) \u2714\ufe0f (Optional) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Transformers : Supported Plugins Audio Utterance Metadata Dialog TTS Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \ud83d\udea7 (TODO) 
\u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \u2714\ufe0f (Optional) \ud83d\udea7 (TODO) Hivemind Persona Server \u274c (Unsupported) \ud83d\udea7 (TODO) \u274c (Unsupported) \ud83d\udea7 (TODO) \u274c (Unsupported) Other : Supported Plugins G2P (mouth movements) PHAL Hivemind Skills Server \u274c (Unsupported) \u274c (Unsupported) Hivemind Sound Server \u274c (Unsupported) \u274c (Unsupported) Hivemind Persona Server \u274c (Unsupported) \u274c (Unsupported)","title":"Server side plugins"},{"location":"04_protocol/","text":"Protocol The HiveMind Protocol enables seamless exchange of information and commands within a distributed network. It defines message types and their handling methods, serving as a transport protocol. While the protocol primarily operates with OpenVoiceOS (OVOS) messages, it is versatile enough to support other payloads. The protocol is categorized into two main roles: Listener Protocol and Client Protocol . Roles and Message Types Listener Protocol Accepts : BUS , SHARED_BUS , PROPAGATE , ESCALATE , INTERCOM Emits : BUS , PROPAGATE , BROADCAST , INTERCOM Client Protocol Accepts : BUS , PROPAGATE , BROADCAST , INTERCOM Emits : BUS , SHARED_BUS , PROPAGATE , ESCALATE , INTERCOM Permissions Permissions are based on a combination of: Access key Allowed Message types Blacklisted Intent types Blacklisted Skill IDs \ud83d\udca1 Use the hivemind-core package to authorize message types or blacklist intents/skills. Example : Allow the \"speak\" message type: $ hivemind-core allow-msg \"speak\" Payload Messages Payload messages encapsulate OpenVoiceOS Message objects, acting as carriers for information or commands. These are the \"cargo\" the HiveMind Protocol transports across the network. Integrations with external AI backends require middleware to process OVOS messages. See hivemind-persona for an example implementation. \u26a0\ufe0f All HiveMind servers are expected to handle natural language queries. 
At a minimum, the recognizer_loop:utterance OVOS message must be supported. \ud83d\udca1 Use the hivemind-websocket-client package to send a bus message from the command line BUS Message Purpose : Single-hop communication between slaves and masters. Behavior : A master receiving a BUS message checks global whitelists/blacklists and slave permissions. Authorized messages are injected into the master's OVOS-core bus. Direct responses from the master's OVOS-core are forwarded back to the originating slave. Command Line : $ hivemind-client send-mycroft --help Usage: hivemind-client send-mycroft [OPTIONS] send a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit. \ud83d\udca1 Valid payloads for OVOS can be found here Visualization : SHARED_BUS Message Purpose : Passive monitoring of a slave device's OVOS-core bus. Direction : Slave \u2192 Master. Behavior : Requires explicit configuration on the slave device. Similar to BUS , but for observation, not processing. \ud83d\udca1 This feature is typically enabled through the HiveMind Skill . Visualization : INTERCOM Message Messages may also be encrypted with a node's public_key , which ensures intermediate nodes are unable to read the message contents. An encrypted message is a regular hive message, but has the type \"INTERCOM\" and payload {\"ciphertext\": \"XXXXXXX\"} where \"ciphertext\" can only be decoded by the target Node, not by any intermediary. These messages are usually the payload of transport messages such as ESCALATE or PROPAGATE . 
\ud83d\udca1 Intermediate nodes do not know the contents of the message, nor who the recipient is. When a message needs to be sent securely, it is encrypted using the recipient node's public PGP key. This ensures that only the intended recipient, who possesses the corresponding private PGP key, can decrypt the message. After encryption, the message is signed with the sender's private PGP key. This provides authentication and integrity, ensuring that the message has not been tampered with and confirming the sender's identity. Upon receiving an encrypted message, the recipient node attempts to decrypt it using its private PGP key. If successful, the message payload is then processed and emitted internally. The target node's public key needs to be known beforehand if you want to send secret messages. Transport Messages Transport messages encapsulate another HiveMessage object as their payload. These types are particularly relevant for Nested Hives . BROADCAST Message Purpose : Multi-hop communication from master \u2192 slaves. Behavior : Disseminates messages to all connected slaves. Supports target_site_id for directing messages to specific nodes. Example : A master can make all slaves in site_id: \"kitchen\" speak a specific message. \ud83d\udca1 BROADCAST messages are typically sent by skills running in a hivemind server Visualization : ESCALATE Message Purpose : Multi-hop communication from slave \u2192 master. Behavior : Elevates messages up the authority chain for higher-level processing. 
Visualization : Command Line : $ hivemind-client escalate --help Usage: hivemind-client escalate [OPTIONS] escalate a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit. PROPAGATE Message Purpose : Multi-hop communication in both directions (master \u2194 slaves). Behavior : Ensures the message is delivered to all relevant nodes. Visualization : Command Line : $ hivemind-client propagate --help Usage: hivemind-client propagate [OPTIONS] propagate a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit. Protocol Features Feature Protocol v0 Protocol v1 JSON serialization \u2705 \u2705 Binary serialization \u274c \u2705 Pre-shared AES key \u2705 \u2705 Password handshake \u274c \u2705 PGP handshake \u274c \u2705 Zlib compression \u274c \u2705 \u26a0\ufe0f Protocol v0 is deprecated ! However some clients (e.g., HiveMind-Js) may not yet support Protocol Version 1.","title":"Transport"},{"location":"04_protocol/#protocol","text":"The HiveMind Protocol enables seamless exchange of information and commands within a distributed network. It defines message types and their handling methods, serving as a transport protocol. 
While the protocol primarily operates with OpenVoiceOS (OVOS) messages, it is versatile enough to support other payloads. The protocol is categorized into two main roles: Listener Protocol and Client Protocol .","title":"Protocol"},{"location":"04_protocol/#roles-and-message-types","text":"","title":"Roles and Message Types"},{"location":"04_protocol/#listener-protocol","text":"Accepts : BUS , SHARED_BUS , PROPAGATE , ESCALATE , INTERCOM Emits : BUS , PROPAGATE , BROADCAST , INTERCOM","title":"Listener Protocol"},{"location":"04_protocol/#client-protocol","text":"Accepts : BUS , PROPAGATE , BROADCAST , INTERCOM Emits : BUS , SHARED_BUS , PROPAGATE , ESCALATE , INTERCOM","title":"Client Protocol"},{"location":"04_protocol/#permissions","text":"Permissions are based on a combination of: Access key Allowed Message types Blacklisted Intent types Blacklisted Skill IDs \ud83d\udca1 Use the hivemind-core package to authorize message types or blacklist intents/skills. Example : Allow the \"speak\" message type: $ hivemind-core allow-msg \"speak\"","title":"Permissions"},{"location":"04_protocol/#payload-messages","text":"Payload messages encapsulate OpenVoiceOS Message objects, acting as carriers for information or commands. These are the \"cargo\" the HiveMind Protocol transports across the network. Integrations with external AI backends require middleware to process OVOS messages. See hivemind-persona for an example implementation. \u26a0\ufe0f All HiveMind servers are expected to handle natural language queries. At a minimum, the recognizer_loop:utterance OVOS message must be supported. \ud83d\udca1 Use the hivemind-websocket-client package to send a bus message from the command line","title":"Payload Messages"},{"location":"04_protocol/#bus-message","text":"Purpose : Single-hop communication between slaves and masters. Behavior : A master receiving a BUS message checks global whitelists/blacklists and slave permissions. 
Authorized messages are injected into the master's OVOS-core bus. Direct responses from the master's OVOS-core are forwarded back to the originating slave. Command Line : $ hivemind-client send-mycroft --help Usage: hivemind-client send-mycroft [OPTIONS] send a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit. \ud83d\udca1 Valid payloads for OVOS can be found here Visualization :","title":"BUS Message"},{"location":"04_protocol/#shared_bus-message","text":"Purpose : Passive monitoring of a slave device's OVOS-core bus. Direction : Slave \u2192 Master. Behavior : Requires explicit configuration on the slave device. Similar to BUS , but for observation, not processing. \ud83d\udca1 This feature is typically enabled through the HiveMind Skill . Visualization :","title":"SHARED_BUS Message"},{"location":"04_protocol/#intercom-message","text":"Messages may also be encrypted with a node's public_key , which ensures intermediate nodes are unable to read the message contents. An encrypted message is a regular hive message, but has the type \"INTERCOM\" and payload {\"ciphertext\": \"XXXXXXX\"} where \"ciphertext\" can only be decoded by the target Node, not by any intermediary. These messages are usually the payload of transport messages such as ESCALATE or PROPAGATE . \ud83d\udca1 Intermediate nodes do not know the contents of the message, nor who the recipient is. When a message needs to be sent securely, it is encrypted using the recipient node's public PGP key. 
This ensures that only the intended recipient, who possesses the corresponding private PGP key, can decrypt the message. After encryption, the message is signed with the sender's private PGP key. This provides authentication and integrity, ensuring that the message has not been tampered with and confirming the sender's identity. Upon receiving an encrypted message, the recipient node attempts to decrypt it using its private PGP key. If successful, the message payload is then processed and emitted internally. The target node's public key needs to be known beforehand if you want to send secret messages","title":"INTERCOM Message"},{"location":"04_protocol/#transport-messages","text":"Transport messages encapsulate another HiveMessage object as their payload. These types are particularly relevant for Nested Hives .","title":"Transport Messages"},{"location":"04_protocol/#broadcast-message","text":"Purpose : Multi-hop communication from master \u2192 slaves. Behavior : Disseminates messages to all connected slaves. Supports target_site_id for directing messages to specific nodes. Example : A master can make all slaves in site_id: \"kitchen\" speak a specific message. \ud83d\udca1 BROADCAST messages are typically sent by skills running in a hivemind server Visualization :","title":"BROADCAST Message"},{"location":"04_protocol/#escalate-message","text":"Purpose : Multi-hop communication from slave \u2192 master. Behavior : Elevates messages up the authority chain for higher-level processing. 
Visualization : Command Line : $ hivemind-client escalate --help Usage: hivemind-client escalate [OPTIONS] escalate a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit.","title":"ESCALATE Message"},{"location":"04_protocol/#propagate-message","text":"Purpose : Multi-hop communication in both directions (master \u2194 slaves). Behavior : Ensures the message is delivered to all relevant nodes. Visualization : Command Line : $ hivemind-client propagate --help Usage: hivemind-client propagate [OPTIONS] propagate a single mycroft message Options: --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --siteid TEXT location identifier for message.context (default read from identity file) --msg TEXT ovos message type to inject --payload TEXT ovos message.data json --help Show this message and exit.","title":"PROPAGATE Message"},{"location":"04_protocol/#protocol-features","text":"Feature Protocol v0 Protocol v1 JSON serialization \u2705 \u2705 Binary serialization \u274c \u2705 Pre-shared AES key \u2705 \u2705 Password handshake \u274c \u2705 PGP handshake \u274c \u2705 Zlib compression \u274c \u2705 \u26a0\ufe0f Protocol v0 is deprecated ! 
However, some clients (e.g., HiveMind-Js) may not yet support Protocol Version 1.","title":"Protocol Features"},{"location":"05_presence/","text":"Auto Discovery Hivemind-presence is a utility to enable auto discovery of HiveMind nodes in your network Command line usage $ hivemind-presence --help Usage: hivemind-presence [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. Commands: announce Advertise node in the local network scan scan for hivemind nodes in the local network Announce your HiveMind node in your LAN via UPnP and Zeroconf $ hivemind-presence announce --help Usage: hivemind-presence announce [OPTIONS] Advertise node in the local network Options: --port INTEGER HiveMind port number (default: 5678) --name TEXT friendly device name (default: HiveMind-Node) --service-type TEXT HiveMind service type (default: HiveMind-websocket) --zeroconf BOOLEAN advertise via zeroconf (default: True) --upnp BOOLEAN advertise via UPNP (default: False) --help Show this message and exit. Scan for HiveMind nodes in your LAN via UPnP and Zeroconf $ hivemind-presence scan --help Usage: hivemind-presence scan [OPTIONS] scan for hivemind nodes in the local network Options: --zeroconf BOOLEAN scan via zeroconf (default: True) --upnp BOOLEAN scan via UPNP (default: False) --service-type TEXT HiveMind service type (default: HiveMind-websocket) --help Show this message and exit. 
$ hivemind-presence scan HiveMind Nodes \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Friendly Name \u2503 Host \u2503 Port \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 living_room \u2502 192.168.1.9 \u2502 5678 \u2502 \u2502 kitchen \u2502 192.168.1.13 \u2502 5678 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2518","title":"Auto Discovery"},{"location":"05_presence/#auto-discovery","text":"Hivemind-presence is a utility to enable auto discovery of HiveMind nodes in your network","title":"Auto Discovery"},{"location":"05_presence/#command-line-usage","text":"$ hivemind-presence --help Usage: hivemind-presence [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. Commands: announce Advertise node in the local network scan scan for hivemind nodes in the local network Announce your HiveMind node in your LAN via UPnP and Zeroconf $ hivemind-presence announce --help Usage: hivemind-presence announce [OPTIONS] Advertise node in the local network Options: --port INTEGER HiveMind port number (default: 5678) --name TEXT friendly device name (default: HiveMind-Node) --service-type TEXT HiveMind service type (default: HiveMind-websocket) --zeroconf BOOLEAN advertise via zeroconf (default: True) --upnp BOOLEAN advertise via UPNP (default: False) --help Show this message and exit. 
Scan for HiveMind nodes in your LAN via UPnP and Zeroconf $ hivemind-presence scan --help Usage: hivemind-presence scan [OPTIONS] scan for hivemind nodes in the local network Options: --zeroconf BOOLEAN scan via zeroconf (default: True) --upnp BOOLEAN scan via UPNP (default: False) --service-type TEXT HiveMind service type (default: HiveMind-websocket) --help Show this message and exit. $ hivemind-presence scan HiveMind Nodes \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Friendly Name \u2503 Host \u2503 Port \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 living_room \u2502 192.168.1.9 \u2502 5678 \u2502 \u2502 kitchen \u2502 192.168.1.13 \u2502 5678 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2518","title":"Command line usage"},{"location":"06_skills_server/","text":"OpenVoiceOS Skills Server Hivemind-core is the reference integration with OpenVoiceOS \ud83d\udca1 For a minimal install you only need hivemind-core , ovos-core and ovos-messagebus Install pip install hivemind-core Usage Everything is done via the hivemind-core command, see pairing for more info $ hivemind-core --help Usage: hivemind-core [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. 
Commands: add-client add credentials for a client allow-msg allow message types sent from a client delete-client remove credentials for a client list-clients list clients and credentials listen start listening for HiveMind connections $ hivemind-core listen --help Usage: hivemind-core listen [OPTIONS] start listening for HiveMind connections Options: --host TEXT HiveMind host --port INTEGER HiveMind port number --ssl BOOLEAN use wss:// --cert_dir TEXT HiveMind SSL certificate directory --cert_name TEXT HiveMind SSL certificate file name --db-backend [redis|json|sqlite] Select the database backend to use. Options: redis, sqlite, json. --db-name TEXT [json/sqlite] The name for the database file. ~/.cache/hivemind-core/{name} --db-folder TEXT [json/sqlite] The subfolder where database files are stored. ~/.cache/{db_folder}} --redis-host TEXT [redis] Host for Redis. Default is localhost. --redis-port INTEGER [redis] Port for Redis. Default is 6379. --redis-password TEXT [redis] Password for Redis. Default None --help Show this message and exit. Why HiveMind? HiveMind offers a decentralized solution for OVOS, with features such as secure communication, device integration, and protocol transparency. Here's what it brings to the table: HiveMind as an OVOS Add-on Start with OVOS by installing ovos-core , or use a Mycroft device . Then, run hivemind-core to enable HiveMind functionality. This transforms your OVOS node into a connected system with the \"brain\" of HiveMind. Decentralizing OVOS-Core With HiveMind, thin clients like the voice satellite can connect without running full OVOS software. This allows for multiple access points (e.g., microphones across your home) while keeping the core in a central location. Encrypted Communication HiveMind supports SSL-encrypted communication, eliminating the need for manual certificate management. It auto-generates self-signed certificates for secure, encrypted connections between devices. 
MessageBus Authentication & Security HiveMind enforces authentication for the message bus, ensuring only authorized clients can connect. This enhances privacy and prevents unauthorized access, unlike traditional setups where the message bus is open. Exposing OVOS to the Web Safely HiveMind can expose your OVOS instance securely over the web. By using the Flask chatroom template , you can interact with OVOS remotely while maintaining privacy and security. Protocol for Integration HiveMind allows integration with external platforms like Android, Mattermost, or Twitch. Whether you want to turn OVOS into a chatbot or integrate it with other services, HiveMind provides the protocol for seamless interaction. Key Features & Setup HiveMind in Action: Devices Connecting: Install the HiveMind CLI and register with your OVOS node to connect devices across your network. Decentralization: Use lightweight devices like Raspberry Pi with HiveMind to extend OVOS functionality across rooms. Encryption & Authentication: Safely transmit data over SSL, with built-in encryption and message authentication. Web Exposure: Use HiveMind\u2019s secure web interface to interact with OVOS remotely. Chat Integrations: Install bridges like the HackChat or Mattermost bridges to bring OVOS to chat platforms. 
By leveraging HiveMind's features, you can transform OVOS into a flexible, decentralized, and secure platform, capable of handling a wide variety of use cases and integrations.","title":"Skills Server"},{"location":"06_skills_server/#openvoiceos-skills-server","text":"Hivemind-core is the reference integration with OpenVoiceOS \ud83d\udca1 For a minimal install you only need hivemind-core , ovos-core and ovos-messagebus","title":"OpenVoiceOS Skills Server"},{"location":"06_skills_server/#install","text":"pip install hivemind-core","title":"Install"},{"location":"06_skills_server/#usage","text":"Everything is done via the hivemind-core command; see pairing for more info $ hivemind-core --help Usage: hivemind-core [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit. Commands: add-client add credentials for a client allow-msg allow message types sent from a client delete-client remove credentials for a client list-clients list clients and credentials listen start listening for HiveMind connections $ hivemind-core listen --help Usage: hivemind-core listen [OPTIONS] start listening for HiveMind connections Options: --host TEXT HiveMind host --port INTEGER HiveMind port number --ssl BOOLEAN use wss:// --cert_dir TEXT HiveMind SSL certificate directory --cert_name TEXT HiveMind SSL certificate file name --db-backend [redis|json|sqlite] Select the database backend to use. Options: redis, sqlite, json. --db-name TEXT [json/sqlite] The name for the database file. ~/.cache/hivemind-core/{name} --db-folder TEXT [json/sqlite] The subfolder where database files are stored. ~/.cache/{db_folder} --redis-host TEXT [redis] Host for Redis. Default is localhost. --redis-port INTEGER [redis] Port for Redis. Default is 6379. --redis-password TEXT [redis] Password for Redis. 
Default None --help Show this message and exit.","title":"Usage"},{"location":"06_skills_server/#why-hivemind","text":"HiveMind offers a decentralized solution for OVOS, with features such as secure communication, device integration, and protocol transparency. Here's what it brings to the table: HiveMind as an OVOS Add-on Start with OVOS by installing ovos-core , or use a Mycroft device . Then, run hivemind-core to enable HiveMind functionality. This transforms your OVOS node into a connected system with the \"brain\" of HiveMind. Decentralizing OVOS-Core With HiveMind, thin clients like the voice satellite can connect without running full OVOS software. This allows for multiple access points (e.g., microphones across your home) while keeping the core in a central location. Encrypted Communication HiveMind supports SSL-encrypted communication, eliminating the need for manual certificate management. It auto-generates self-signed certificates for secure, encrypted connections between devices. MessageBus Authentication & Security HiveMind enforces authentication for the message bus, ensuring only authorized clients can connect. This enhances privacy and prevents unauthorized access, unlike traditional setups where the message bus is open. Exposing OVOS to the Web Safely HiveMind can expose your OVOS instance securely over the web. By using the Flask chatroom template , you can interact with OVOS remotely while maintaining privacy and security. Protocol for Integration HiveMind allows integration with external platforms like Android, Mattermost, or Twitch. Whether you want to turn OVOS into a chatbot or integrate it with other services, HiveMind provides the protocol for seamless interaction.","title":"Why HiveMind?"},{"location":"06_skills_server/#key-features-setup","text":"HiveMind in Action: Devices Connecting: Install the HiveMind CLI and register with your OVOS node to connect devices across your network. 
Decentralization: Use lightweight devices like Raspberry Pi with HiveMind to extend OVOS functionality across rooms. Encryption & Authentication: Safely transmit data over SSL, with built-in encryption and message authentication. Web Exposure: Use HiveMind\u2019s secure web interface to interact with OVOS remotely. Chat Integrations: Install bridges like the HackChat or Mattermost bridges to bring OVOS to chat platforms. By leveraging HiveMind's features, you can transform OVOS into a flexible, decentralized, and secure platform, capable of handling a wide variety of use cases and integrations.","title":"Key Features & Setup"},{"location":"06_sound_server/","text":"HiveMind Sound Server hivemind-listener extends hivemind-core and integrates with ovos-simple-listener , enabling audio-based communication with advanced features for secure, distributed voice assistant functionality . \ud83d\udca1 If you are running a home server this is the best option, you only need to install hivemind-listener , ovos-core and ovos-messagebus . \u26a0\ufe0f If running on a device that is also a full OVOS assistant by itself you should use hivemind-core instead Key Features of HiveMind Listener Audio Stream Handling : Accepts encrypted binary audio streams, performing WakeWord detection , Voice Activity Detection (VAD) , Speech-to-Text (STT) , and Text-to-Speech (TTS) directly on the hivemind-listener instance. (Lightweight clients like hivemind-mic-satellite only run a microphone and VAD plugin.) STT Service : Provides STT via the hivemind-websocket-client , accepting Base64-encoded audio inputs. TTS Service : Provides TTS via the hivemind-websocket-client , returning Base64-encoded audio outputs. Secure Plugin Access : Running TTS/STT via HiveMind Listener requires an access key, offering fine-grained access control compared to non-authenticated server plugins. 
Usage Install HiveMind Listener : pip install hivemind-listener Start the HiveMind Listener : $ hivemind-listener --help Usage: hivemind-listener [OPTIONS] Run the HiveMind Listener with configurable plugins. If a plugin is not specified, the defaults from mycroft.conf will be used. mycroft.conf will be loaded as usual for plugin settings. Options: --wakeword TEXT Specify the wake word for the listener. Default is 'hey_mycroft'. --stt-plugin TEXT Specify the STT plugin to use. --tts-plugin TEXT Specify the TTS plugin to use. --vad-plugin TEXT Specify the VAD plugin to use. --dialog-transformers TEXT dialog transformer plugins to load. Installed plugins: None --utterance-transformers TEXT utterance transformer plugins to load. Installed plugins: ['ovos-utterance-plugin-cancel'] --metadata-transformers TEXT metadata transformer plugins to load. Installed plugins: None --ovos_bus_address TEXT Open Voice OS bus address --ovos_bus_port INTEGER Open Voice OS bus port number --host TEXT HiveMind host --port INTEGER HiveMind port number --ssl BOOLEAN use wss:// --cert_dir TEXT HiveMind SSL certificate directory --cert_name TEXT HiveMind SSL certificate file name --db-backend [redis|json|sqlite] Select the database backend to use. Options: redis, sqlite, json. --db-name TEXT [json/sqlite] The name for the database file. ~/.cache/hivemind-core/{name} --db-folder TEXT [json/sqlite] The subfolder where database files are stored. ~/.cache/{db_folder}} --redis-host TEXT [redis] Host for Redis. Default is localhost. --redis-port INTEGER [redis] Port for Redis. Default is 6379. --redis-password TEXT [redis] Password for Redis. Default None --help Show this message and exit. This command will run the HiveMind Listener, with configurable plugins for wakeword detection, STT, TTS, and VAD, as well as access control via SSL. Example Use Cases Microphone Satellite : Use hivemind-mic-satellite to stream raw audio to the hivemind-listener . 
Microphones handle audio capture and VAD, while the Listener manages WakeWord, STT, and TTS processing. Authenticated STT/TTS Services : Connect clients securely using access keys for transcribing or synthesizing audio via the HiveMind Listener, ensuring robust access control.","title":"Sound Server"},{"location":"06_sound_server/#hivemind-sound-server","text":"hivemind-listener extends hivemind-core and integrates with ovos-simple-listener , enabling audio-based communication with advanced features for secure, distributed voice assistant functionality . \ud83d\udca1 If you are running a home server this is the best option, you only need to install hivemind-listener , ovos-core and ovos-messagebus . \u26a0\ufe0f If running on a device that is also a full OVOS assistant by itself you should use hivemind-core instead","title":"HiveMind Sound Server"},{"location":"06_sound_server/#key-features-of-hivemind-listener","text":"Audio Stream Handling : Accepts encrypted binary audio streams, performing WakeWord detection , Voice Activity Detection (VAD) , Speech-to-Text (STT) , and Text-to-Speech (TTS) directly on the hivemind-listener instance. (Lightweight clients like hivemind-mic-satellite only run a microphone and VAD plugin.) STT Service : Provides STT via the hivemind-websocket-client , accepting Base64-encoded audio inputs. TTS Service : Provides TTS via the hivemind-websocket-client , returning Base64-encoded audio outputs. Secure Plugin Access : Running TTS/STT via HiveMind Listener requires an access key, offering fine-grained access control compared to non-authenticated server plugins.","title":"Key Features of HiveMind Listener"},{"location":"06_sound_server/#usage","text":"Install HiveMind Listener : pip install hivemind-listener Start the HiveMind Listener : $ hivemind-listener --help Usage: hivemind-listener [OPTIONS] Run the HiveMind Listener with configurable plugins. If a plugin is not specified, the defaults from mycroft.conf will be used. 
mycroft.conf will be loaded as usual for plugin settings. Options: --wakeword TEXT Specify the wake word for the listener. Default is 'hey_mycroft'. --stt-plugin TEXT Specify the STT plugin to use. --tts-plugin TEXT Specify the TTS plugin to use. --vad-plugin TEXT Specify the VAD plugin to use. --dialog-transformers TEXT dialog transformer plugins to load. Installed plugins: None --utterance-transformers TEXT utterance transformer plugins to load. Installed plugins: ['ovos-utterance-plugin-cancel'] --metadata-transformers TEXT metadata transformer plugins to load. Installed plugins: None --ovos_bus_address TEXT Open Voice OS bus address --ovos_bus_port INTEGER Open Voice OS bus port number --host TEXT HiveMind host --port INTEGER HiveMind port number --ssl BOOLEAN use wss:// --cert_dir TEXT HiveMind SSL certificate directory --cert_name TEXT HiveMind SSL certificate file name --db-backend [redis|json|sqlite] Select the database backend to use. Options: redis, sqlite, json. --db-name TEXT [json/sqlite] The name for the database file. ~/.cache/hivemind-core/{name} --db-folder TEXT [json/sqlite] The subfolder where database files are stored. ~/.cache/{db_folder}} --redis-host TEXT [redis] Host for Redis. Default is localhost. --redis-port INTEGER [redis] Port for Redis. Default is 6379. --redis-password TEXT [redis] Password for Redis. Default None --help Show this message and exit. This command will run the HiveMind Listener, with configurable plugins for wakeword detection, STT, TTS, and VAD, as well as access control via SSL.","title":"Usage"},{"location":"06_sound_server/#example-use-cases","text":"Microphone Satellite : Use hivemind-mic-satellite to stream raw audio to the hivemind-listener . Microphones handle audio capture and VAD, while the Listener manages WakeWord, STT, and TTS processing. 
Authenticated STT/TTS Services : Connect clients securely using access keys for transcribing or synthesizing audio via the HiveMind Listener, ensuring robust access control.","title":"Example Use Cases"},{"location":"07_homeassistant/","text":"HomeAssistant Integration \u26a0\ufe0f UNMAINTAINED : currently not functional This component will set up the following actions. Platform Description notify Send a notification to a HiveMind Voice Assistant Install Using HACS install from Github HiveMind setup Create credentials and allow the speak message to be injected; on the HiveMind side you can authorize message_types via the hivemind-core package $ hivemind-core allow-msg \"speak\" Configuration In configuration.yaml : notify: - platform: hivemind name: mark2 host: wss://192.168.1.8 port: 5678 self_signed: True key: a821bc4b34590a002570f0ed7808b886 password: 31e6a37e472a657609ef33f237d5b286 Then call notify.mark2 with a message you'd like the HiveMind Voice Assistant to speak. Manual Installation Using the tool of your choice, open the directory (folder) for your HA configuration (where you find configuration.yaml ). If you do not have a custom_components directory (folder) there, you need to create it. In the custom_components directory (folder) create a new folder called hivemind . Download all the files from the custom_components/hivemind/ directory (folder) in this repository . Place the files you downloaded in the new directory (folder) you created. 
Restart Home Assistant In the HA UI go to \"Configuration\" -> \"Integrations\" click \"+\" and search for \"HiveMind Integration\" Using your HA configuration directory (folder) as a starting point you should now also have this: custom_components/hivemind/translations/en.json custom_components/hivemind/__init__.py custom_components/hivemind/const.py custom_components/hivemind/manifest.json custom_components/hivemind/notify.py","title":"Home Assistant"},{"location":"07_homeassistant/#homeassistant-integration","text":"\u26a0\ufe0f UNMAINTAINED : currently not functional This component will set up the following actions. Platform Description notify Send a notification to a HiveMind Voice Assistant","title":"HomeAssistant Integration"},{"location":"07_homeassistant/#install","text":"Using HACS install from Github","title":"Install"},{"location":"07_homeassistant/#hivemind-setup","text":"create credentials and allow the speak message to be injected hivemind side you can authorize message_types via the hivemind-core package $hivemind-core allow-msg \"speak\"","title":"HiveMind setup"},{"location":"07_homeassistant/#configuration","text":"In configuration.yaml : notify: - platform: hivemind name: mark2 host: wss://192.168.1.8 port: 5678 self_signed: True key: a821bc4b34590a002570f0ed7808b886 password: 31e6a37e472a657609ef33f237d5b286 Then call notify.mark2 with a message you'd like the HiveMind Voice Assistant to speak.","title":"Configuration"},{"location":"07_homeassistant/#manual-installation","text":"Using the tool of choice open the directory (folder) for your HA configuration (where you find configuration.yaml ). If you do not have a custom_components directory (folder) there, you need to create it. In the custom_components directory (folder) create a new folder called hivemind . Download all the files from the custom_components/hivemind/ directory (folder) in this repository . Place the files you downloaded in the new directory (folder) you created. 
Restart Home Assistant In the HA UI go to \"Configuration\" -> \"Integrations\" click \"+\" and search for \"HiveMind Integration\" Using your HA configuration directory (folder) as a starting point you should now also have this: custom_components/hivemind/translations/en.json custom_components/hivemind/__init__.py custom_components/hivemind/const.py custom_components/hivemind/manifest.json custom_components/hivemind/notify.py","title":"Manual Installation"},{"location":"07_micsat/","text":"HiveMind Microphone Satellite OpenVoiceOS Microphone Satellite, connect to HiveMind Listener A super lightweight version of voice-satellite ; only the Microphone and VAD plugins run on the mic-satellite, voice activity is streamed to hivemind-listener and all the processing happens there Server requirements \u26a0\ufe0f hivemind-listener is required server side; the default hivemind-core does not provide STT and TTS capabilities. Install Install with pip $ pip install hivemind-mic-satellite Configuration The mic satellite is built on top of ovos-plugin-manager ; it uses the same OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugins: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD PHAL Platform/Hardware Abstraction Layer No PHAL G2P Generate visemes (mouth movements), e.g.
for Mk1 No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins. This repo needs fewer resources but is also missing some features: STT plugin (runs on server) TTS plugin (runs on server) WakeWord plugin (runs on server) Continuous Listening Hybrid Listening Recording Mode Sleep Mode Multiple WakeWords Audio Transformers plugins Dialog Transformers plugins (TODO - support in the future)","title":"Mic Satellite"},{"location":"07_micsat/#hivemind-microphone-satellite","text":"OpenVoiceOS Microphone Satellite, connect to HiveMind Listener A super lightweight version of voice-satellite ; only the Microphone and VAD plugins run on the mic-satellite, voice activity is streamed to hivemind-listener and all the processing happens there","title":"HiveMind Microphone Satellite"},{"location":"07_micsat/#server-requirements","text":"\u26a0\ufe0f hivemind-listener is required server side; the default hivemind-core does not provide STT and TTS capabilities.","title":"Server requirements"},{"location":"07_micsat/#install","text":"Install with pip $ pip install hivemind-mic-satellite","title":"Install"},{"location":"07_micsat/#configuration","text":"The mic satellite is built on top of ovos-plugin-manager ; it uses the same OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugins: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD PHAL Platform/Hardware Abstraction Layer No PHAL G2P Generate visemes (mouth movements), e.g.
for Mk1 No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins. This repo needs fewer resources but is also missing some features: STT plugin (runs on server) TTS plugin (runs on server) WakeWord plugin (runs on server) Continuous Listening Hybrid Listening Recording Mode Sleep Mode Multiple WakeWords Audio Transformers plugins Dialog Transformers plugins (TODO - support in the future)","title":"Configuration"},{"location":"07_voice_relay/","text":"HiveMind Voice Relay OpenVoiceOS Relay, connect to HiveMind A lightweight version of voice-satellite , but STT and TTS are sent to HiveMind instead of handled on device Server requirements \u26a0\ufe0f hivemind-listener is required server side; the default hivemind-core does not provide STT and TTS capabilities. Alternatively run hivemind-core together with ovos-audio and ovos-dinkum-listener The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins. The voice relay is built on top of ovos-simple-listener ; while it needs fewer resources, it is also missing some features: STT plugin TTS plugin Audio Transformers plugins Continuous Listening Hybrid Listening Recording Mode Sleep Mode Multiple WakeWords If you need an even lighter implementation, consider hivemind-mic-satellite to also offload wake word to the server Install Install with pip $ pip install HiveMind-voice-relay Usage Usage: hivemind-voice-relay [OPTIONS] connect to HiveMind Options: --host TEXT hivemind host --key TEXT Access Key --password TEXT Password for key derivation --port INTEGER HiveMind port number --selfsigned accept self signed certificates --help Show this message and exit. 
Configuration Voice relay is built on top of ovos-simple-listener and ovos-audio , it uses the default OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugin types: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD WakeWord Detects wake words for interaction Yes WakeWord G2P grapheme-to-phoneme (G2P), used to simulate mouth movements No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins PHAL Provides platform-specific support (e.g., Mark 1) No PHAL","title":"Voice Relay"},{"location":"07_voice_relay/#hivemind-voice-relay","text":"OpenVoiceOS Relay, connect to HiveMind A lightweight version of voice-satellite , but STT and TTS are sent to HiveMind instead of handled on device","title":"HiveMind Voice Relay"},{"location":"07_voice_relay/#server-requirements","text":"\u26a0\ufe0f hivemind-listener is required server side, the default hivemind-core does not provide STT and TTS capabilities. 
Alternatively run hivemind-core together with ovos-audio and ovos-dinkum-listener The regular voice satellite is built on top of ovos-dinkum-listener and is full-featured, supporting all plugins. The voice relay is built on top of ovos-simple-listener ; while it needs fewer resources, it is also missing some features: STT plugin TTS plugin Audio Transformers plugins Continuous Listening Hybrid Listening Recording Mode Sleep Mode Multiple WakeWords If you need an even lighter implementation, consider hivemind-mic-satellite to also offload wake word to the server","title":"Server requirements"},{"location":"07_voice_relay/#install","text":"Install with pip $ pip install HiveMind-voice-relay","title":"Install"},{"location":"07_voice_relay/#usage","text":"Usage: hivemind-voice-relay [OPTIONS] connect to HiveMind Options: --host TEXT hivemind host --key TEXT Access Key --password TEXT Password for key derivation --port INTEGER HiveMind port number --selfsigned accept self signed certificates --help Show this message and exit.","title":"Usage"},{"location":"07_voice_relay/#configuration","text":"Voice relay is built on top of ovos-simple-listener and ovos-audio ; it uses the default OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugin types: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD WakeWord Detects wake words for interaction Yes WakeWord G2P grapheme-to-phoneme (G2P), used to simulate mouth movements No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins PHAL Provides platform-specific support (e.g., Mark 1) No PHAL","title":"Configuration"},{"location":"07_voicesat/","text":"HiveMind Voice Satellite OpenVoiceOS Satellite, connect to HiveMind Built on top of ovos-dinkum-listener , ovos-audio and PHAL Install Install dependencies (if needed) sudo
apt-get install -y libpulse-dev libasound2-dev Install with pip $ pip install HiveMind-voice-sat Usage Usage: hivemind-voice-sat [OPTIONS] connect to HiveMind Options: --host TEXT hivemind host --key TEXT Access Key --password TEXT Password for key derivation --port INTEGER HiveMind port number --selfsigned accept self signed certificates --help Show this message and exit. Configuration Voice satellite is built on top of ovos-listener and ovos-audio , it uses the same OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugins: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD WakeWord Detects wake words for interaction Yes* WakeWord STT speech-to-text (STT) Yes STT TTS text-to-speech (TTS) Yes TTS G2P grapheme-to-phoneme (G2P), used to simulate mouth movements No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins Audio Transformers Processes audio before speech-to-text (STT) No Audio Transformers Dialog Transformers Processes text before text-to-speech (TTS) No Dialog Transformers TTS Transformers Processes audio after text-to-speech (TTS) No TTS Transformers PHAL Provides platform-specific support (e.g., Mark 1) No PHAL can be skipped with continuous listening mode You can optimize your voice satellite for a variety of platforms by selecting different plugin combinations","title":"Voice Satellite"},{"location":"07_voicesat/#hivemind-voice-satellite","text":"OpenVoiceOS Satellite, connect to HiveMind Built on top of ovos-dinkum-listener , ovos-audio and PHAL","title":"HiveMind Voice Satellite"},{"location":"07_voicesat/#install","text":"Install dependencies (if needed) sudo apt-get install -y libpulse-dev libasound2-dev Install with pip $ pip install HiveMind-voice-sat","title":"Install"},{"location":"07_voicesat/#usage","text":"Usage: 
hivemind-voice-sat [OPTIONS] connect to HiveMind Options: --host TEXT hivemind host --key TEXT Access Key --password TEXT Password for key derivation --port INTEGER HiveMind port number --selfsigned accept self signed certificates --help Show this message and exit.","title":"Usage"},{"location":"07_voicesat/#configuration","text":"Voice satellite is built on top of ovos-listener and ovos-audio ; it uses the same OpenVoiceOS configuration ~/.config/mycroft/mycroft.conf Supported plugins: Plugin Type Description Required Link Microphone Captures voice input Yes Microphone VAD Voice Activity Detection Yes VAD WakeWord Detects wake words for interaction Yes* WakeWord STT speech-to-text (STT) Yes STT TTS text-to-speech (TTS) Yes TTS G2P grapheme-to-phoneme (G2P), used to simulate mouth movements No G2P Media Playback Plugins Enables media playback (e.g., \"play Metallica\") No Media Playback Plugins OCP Plugins Provides playback support for URLs (e.g., YouTube) No OCP Plugins Audio Transformers Processes audio before speech-to-text (STT) No Audio Transformers Dialog Transformers Processes text before text-to-speech (TTS) No Dialog Transformers TTS Transformers Processes audio after text-to-speech (TTS) No TTS Transformers PHAL Provides platform-specific support (e.g., Mark 1) No PHAL can be skipped with continuous listening mode You can optimize your voice satellite for a variety of platforms by selecting different plugin combinations","title":"Configuration"},{"location":"08_persona/","text":"Persona This is a HiveMind master node, but it runs ovos-persona instead of connecting to ovos-core. You can use this to expose chatbots and LLMs via HiveMind; satellites made for hivemind-core should be compatible \u26a0\ufe0f Satellites made specifically for hivemind-listener (Sound server) will not work with hivemind-persona !
Install pip install hivemind-persona ChatGPT Install the OpenAI solver create a chatgpt.json { \"name\": \"ChatGPT\", \"solvers\": [ \"ovos-solver-openai-persona-plugin\" ], \"ovos-solver-openai-persona-plugin\": { \"api_url\": \"\", \"key\": \"\", \"persona\": \"helpful, creative, clever, and very friendly.\" } } launch hivemind-persona with the created file hivemind-persona --persona chatgpt.json","title":"Persona Server"},{"location":"08_persona/#persona","text":"This is a HiveMind master node, but it runs ovos-persona instead of connecting to ovos-core. You can use this to expose chatbots and LLMs via HiveMind; satellites made for hivemind-core should be compatible \u26a0\ufe0f Satellites made specifically for hivemind-listener (Sound server) will not work with hivemind-persona !","title":"Persona"},{"location":"08_persona/#install","text":"pip install hivemind-persona","title":"Install"},{"location":"08_persona/#chatgpt","text":"Install the OpenAI solver create a chatgpt.json { \"name\": \"ChatGPT\", \"solvers\": [ \"ovos-solver-openai-persona-plugin\" ], \"ovos-solver-openai-persona-plugin\": { \"api_url\": \"\", \"key\": \"\", \"persona\": \"helpful, creative, clever, and very friendly.\" } } launch hivemind-persona with the created file hivemind-persona --persona chatgpt.json","title":"ChatGPT"},{"location":"09_matrix/","text":"HiveMind - Matrix bridge What is it? Matrix is a chat protocol; it works a little like email, but instantaneous and secure: You need to register an account at a provider Whatever your provider is, you can talk to people using other providers In the same way you can use Outlook or Thunderbird with the same email account, you can use different Matrix apps for the same Matrix account. Install Install from Github Usage Usage: HiveMind-matrix run [OPTIONS] connect a matrix chatroom to hivemind Options: --botname TEXT thehivebot --matrixtoken TEXT syt_dGhl..... 
--matrixhost TEXT https://matrix.org --room TEXT #hivemind-bots:matrix.org --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --help Show this message and exit.","title":"Matrix"},{"location":"09_matrix/#hivemind-matrix-bridge","text":"What is it? Matrix is a chat protocol, it works a little like email, but instantaneous and secure: You need to register an account at a provider Whatever your provider is, you can talk to people using other providers In the same way you can use Outlook or Thunderbird with the same email account, you can use different Matrix apps for the same Matrix account.","title":"HiveMind - Matrix bridge"},{"location":"09_matrix/#install","text":"Install from Github","title":"Install"},{"location":"09_matrix/#usage","text":"Usage: HiveMind-matrix run [OPTIONS] connect a matrix chatroom to hivemind Options: --botname TEXT thehivebot --matrixtoken TEXT syt_dGhl..... --matrixhost TEXT https://matrix.org --room TEXT #hivemind-bots:matrix.org --key TEXT HiveMind access key (default read from identity file) --password TEXT HiveMind password (default read from identity file) --host TEXT HiveMind host (default read from identity file) --port INTEGER HiveMind port number (default: 5678) --help Show this message and exit.","title":"Usage"},{"location":"10_deltachat/","text":"DeltaChat Bridge Delta Chat is a messaging app that works over e-mail End-to-End Encryption using Autocrypt and CounterMITM protocols, with multiple security audits. 
Install $ pip install HiveMind-deltachat-bridge Usage $ hm-deltachat-bridge --help usage: __main__.py [-h] --access_key ACCESS_KEY --email EMAIL --password PASSWORD [--crypto_key CRYPTO_KEY] [--name NAME] [--host HOST] [--port PORT] optional arguments: -h, --help show this help message and exit --access_key ACCESS_KEY hivemind access key --email EMAIL deltachat email --password PASSWORD deltachat password --crypto_key CRYPTO_KEY payload encryption key --name NAME human readable device name --host HOST HiveMind host --port PORT HiveMind port number","title":"DeltaChat"},{"location":"10_deltachat/#deltachat-bridge","text":"Delta Chat is a messaging app that works over e-mail End-to-End Encryption using Autocrypt and CounterMITM protocols, with multiple security audits.","title":"DeltaChat Bridge"},{"location":"10_deltachat/#install","text":"$ pip install HiveMind-deltachat-bridge","title":"Install"},{"location":"10_deltachat/#usage","text":"$ hm-deltachat-bridge --help usage: __main__.py [-h] --access_key ACCESS_KEY --email EMAIL --password PASSWORD [--crypto_key CRYPTO_KEY] [--name NAME] [--host HOST] [--port PORT] optional arguments: -h, --help show this help message and exit --access_key ACCESS_KEY hivemind access key --email EMAIL deltachat email --password PASSWORD deltachat password --crypto_key CRYPTO_KEY payload encryption key --name NAME human readable device name --host HOST HiveMind host --port PORT HiveMind port number","title":"Usage"},{"location":"11_devs/","text":"HiveMind makes heavy use of OVOS technology as imported libraries, additional client libraries are provided that implement the HiveMind protocol Libraries If you are implementing a client from scratch, these libraries might be of interest HiveMind-websocket-client (python) HiveMindJs (javascript) ovos-solver-hivemind-plugin python client to chat with hivemind","title":"Libraries"},{"location":"11_devs/#libraries","text":"If you are implementing a client from scratch, these libraries might be 
of interest HiveMind-websocket-client (python) HiveMindJs (javascript) ovos-solver-hivemind-plugin python client to chat with hivemind","title":"Libraries"},{"location":"12_handshake/","text":"Handshake Protocol This document provides an overview of the handshake protocol used in the HiveMind system, detailing how handshakes are initiated and processed from both the client (slave) and server (master) perspectives. The handshake process establishes a secure connection between a HiveMind master and its slaves. It ensures authentication, optionally using passwords or public/private key pairs, and sets up cryptographic keys for secure communication. For detailed code and various usage examples, you can refer to the Poorman Handshake GitHub Repository . Handshake Types Password-Based Handshake : Utilizes a shared password for authentication. Requires both client and server to know the password beforehand. Public Key Handshake : Based on public/private key pairs. The server provides a public key to the client, and the client verifies the server's authenticity. Supports implicit trust for first-time connections (when no public key is available). Uses asymmetric encryption (RSA, for example) to ensure that communication is secure and cannot be intercepted or modified. Encrypts the symmetric session key to allow further communication using the shared key. Workflow: Server Perspective HELLO Message : The server sends a HELLO message to the client containing: Public key ( pubkey ) for key-based handshake. Node ID ( node_id ) for identification. Optional session_id for session-based communication. HANDSHAKE Request : The server initiates the handshake by sending a HANDSHAKE message: Specifies whether to use password-based or public-key-based authentication. Includes optional fields like: crypto_key : A flag indicating whether a pre-shared cryptographic key is available for use in the handshake (but not the key itself). binarize : Flag for binary protocol support. 
password : Indicator for password-based handshake. Validate Client's Response : If the client provides an envelope: Validate the client's response using the shared password or public key. Update the cryptographic key for secure communication. If the crypto_key flag is set or the client doesn't answer the handshake, use the pre-shared cryptographic key directly, skipping the handshake step. Workflow: Client Perspective Receive HELLO Message : Extract the server's public key and node ID from the HELLO message. Store the session ID if provided. Start Handshake : Determine the handshake type based on the server's HANDSHAKE request: Password-based handshake: Generate an envelope using the shared password. Public-key-based handshake: Verify the server's public key (if available). Generate and send an envelope for authentication. Handle Validation : If the server sends an envelope for validation: Verify the server's authenticity using the shared password or public key. Update the cryptographic key for secure communication. Handshake Message Structure HELLO Message From Server : { \"type\": \"HELLO\", \"payload\": { \"pubkey\": \"\", \"node_id\": \"\", \"session_id\": \"\" } } HANDSHAKE Message From Server : { \"type\": \"HANDSHAKE\", \"payload\": { \"password\": \"\", \"crypto_key\": \"\", \"binarize\": \"\", \"envelope\": \"\" } } From Client : { \"type\": \"HANDSHAKE\", \"payload\": { \"pubkey\": \"\", \"envelope\": \"\", \"binarize\": \"\", \"session\": \"\", \"site_id\": \"\" } } Key Functions and Responsibilities Server Start Handshake : Ensure the client is authorized to join the HiveMind network. Broadcast Key : Send the server's public key for public-key-based handshakes. Verify Envelope : Authenticate the client using the received envelope and establish the shared cryptographic key. Client Generate Envelope : Create an envelope for authentication based on the handshake type. Verify Server : Use the public key to verify the server's authenticity. 
Update Session : Store the server-provided session ID and synchronize it with local sessions. Secure Communication After Handshake Upon successful handshake: A shared cryptographic key is established between the server and the client. All further communication between the server and client is encrypted using this symmetric key (e.g., AES-256). The session ID ensures continuity and identification in multi-session environments. This guarantees that all data exchanged between the server and the client is protected, even if intercepted by a third party. Error Handling Illegal Messages : Messages not adhering to the protocol are logged, and the connection may be terminated. Handshake Failures : Authentication failures result in handshake termination and rejection of the connection. Example Scenarios First-Time Connection (Implicit Trust) Server sends HELLO with its public key. Client trusts the server and starts the handshake. A shared cryptographic key is established for encrypted communication. Reconnection with Password Server requests a password-based handshake. Client generates an envelope using the shared password. Server validates the envelope and establishes a secure session. For detailed code and various usage examples, please refer to the Poorman Handshake GitHub Repository .","title":"Handshake"},{"location":"12_handshake/#handshake-protocol","text":"This document provides an overview of the handshake protocol used in the HiveMind system, detailing how handshakes are initiated and processed from both the client (slave) and server (master) perspectives. The handshake process establishes a secure connection between a HiveMind master and its slaves. It ensures authentication, optionally using passwords or public/private key pairs, and sets up cryptographic keys for secure communication. 
For detailed code and various usage examples, you can refer to the Poorman Handshake GitHub Repository .","title":"Handshake Protocol"},{"location":"12_handshake/#handshake-types","text":"Password-Based Handshake : Utilizes a shared password for authentication. Requires both client and server to know the password beforehand. Public Key Handshake : Based on public/private key pairs. The server provides a public key to the client, and the client verifies the server's authenticity. Supports implicit trust for first-time connections (when no public key is available). Uses asymmetric encryption (RSA, for example) to ensure that communication is secure and cannot be intercepted or modified. Encrypts the symmetric session key to allow further communication using the shared key.","title":"Handshake Types"},{"location":"12_handshake/#workflow-server-perspective","text":"HELLO Message : The server sends a HELLO message to the client containing: Public key ( pubkey ) for key-based handshake. Node ID ( node_id ) for identification. Optional session_id for session-based communication. HANDSHAKE Request : The server initiates the handshake by sending a HANDSHAKE message: Specifies whether to use password-based or public-key-based authentication. Includes optional fields like: crypto_key : A flag indicating whether a pre-shared cryptographic key is available for use in the handshake (but not the key itself). binarize : Flag for binary protocol support. password : Indicator for password-based handshake. Validate Client's Response : If the client provides an envelope: Validate the client's response using the shared password or public key. Update the cryptographic key for secure communication. 
If the crypto_key flag is set or the client doesn't answer the handshake, use the pre-shared cryptographic key directly, skipping the handshake step.","title":"Workflow: Server Perspective"},{"location":"12_handshake/#workflow-client-perspective","text":"Receive HELLO Message : Extract the server's public key and node ID from the HELLO message. Store the session ID if provided. Start Handshake : Determine the handshake type based on the server's HANDSHAKE request: Password-based handshake: Generate an envelope using the shared password. Public-key-based handshake: Verify the server's public key (if available). Generate and send an envelope for authentication. Handle Validation : If the server sends an envelope for validation: Verify the server's authenticity using the shared password or public key. Update the cryptographic key for secure communication.","title":"Workflow: Client Perspective"},{"location":"12_handshake/#handshake-message-structure","text":"","title":"Handshake Message Structure"},{"location":"12_handshake/#hello-message","text":"From Server : { \"type\": \"HELLO\", \"payload\": { \"pubkey\": \"\", \"node_id\": \"\", \"session_id\": \"\" } }","title":"HELLO Message"},{"location":"12_handshake/#handshake-message","text":"From Server : { \"type\": \"HANDSHAKE\", \"payload\": { \"password\": \"\", \"crypto_key\": \"\", \"binarize\": \"\", \"envelope\": \"\" } } From Client : { \"type\": \"HANDSHAKE\", \"payload\": { \"pubkey\": \"\", \"envelope\": \"\", \"binarize\": \"\", \"session\": \"\", \"site_id\": \"\" } }","title":"HANDSHAKE Message"},{"location":"12_handshake/#key-functions-and-responsibilities","text":"","title":"Key Functions and Responsibilities"},{"location":"12_handshake/#server","text":"Start Handshake : Ensure the client is authorized to join the HiveMind network. Broadcast Key : Send the server's public key for public-key-based handshakes. 
Verify Envelope : Authenticate the client using the received envelope and establish the shared cryptographic key.","title":"Server"},{"location":"12_handshake/#client","text":"Generate Envelope : Create an envelope for authentication based on the handshake type. Verify Server : Use the public key to verify the server's authenticity. Update Session : Store the server-provided session ID and synchronize it with local sessions.","title":"Client"},{"location":"12_handshake/#secure-communication-after-handshake","text":"Upon successful handshake: A shared cryptographic key is established between the server and the client. All further communication between the server and client is encrypted using this symmetric key (e.g., AES-256). The session ID ensures continuity and identification in multi-session environments. This guarantees that all data exchanged between the server and the client is protected, even if intercepted by a third party.","title":"Secure Communication After Handshake"},{"location":"12_handshake/#error-handling","text":"Illegal Messages : Messages not adhering to the protocol are logged, and the connection may be terminated. Handshake Failures : Authentication failures result in handshake termination and rejection of the connection.","title":"Error Handling"},{"location":"12_handshake/#example-scenarios","text":"","title":"Example Scenarios"},{"location":"12_handshake/#first-time-connection-implicit-trust","text":"Server sends HELLO with its public key. Client trusts the server and starts the handshake. A shared cryptographic key is established for encrypted communication.","title":"First-Time Connection (Implicit Trust)"},{"location":"12_handshake/#reconnection-with-password","text":"Server requests a password-based handshake. Client generates an envelope using the shared password. Server validates the envelope and establishes a secure session. 
For detailed code and various usage examples, please refer to the Poorman Handshake GitHub Repository .","title":"Reconnection with Password"},{"location":"13_mycroft/","text":"OpenVoiceOS Messages The OpenVoiceOS messagebus is considered an internal and private websocket for minds , clients do not connect directly to it. A mind will inject its own context about the originating clients, only responses to the client message will be forwarded, this provides client isolation. A mind will filter incoming and outgoing messages per client, the permissions model of the hivemind is extensive e.g. it might refuse utterances based on the intent This info applies to ovos-core , Hivemind depends on this functionality but it is not part of the hivemind itself. HiveMind responsibility is only to deliver the BUS messages From the POV of the Hivemind you can replace ovos-core with anything as long as you respect the mechanisms below Message Targeting Theory Sources Destinations Skills Message An OpenVoiceOS message consists of a json payload, it contains a type , some data and a context . The context is considered to be metadata and might be changed at any time in transit, the context can contain anything depending on where the message came from, and often is completely empty. You can think of the message context as a sort of session data for an individual interaction, in general messages down the chain keep the context from the original message, most listeners (eg, skills) will only care about type and data . Targeting Theory ovos-core uses the message context to add metadata about the messages themselves, where do they come from and what are they intended for. the Message object provides the following methods: message.forward method, keeps previous context. 
message continues going to same destination message.reply method swaps \"source\" with \"destination\" message goes back to source The context destination parameter in the original message can be set to a list with any number of intended targets: bus.emit(Message('recognizer_loop:utterance', data, context={'destination': ['audio', 'kde'], 'source': \"remote_service\"})) Sources ovos-core injects the context when it emits an utterance, this can be either typed or spoken via OVOS STT service STT will identify itself as audio mycroft.conf defines a list of \"native_sources\" , by default only audio is a native source Destinations Output capable services are ovos-audio (TTS, music...) TTS checks the message context if it's the intended target for the message and will only speak in the following conditions: Explicitly targeted i.e. the destination is native_source (default: \"audio\") destination is set to None destination is missing completely The idea is that for example when the android app is used to access OpenVoiceOS the device at home shouldn't start to speak. 
TTS will be executed when a native_source (eg, audio ) is the destination A missing destination or if the destination is set to None is interpreted as a multicast and should trigger all output capable processes (be it the ovos-audio process, a web-interface, the KDE plasmoid or maybe the android app) OVOS-Core ovos-core is responsible for managing the routing context, skills do not usually need to worry about any of this intent service will .reply to the original utterance message all skill/intent service messages are .forward (from previous intent service .reply ) Skills OpenVoiceOS skills can do anything, if you are developing/installing a mission critical skill carefully evaluate what it does and evaluate if it is hivemind friendly If a skill emits its own bus messages it needs to keep message.context around Common issues : skills sending their own messages might not keep message.context or wrongly .reply to it in the context of the Hivemind skills might not be Session aware and keep a shared state between clients, eg. a client may enable a voice game for everyone","title":"OVOS Messages"},{"location":"13_mycroft/#openvoiceos-messages","text":"The OpenVoiceOS messagebus is considered an internal and private websocket for minds , clients do not connect directly to it. A mind will inject its own context about the originating clients, only responses to the client message will be forwarded, this provides client isolation. A mind will filter incoming and outgoing messages per client, the permissions model of the hivemind is extensive e.g. it might refuse utterances based on the intent This info applies to ovos-core , Hivemind depends on this functionality but it is not part of the hivemind itself. 
HiveMind responsibility is only to deliver the BUS messages From the POV of the Hivemind you can replace ovos-core with anything as long as you respect the mechanisms below Message Targeting Theory Sources Destinations Skills","title":"OpenVoiceOS Messages"},{"location":"13_mycroft/#message","text":"An OpenVoiceOS message consists of a json payload, it contains a type , some data and a context . The context is considered to be metadata and might be changed at any time in transit, the context can contain anything depending on where the message came from, and often is completely empty. You can think of the message context as a sort of session data for an individual interaction, in general messages down the chain keep the context from the original message, most listeners (eg, skills) will only care about type and data .","title":"Message"},{"location":"13_mycroft/#targeting-theory","text":"ovos-core uses the message context to add metadata about the messages themselves, where do they come from and what are they intended for. the Message object provides the following methods: message.forward method, keeps previous context. message continues going to same destination message.reply method swaps \"source\" with \"destination\" message goes back to source The context destination parameter in the original message can be set to a list with any number of intended targets: bus.emit(Message('recognizer_loop:utterance', data, context={'destination': ['audio', 'kde'], 'source': \"remote_service\"}))","title":"Targeting Theory"},{"location":"13_mycroft/#sources","text":"ovos-core injects the context when it emits an utterance, this can be either typed or spoken via OVOS STT service STT will identify itself as audio mycroft.conf defines a list of \"native_sources\" , by default only audio is a native source","title":"Sources"},{"location":"13_mycroft/#destinations","text":"Output capable services are ovos-audio (TTS, music...) 
TTS checks the message context if it's the intended target for the message and will only speak in the following conditions: Explicitly targeted i.e. the destination is native_source (default: \"audio\") destination is set to None destination is missing completely The idea is that for example when the android app is used to access OpenVoiceOS the device at home shouldn't start to speak. TTS will be executed when a native_source (eg, audio ) is the destination A missing destination or if the destination is set to None is interpreted as a multicast and should trigger all output capable processes (be it the ovos-audio process, a web-interface, the KDE plasmoid or maybe the android app)","title":"Destinations"},{"location":"13_mycroft/#ovos-core","text":"ovos-core is responsible for managing the routing context, skills do not usually need to worry about any of this intent service will .reply to the original utterance message all skill/intent service messages are .forward (from previous intent service .reply )","title":"OVOS-Core"},{"location":"13_mycroft/#skills","text":"OpenVoiceOS skills can do anything, if you are developing/installing a mission critical skill carefully evaluate what it does and evaluate if it is hivemind friendly If a skill emits its own bus messages it needs to keep message.context around Common issues : skills sending their own messages might not keep message.context or wrongly .reply to it in the context of the Hivemind skills might not be Session aware and keep a shared state between clients, eg. 
a client may enable a voice game for everyone","title":"Skills"},{"location":"14_localhive/","text":"LocalHive \u26a0\ufe0f Proof of Concept : not actively maintained The LocalHive is a hardened OpenVoiceOS skills service, the messagebus is replaced with a hivemind connection *Coming Soon - Github \"security as a requirement, not a feature\" the LocalHive is HTTP only the LocalHive uses no crypto the LocalHive does not require accessKey, instead it only accepts connections coming from 0.0.0.0 the LocalHive rejects all connections not coming from 0.0.0.0 the LocalHive runs on port 6989 skills can not listen to each other's traffic skills can only inject whitelisted messages to LocalHive (by default intents + converse + speak) by default skills only register and trigger intents, nothing else each skill can run in its own .venv with its own requirements TODO - skills should be able to request to listen for specific messages, cross skill communication is currently impossible Permissions skills need to be explicitly authorized to send each message_type UTTERANCES = [\"recognizer_loop:utterance\"] INTENTS = [ \"mycroft.skill.handler.start\", \"mycroft.skill.handler.complete\", \"intent.service.intent.get\", \"intent.service.skills.get\", \"intent.service.active_skills.get\", \"intent.service.adapt.get\", \"intent.service.padatious.get\", \"intent.service.adapt.manifest.get\", \"intent.service.padatious.manifest.get\", \"intent.service.adapt.vocab.manifest.get\", \"intent.service.padatious.entities.manifest.get\", \"register_vocab\", \"register_intent\", \"detach_intent\", \"detach_skill\", \"add_context\", \"remove_context\", \"clear_context\", 'padatious:register_intent', 'padatious:register_entity', \"mycroft.skill.set_cross_context\", \"mycroft.skill.remove_cross_context\" ] CONVERSE = [ \"skill.converse.response\", \"skill.converse.pong\", \"active_skill_request\", \"intent.service.skills.activated\", \"intent.service.skills.deactivated\", ] SPEAK = [\"speak\"] 
STOP = [\"mycroft.stop\"] DEFAULT = INTENTS + \\ CONVERSE + \\ SPEAK + \\ [\"mycroft.skills.loaded\"] Per Skill Permissions you can allow new messages per skill_id by editing the json file at ~/.config/LocalHive/skill_permissions.json { \"ovos-stop.openvoiceos\": [\"mycroft.stop\"] } Usage At this point of development you need to create python scripts to directly interface with the existing code Running LocalHive from local_hive.service import LocalHiveService from ovos_utils import wait_for_exit_signal if __name__ == \"__main__\": localmind = LocalHiveService() localmind.start() wait_for_exit_signal() Connecting a skill from local_hive.loader import HiveMindExternalSkillWrapper from ovos_utils import wait_for_exit_signal from os.path import join, dirname path = join(dirname(__file__), \"test_skills\", \"mycroft-joke.mycroftai\") skill = HiveMindExternalSkillWrapper(path) wait_for_exit_signal() \"\"\" 2021-05-14 03:23:50.059 | INFO | 44310 | HiveMind-websocket-client | Connected 2021-05-14 03:23:50.111 | INFO | 44310 | mycroft.skills.settings:get_local_settings:83 | /home/user/.config/mycroft/skills/mycroft-joke.mycroftai/settings.json \"\"\"","title":"LocalHive"},{"location":"14_localhive/#localhive","text":"\u26a0\ufe0f Proof of Concept : not actively maintained The LocalHive is a hardened OpenVoiceOS skills service, the messagebus is replaced with a hivemind connection *Coming Soon - Github \"security as a requirement, not a feature\" the LocalHive is HTTP only the LocalHive uses no crypto the LocalHive does not require accessKey, instead it only accepts connections coming from 0.0.0.0 the LocalHive rejects all connections not coming from 0.0.0.0 the LocalHive runs on port 6989 skills can not listen to each other's traffic skills can only inject whitelisted messages to LocalHive (by default intents + converse + speak) by default skills only register and trigger intents, nothing else each skill can run in its own .venv with its own requirements TODO - skills 
should be able to request to listen for specific messages, cross skill communication is currently impossible","title":"LocalHive"},{"location":"14_localhive/#permissions","text":"skills need to be explicitly authorized to send each message_type UTTERANCES = [\"recognizer_loop:utterance\"] INTENTS = [ \"mycroft.skill.handler.start\", \"mycroft.skill.handler.complete\", \"intent.service.intent.get\", \"intent.service.skills.get\", \"intent.service.active_skills.get\", \"intent.service.adapt.get\", \"intent.service.padatious.get\", \"intent.service.adapt.manifest.get\", \"intent.service.padatious.manifest.get\", \"intent.service.adapt.vocab.manifest.get\", \"intent.service.padatious.entities.manifest.get\", \"register_vocab\", \"register_intent\", \"detach_intent\", \"detach_skill\", \"add_context\", \"remove_context\", \"clear_context\", 'padatious:register_intent', 'padatious:register_entity', \"mycroft.skill.set_cross_context\", \"mycroft.skill.remove_cross_context\" ] CONVERSE = [ \"skill.converse.response\", \"skill.converse.pong\", \"active_skill_request\", \"intent.service.skills.activated\", \"intent.service.skills.deactivated\", ] SPEAK = [\"speak\"] STOP = [\"mycroft.stop\"] DEFAULT = INTENTS + \\ CONVERSE + \\ SPEAK + \\ [\"mycroft.skills.loaded\"]","title":"Permissions"},{"location":"14_localhive/#per-skill-permissions","text":"you can allow new messages per skill_id by editing the json file at ~/.config/LocalHive/skill_permissions.json { \"ovos-stop.openvoiceos\": [\"mycroft.stop\"] }","title":"Per Skill Permissions"},{"location":"14_localhive/#usage","text":"At this point of development you need to create python scripts to directly interface with the existing code","title":"Usage"},{"location":"14_localhive/#running-localhive","text":"from local_hive.service import LocalHiveService from ovos_utils import wait_for_exit_signal if __name__ == \"__main__\": localmind = LocalHiveService() localmind.start() wait_for_exit_signal()","title":"Running 
LocalHive"},{"location":"14_localhive/#connecting-a-skill","text":"from local_hive.loader import HiveMindExternalSkillWrapper from ovos_utils import wait_for_exit_signal from os.path import join, dirname path = join(dirname(__file__), \"test_skills\", \"mycroft-joke.mycroftai\") skill = HiveMindExternalSkillWrapper(path) wait_for_exit_signal() \"\"\" 2021-05-14 03:23:50.059 | INFO | 44310 | HiveMind-websocket-client | Connected 2021-05-14 03:23:50.111 | INFO | 44310 | mycroft.skills.settings:get_local_settings:83 | /home/user/.config/mycroft/skills/mycroft-joke.mycroftai/settings.json \"\"\"","title":"Connecting a skill"},{"location":"15_nested/","text":"Nested Hives Now that you have a basic hive setup, you can add more Minds to it and connect them to each other. To better understand how minds can interact with each other, read the protocol Nested Hiveminds in Action To illustrate the power of nested Hiveminds, let's consider a scenario where two housemates, let's call them Mom and Dad, each have their own AI assistant running on OpenVoiceOS, named John and Jane, respectively. While sharing a house and most of their IoT devices, Mom and Dad want to ensure that their AI assistants can control the smart home individually without interfering with each other's commands. To achieve this, they create a Hive for their house, naming it George, with at least one instance of OpenVoiceOS acting as the brain. Mom and Dad then connect their AI assistants, John and Jane, as clients to the George Hive. This setup allows John and Jane to communicate with George individually but not directly with each other. Instead, their messages pass through George, which acts as an intermediary, ensuring proper communication flow. John is connected to Dad's phone and calendar, it knows Dad's favorite songs. This ensures George is not bothered with personal data, and that Dad has a personalized experience. 
The same holds true for Jane and Mom, Alarms and Music Playlists don't get mixed together It is important to note that as soon as a Hive is decoupled, such as when Mom and Dad split their Hives, they become their own independent Masters again. Now, when Dad instructs his AI assistant to adjust the lights, the message goes through George. Similarly, when Mom asks her AI assistant to set the temperature, the command is routed through George. George becomes the central point of control for the shared devices, enabling independent control for John and Jane. Moreover, if guests visit their house, Mom and Dad can grant them access to George directly, for example by using the voice satellites around the house, or they can create a guest Hive under George temporarily. This flexibility allows for easy integration and disconnection of Hives as required. Nested Hiveminds offer a dynamic and adaptable environment for managing AI systems and devices. This hierarchical organization, with clusters nested within a Master Hive, provides a scalable and efficient framework Permissions Let's further explore the practical applications of nested Hiveminds by introducing another scenario. Imagine that Mom and Dad have a guest staying with them, and this guest, Bob, also has his own AI assistant. To provide Bob with access to the shared smart home functionalities, they allow Bob's AI assistant to connect to the George Hive as a client. However, Mom and Dad want to ensure that Bob's AI assistant has limited permissions within their ecosystem. They configure hivemind-core, acting as a firewall, to restrict Bob's assistant from placing orders or accessing sensitive information from Mom and Dad. This fine-grained control ensures that the guest AI operates within defined boundaries, maintaining privacy and security for all parties involved. Furthermore, consider a scenario where Mom and Dad have children. 
They can create a separate nested assistant for their kids, granting them access to specific functionalities suitable for their age and requirements. This nested assistant for the kids would have limited permissions and tailored interactions, providing a safe and engaging AI experience while keeping their privacy intact. Nested Hiveminds offer a versatile framework for managing multiple AI assistants and customizing their capabilities based on individual needs and preferences. By configuring access permissions and setting up appropriate firewalls, users can create an ecosystem that ensures privacy, security, and personalized experiences for each participant.","title":"Nested Hives"},{"location":"15_nested/#nested-hives","text":"Now that you have a basic hive setup, you can add more Minds to it and connect them to each other. To better understand how minds can interact with each other, read the protocol","title":"Nested Hives"},{"location":"15_nested/#nested-hiveminds-in-action","text":"To illustrate the power of nested Hiveminds, let's consider a scenario where two housemates, let's call them Mom and Dad, each have their own AI assistant running on OpenVoiceOS, named John and Jane, respectively. While sharing a house and most of their IoT devices, Mom and Dad want to ensure that their AI assistants can control the smart home individually without interfering with each other's commands. To achieve this, they create a Hive for their house, naming it George, with at least one instance of OpenVoiceOS acting as the brain. Mom and Dad then connect their AI assistants, John and Jane, as clients to the George Hive. This setup allows John and Jane to communicate with George individually but not directly with each other. Instead, their messages pass through George, which acts as an intermediary, ensuring proper communication flow. John is connected to Dad's phone and calendar, and it knows Dad's favorite songs. 
This ensures George is not bothered with personal data, and that Dad has a personalized experience. The same holds true for Jane and Mom: alarms and music playlists don't get mixed together. It is important to note that as soon as a Hive is decoupled, such as when Mom and Dad split their Hives, they become their own independent Masters again. Now, when Dad instructs his AI assistant to adjust the lights, the message goes through George. Similarly, when Mom asks her AI assistant to set the temperature, the command is routed through George. George becomes the central point of control for the shared devices, enabling independent control for John and Jane. Moreover, if guests visit their house, Mom and Dad can grant them access to George directly, for example by using the voice satellites around the house, or they can create a guest Hive under George temporarily. This flexibility allows for easy integration and disconnection of Hives as required. Nested Hiveminds offer a dynamic and adaptable environment for managing AI systems and devices. This hierarchical organization, with clusters nested within a Master Hive, provides a scalable and efficient framework","title":"Nested Hiveminds in Action"},{"location":"15_nested/#permissions","text":"Let's further explore the practical applications of nested Hiveminds by introducing another scenario. Imagine that Mom and Dad have a guest staying with them, and this guest, Bob, also has his own AI assistant. To provide Bob with access to the shared smart home functionalities, they allow Bob's AI assistant to connect to the George Hive as a client. However, Mom and Dad want to ensure that Bob's AI assistant has limited permissions within their ecosystem. They configure hivemind-core, acting as a firewall, to restrict Bob's assistant from placing orders or accessing sensitive information from Mom and Dad. 
This fine-grained control ensures that the guest AI operates within defined boundaries, maintaining privacy and security for all parties involved. Furthermore, consider a scenario where Mom and Dad have children. They can create a separate nested assistant for their kids, granting them access to specific functionalities suitable for their age and requirements. This nested assistant for the kids would have limited permissions and tailored interactions, providing a safe and engaging AI experience while keeping their privacy intact. Nested Hiveminds offer a versatile framework for managing multiple AI assistants and customizing their capabilities based on individual needs and preferences. By configuring access permissions and setting up appropriate firewalls, users can create an ecosystem that ensures privacy, security, and personalized experiences for each participant.","title":"Permissions"},{"location":"16_permissions/","text":"HiveMind Permission System HiveMind's permission system provides fine-grained control over access to resources, such as bus messages, skills, and intents, on a per-client basis. Unlike traditional Role-Based Access Control (RBAC), HiveMind emphasizes client-specific configurations rather than predefined roles, allowing for dynamic and flexible access management. Key Concepts Client-Specific Permissions : Permissions in HiveMind are assigned to individual clients , such as users, devices, or applications. This means that each client can have a unique set of permissions based on its specific needs or restrictions. Permissions control access to bus messages, skills, and intents, enabling dynamic configuration that is more granular and flexible compared to traditional RBAC systems. No Predefined Roles : HiveMind does not rely on predefined roles like \u201cadmin\u201d or \u201cuser.\u201d Instead, each client is configured independently with a tailored set of permissions. 
For instance, a \u201cbasic client\u201d might have access to general voice commands, while a \u201crestricted client\u201d could have specific skills or intents blocked. Fine-Grained Access Control : Permissions are not just binary (e.g., \u201callowed\u201d or \u201cdenied\u201d). They can be configured at a fine-grained level , allowing administrators to control access to specific resources, such as individual bus messages, skills, and intents. This allows for maximum flexibility in defining which clients have access to what, down to the level of individual interactions. Emergent Roles : While there are no formal roles in HiveMind, roles can emerge through client-specific configurations. For example, a client with broad access might function like an \"admin,\" while another client with limited access could serve as a \"guest.\" These roles are not predefined but are dynamically created based on the client\u2019s permission settings. Comparison to Traditional RBAC Feature Traditional RBAC HiveMind Permission System Role Definition Predefined roles (e.g., admin, user, guest) No predefined roles; permissions are assigned per client Permissions Roles are granted permissions to access resources Permissions are configured on a per-client basis Granularity Roles typically have broad access to resources Permissions are fine-grained, allowing access control over individual resources (messages, skills, intents) Flexibility Less flexible, roles are static Highly flexible, permissions can be dynamically adjusted per client Emergent Roles Predefined roles based on job function or hierarchy Roles emerge based on client-specific configuration How It Works Client Configuration : Each client in the HiveMind ecosystem has a custom configuration that determines the actions it is allowed to perform. This configuration can be adjusted dynamically as needed. 
Dynamic Permission Assignment : Permissions are assigned on a per-client basis , providing administrators with the ability to specify which bus messages, skills, and intents each client can access or perform. Examples : A trusted client might be granted access to a wide range of skills and intents, including those requiring elevated privileges. A restricted client could have specific actions or skills blacklisted to ensure it operates within a tightly controlled scope. By leveraging client-specific configurations, HiveMind's permission system offers a highly customizable and secure approach to managing access across the ecosystem, allowing administrators to tailor the experience for each client based on their individual needs.","title":"Permissions"},{"location":"16_permissions/#hivemind-permission-system","text":"HiveMind's permission system provides fine-grained control over access to resources, such as bus messages, skills, and intents, on a per-client basis. Unlike traditional Role-Based Access Control (RBAC), HiveMind emphasizes client-specific configurations rather than predefined roles, allowing for dynamic and flexible access management.","title":"HiveMind Permission System"},{"location":"16_permissions/#key-concepts","text":"Client-Specific Permissions : Permissions in HiveMind are assigned to individual clients , such as users, devices, or applications. This means that each client can have a unique set of permissions based on its specific needs or restrictions. Permissions control access to bus messages, skills, and intents, enabling dynamic configuration that is more granular and flexible compared to traditional RBAC systems. No Predefined Roles : HiveMind does not rely on predefined roles like \u201cadmin\u201d or \u201cuser.\u201d Instead, each client is configured independently with a tailored set of permissions. 
For instance, a \u201cbasic client\u201d might have access to general voice commands, while a \u201crestricted client\u201d could have specific skills or intents blocked. Fine-Grained Access Control : Permissions are not just binary (e.g., \u201callowed\u201d or \u201cdenied\u201d). They can be configured at a fine-grained level , allowing administrators to control access to specific resources, such as individual bus messages, skills, and intents. This allows for maximum flexibility in defining which clients have access to what, down to the level of individual interactions. Emergent Roles : While there are no formal roles in HiveMind, roles can emerge through client-specific configurations. For example, a client with broad access might function like an \"admin,\" while another client with limited access could serve as a \"guest.\" These roles are not predefined but are dynamically created based on the client\u2019s permission settings.","title":"Key Concepts"},{"location":"16_permissions/#comparison-to-traditional-rbac","text":"Feature Traditional RBAC HiveMind Permission System Role Definition Predefined roles (e.g., admin, user, guest) No predefined roles; permissions are assigned per client Permissions Roles are granted permissions to access resources Permissions are configured on a per-client basis Granularity Roles typically have broad access to resources Permissions are fine-grained, allowing access control over individual resources (messages, skills, intents) Flexibility Less flexible, roles are static Highly flexible, permissions can be dynamically adjusted per client Emergent Roles Predefined roles based on job function or hierarchy Roles emerge based on client-specific configuration","title":"Comparison to Traditional RBAC"},{"location":"16_permissions/#how-it-works","text":"Client Configuration : Each client in the HiveMind ecosystem has a custom configuration that determines the actions it is allowed to perform. 
This configuration can be adjusted dynamically as needed. Dynamic Permission Assignment : Permissions are assigned on a per-client basis , providing administrators with the ability to specify which bus messages, skills, and intents each client can access or perform. Examples : A trusted client might be granted access to a wide range of skills and intents, including those requiring elevated privileges. A restricted client could have specific actions or skills blacklisted to ensure it operates within a tightly controlled scope. By leveraging client-specific configurations, HiveMind's permission system offers a highly customizable and secure approach to managing access across the ecosystem, allowing administrators to tailor the experience for each client based on their individual needs.","title":"How It Works"},{"location":"17_database/","text":"Database Backends hivemind-core supports multiple database backends to store client credentials and settings. Each has its own use case: Backend Use Case Default Location Command Line options JSON (default) Simple, file-based setup for local use ~/.local/share/hivemind-core/clients.json Configurable via --db-name and --db-folder SQLite Lightweight relational DB for single instances ~/.local/share/hivemind-core/clients.db Configurable via --db-name and --db-folder Redis Distributed, high-performance environments localhost:6379 Configurable via --redis-host and --redis-port \u26a0\ufe0f ensure you use the same database parameters when launching hivemind-core and registering clients! How to Choose? For scalability or multi-instance setups, use Redis. For simplicity or single-device environments, use SQLite. For development or to be able to edit the database by hand, use JSON. Security Considerations When using any of these backends, it\u2019s important to implement security practices to safeguard sensitive data. Below are some considerations: 1. 
JSON (File-Based Storage) Security Risks : As JSON files are stored locally, they can be accessed directly by anyone with access to the file system. Without encryption, the data is vulnerable to unauthorized access. Best Practices : File Permissions : Set restrictive permissions on the .json file to limit access to the user running hivemind-core . Backups : Regularly back up this file to ensure recovery in case of data loss or corruption, while also securing backups with encryption. 2. SQLite (Lightweight Relational Database) Security Risks : SQLite databases are stored in a file, making them susceptible to unauthorized access if file permissions are not properly configured. Best Practices : File Permissions : Ensure the SQLite file is owned by a specific user or group, with read and write access limited to only the user running hivemind-core . Database Backups : Always back up SQLite files securely and store backups in encrypted form. 3. Redis (Distributed High-Performance) Security Risks : Redis is commonly used in distributed setups, which can introduce risks if the Redis server is exposed to the internet or local networks without proper security measures. Best Practices : Authentication : Always configure Redis authentication by setting a strong password using the requirepass directive in the Redis configuration file. Encryption : Use TLS/SSL encryption ( --ssl flag) for data in transit. This ensures that data is encrypted between clients and Redis servers. Access Control : Limit access to Redis to trusted clients and IP addresses by configuring the bind and protected-mode settings in the Redis configuration file. Firewall : Use a firewall to restrict access to Redis from unauthorized networks, ensuring that only trusted systems can communicate with the Redis server. Backups : Redis does not encrypt its persistent storage by default, so ensure that backup files (RDB/AOF) are stored securely and encrypted if necessary. 
General Database Security Tips: Sensitive Data Storage : Ensure that sensitive data, such as database backups, is stored securely (using encryption) Regular Audits : Periodically audit your database access logs and configurations to ensure no unauthorized access has occurred. Monitoring : Implement monitoring on your database systems to detect any unusual access patterns or unauthorized attempts to connect. By following these best practices, you can ensure that your hivemind-core installation is secure and that client credentials and settings remain protected.","title":"Database"},{"location":"17_database/#database-backends","text":"hivemind-core supports multiple database backends to store client credentials and settings. Each has its own use case: Backend Use Case Default Location Command Line options JSON (default) Simple, file-based setup for local use ~/.local/share/hivemind-core/clients.json Configurable via --db-name and --db-folder SQLite Lightweight relational DB for single instances ~/.local/share/hivemind-core/clients.db Configurable via --db-name and --db-folder Redis Distributed, high-performance environments localhost:6379 Configurable via --redis-host and --redis-port \u26a0\ufe0f ensure you use the same database parameters when launching hivemind-core and registering clients! How to Choose? For scalability or multi-instance setups, use Redis. For simplicity or single-device environments, use SQLite. For development or to be able to edit the database by hand, use JSON.","title":"Database Backends"},{"location":"17_database/#security-considerations","text":"When using any of these backends, it\u2019s important to implement security practices to safeguard sensitive data. Below are some considerations:","title":"Security Considerations"},{"location":"17_database/#1-json-file-based-storage","text":"Security Risks : As JSON files are stored locally, they can be accessed directly by anyone with access to the file system. 
Without encryption, the data is vulnerable to unauthorized access. Best Practices : File Permissions : Set restrictive permissions on the .json file to limit access to the user running hivemind-core . Backups : Regularly back up this file to ensure recovery in case of data loss or corruption, while also securing backups with encryption.","title":"1. JSON (File-Based Storage)"},{"location":"17_database/#2-sqlite-lightweight-relational-database","text":"Security Risks : SQLite databases are stored in a file, making them susceptible to unauthorized access if file permissions are not properly configured. Best Practices : File Permissions : Ensure the SQLite file is owned by a specific user or group, with read and write access limited to only the user running hivemind-core . Database Backups : Always back up SQLite files securely and store backups in encrypted form.","title":"2. SQLite (Lightweight Relational Database)"},{"location":"17_database/#3-redis-distributed-high-performance","text":"Security Risks : Redis is commonly used in distributed setups, which can introduce risks if the Redis server is exposed to the internet or local networks without proper security measures. Best Practices : Authentication : Always configure Redis authentication by setting a strong password using the requirepass directive in the Redis configuration file. Encryption : Use TLS/SSL encryption ( --ssl flag) for data in transit. This ensures that data is encrypted between clients and Redis servers. Access Control : Limit access to Redis to trusted clients and IP addresses by configuring the bind and protected-mode settings in the Redis configuration file. Firewall : Use a firewall to restrict access to Redis from unauthorized networks, ensuring that only trusted systems can communicate with the Redis server. Backups : Redis does not encrypt its persistent storage by default, so ensure that backup files (RDB/AOF) are stored securely and encrypted if necessary.","title":"3. 
Redis (Distributed High-Performance)"},{"location":"17_database/#general-database-security-tips","text":"Sensitive Data Storage : Ensure that sensitive data, such as database backups, is stored securely (using encryption) Regular Audits : Periodically audit your database access logs and configurations to ensure no unauthorized access has occurred. Monitoring : Implement monitoring on your database systems to detect any unusual access patterns or unauthorized attempts to connect. By following these best practices, you can ensure that your hivemind-core installation is secure and that client credentials and settings remain protected.","title":"General Database Security Tips:"},{"location":"18_binarization/","text":"Binarization Protocol The HiveMind Binarization Protocol is designed to efficiently serialize and deserialize structured messages into compact binary formats for network transmission. This document provides a high-level description of the protocol, including its structure, encoding rules, and the rationale behind key design decisions. The binary format is protocol-versioned to support backward compatibility and future extensions. \ud83d\udca1 the binarization scheme allows the hivemind protocol to be implemented by just flashing a light Protocol Versions The protocol uses an integer version number to indicate supported features and ensure compatibility between clients and servers. The current protocol version is 1 . Any change in functionality or structure requires incrementing the version number. Version-specific functionality: Version 0 : Original protocol design. No binarization, no handshake, only pre-shared crypto_key supported Version 1 : Introduces support for handshakes and binary payloads. Message Types Messages in the HiveMind protocol are categorized into various types, each serving a specific role. These types are encoded as 5-bit unsigned integers, enabling up to 32 distinct types. 
Examples include: Value Type Description 0 HANDSHAKE Initial connection handshake. 1 BUS Standard message bus. 2 SHARED_BUS Shared bus for multiple nodes. 3 BROADCAST Global message broadcast. 4 PROPAGATE Directed message propagation. 12 BINARY Raw binary payload. Compression Payloads can optionally be compressed using the zlib library. A single bit in the header indicates whether compression is applied. Compressed payloads reduce transmission size but may add slight computational overhead during encoding and decoding. Metadata (HiveMeta) HiveMeta is a reserved field for attaching arbitrary metadata to a message. The metadata is encoded as a byte array, prefixed by its size (in bytes). This allows for extensible features like routing hints or debug information. Binary Message Structure The serialized binary message consists of a header and a payload. All fields are packed to maximize efficiency. The structure is as follows: Header The header contains information about the protocol version, message type, compression, and metadata length. Field Size (bits) Description Start Marker 1 Always 1 . Helps align message boundaries. Versioned Flag 1 Indicates if protocol version is specified. Protocol Version 8 Protocol version (if Versioned Flag is 1 ). Message Type 5 Encoded message type. Compressed Flag 1 Indicates if payload is compressed. Metadata Length 8 Length of metadata in bytes. Metadata Metadata is optional and encodes key-value pairs or other information. If present, it follows the header and is serialized as a byte array. The length of the metadata is specified in the header. Payload The payload represents the core message data. Its format depends on the message type: For standard messages : Encoded as a UTF-8 JSON string. For binary messages : Encoded as raw bytes with an additional 4-bit unsigned integer indicating the binary payload type. Padding To ensure byte alignment, padding bits ( 0 ) are inserted as needed. 
The total length of the message must be a multiple of 8 bits. Encoding Process Start Marker : Add a single bit set to 1 to signify the start of the message. Header Fields : Add a 1-bit flag to indicate whether the protocol version is included. If the version is included, append the 8-bit protocol version number. Add a 5-bit message type field. Add a 1-bit flag to indicate compression status. Add an 8-bit metadata length field. Metadata : Serialize metadata as a JSON object (if any). Compress the metadata if compression is enabled. Append the serialized metadata. Payload : Serialize the payload according to the message type. Compress the payload if compression is enabled. Append the serialized payload. Padding : Add 0 bits as needed to ensure the total length is a multiple of 8 bits. Decoding Process Alignment : Read bits until encountering the start marker ( 1 ). Header Fields : Read the Versioned Flag and determine if the protocol version is specified. If specified, read the 8-bit protocol version number. Read the 5-bit message type field. Read the Compressed Flag . Read the 8-bit metadata length field. Metadata : Read the specified number of bytes for metadata. Decompress if the Compressed Flag is set. Deserialize the metadata. Payload : Read the remaining bits as the payload. Decompress if the Compressed Flag is set. Deserialize the payload based on the message type. Binary Payloads The protocol provides support for binary payloads, enabling the transmission of non-textual data. Binary payloads are handled based on their designated types, which instruct the HiveMind how to process the binary content. The binary payload type is indicated in the header as a 4-bit unsigned integer after the metadata and before the payload Value Type Description 0 UNDEFINED No information provided about the binary contents. 1 RAW_AUDIO Binary content is raw audio. 2 NUMPY_IMAGE Binary content is an image represented as a numpy array (e.g., webcam picture). 
3 FILE Binary is a file to be saved; additional metadata is provided elsewhere. 4 STT_AUDIO_TRANSCRIBE Full audio sentence to perform Speech-to-Text (STT) and return transcripts. 5 STT_AUDIO_HANDLE Full audio sentence to perform STT and handle transcription immediately. 6 TTS_AUDIO Synthesized Text-to-Speech (TTS) audio to be played. \ud83d\udca1 this is how the microphone satellite streams audio to hivemind-listener Examples Serialized Message For a simple message with: Protocol version: 1 Message type: BUS No compression Metadata: {} Payload: {\"type\": \"speak\", \"data\":{\"utterance\": \"Hello\"}} The binary representation might look like this (in bit groups): 1 | 1 | 00000001 | 00001 | 0 | 00000000 | | Where: 1 (Start Marker) 1 (Versioned Flag) 00000001 (Protocol Version) 00001 (Message Type: BUS ) 0 (Compressed Flag) 00000000 (Metadata Length: 0 bytes) : Serialized metadata bytes. : Serialized payload bytes. Binary data For a binary payload with: Protocol version: 1 Message type: BINARY No compression Metadata: {} Binary Payload The binary representation might look like this (in bit groups): 1 | 1 | 00000001 | 01100 | 0 | 00000000 | | 0001 | Where: 1 (Start Marker) 1 (Versioned Flag) 00000001 (Protocol Version) 01100 (Message Type: BINARY ) 0 (Compressed Flag) 00000000 (Metadata Length: 0 bytes) : Serialized metadata bytes. 0001 (Binary Type: Raw audio) : audio bytes. More examples | | | | | | | A binarized message 1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | | An unversioned binarized message 1 | 0 | XXXXX | X | XXXXXXXX | | A binary payload message 1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | | XXXX | Compression Metrics Compression significantly reduces payload size for larger messages but is not always efficient for small messages. Benchmarks indicate a reduction of up to 50% for text-heavy payloads, while small payloads may see negligible benefits. Implementation Notes Bit-level operations are critical for compact encoding. 
Ensure precision when handling individual bits. Maintain strict alignment rules to avoid deserialization errors. Use a modular design to allow future extensions while retaining compatibility.","title":"Binarization"},{"location":"18_binarization/#binarization-protocol","text":"The HiveMind Binarization Protocol is designed to efficiently serialize and deserialize structured messages into compact binary formats for network transmission. This document provides a high-level description of the protocol, including its structure, encoding rules, and the rationale behind key design decisions. The binary format is protocol-versioned to support backward compatibility and future extensions. \ud83d\udca1 the binarization scheme allows the hivemind protocol to be implemented by just flashing a light","title":"Binarization Protocol"},{"location":"18_binarization/#protocol-versions","text":"The protocol uses an integer version number to indicate supported features and ensure compatibility between clients and servers. The current protocol version is 1 . Any change in functionality or structure requires incrementing the version number. Version-specific functionality: Version 0 : Original protocol design. No binarization, no handshake, only pre-shared crypto_key supported Version 1 : Introduces support for handshakes and binary payloads.","title":"Protocol Versions"},{"location":"18_binarization/#message-types","text":"Messages in the HiveMind protocol are categorized into various types, each serving a specific role. These types are encoded as 5-bit unsigned integers, enabling up to 32 distinct types. Examples include: Value Type Description 0 HANDSHAKE Initial connection handshake. 1 BUS Standard message bus. 2 SHARED_BUS Shared bus for multiple nodes. 3 BROADCAST Global message broadcast. 4 PROPAGATE Directed message propagation. 
12 BINARY Raw binary payload.","title":"Message Types"},{"location":"18_binarization/#compression","text":"Payloads can optionally be compressed using the zlib library. A single bit in the header indicates whether compression is applied. Compressed payloads reduce transmission size but may add slight computational overhead during encoding and decoding.","title":"Compression"},{"location":"18_binarization/#metadata-hivemeta","text":"HiveMeta is a reserved field for attaching arbitrary metadata to a message. The metadata is encoded as a byte array, prefixed by its size (in bytes). This allows for extensible features like routing hints or debug information.","title":"Metadata (HiveMeta)"},{"location":"18_binarization/#binary-message-structure","text":"The serialized binary message consists of a header and a payload. All fields are packed to maximize efficiency. The structure is as follows:","title":"Binary Message Structure"},{"location":"18_binarization/#header","text":"The header contains information about the protocol version, message type, compression, and metadata length. Field Size (bits) Description Start Marker 1 Always 1 . Helps align message boundaries. Versioned Flag 1 Indicates if protocol version is specified. Protocol Version 8 Protocol version (if Versioned Flag is 1 ). Message Type 5 Encoded message type. Compressed Flag 1 Indicates if payload is compressed. Metadata Length 8 Length of metadata in bytes.","title":"Header"},{"location":"18_binarization/#metadata","text":"Metadata is optional and encodes key-value pairs or other information. If present, it follows the header and is serialized as a byte array. The length of the metadata is specified in the header.","title":"Metadata"},{"location":"18_binarization/#payload","text":"The payload represents the core message data. Its format depends on the message type: For standard messages : Encoded as a UTF-8 JSON string. 
For binary messages : Encoded as raw bytes with an additional 4-bit unsigned integer indicating the binary payload type.","title":"Payload"},{"location":"18_binarization/#padding","text":"To ensure byte alignment, padding bits ( 0 ) are inserted as needed. The total length of the message must be a multiple of 8 bits.","title":"Padding"},{"location":"18_binarization/#encoding-process","text":"Start Marker : Add a single bit set to 1 to signify the start of the message. Header Fields : Add a 1-bit flag to indicate whether the protocol version is included. If the version is included, append the 8-bit protocol version number. Add a 5-bit message type field. Add a 1-bit flag to indicate compression status. Add an 8-bit metadata length field. Metadata : Serialize metadata as a JSON object (if any). Compress the metadata if compression is enabled. Append the serialized metadata. Payload : Serialize the payload according to the message type. Compress the payload if compression is enabled. Append the serialized payload. Padding : Add 0 bits as needed to ensure the total length is a multiple of 8 bits.","title":"Encoding Process"},{"location":"18_binarization/#decoding-process","text":"Alignment : Read bits until encountering the start marker ( 1 ). Header Fields : Read the Versioned Flag and determine if the protocol version is specified. If specified, read the 8-bit protocol version number. Read the 5-bit message type field. Read the Compressed Flag . Read the 8-bit metadata length field. Metadata : Read the specified number of bytes for metadata. Decompress if the Compressed Flag is set. Deserialize the metadata. Payload : Read the remaining bits as the payload. Decompress if the Compressed Flag is set. Deserialize the payload based on the message type.","title":"Decoding Process"},{"location":"18_binarization/#binary-payloads","text":"The protocol provides support for binary payloads, enabling the transmission of non-textual data. 
Binary payloads are handled based on their designated types, which instruct the HiveMind how to process the binary content. The binary payload type is indicated in the header as a 4-bit unsigned integer after the metadata and before the payload. Value Type Description 0 UNDEFINED No information provided about the binary contents. 1 RAW_AUDIO Binary content is raw audio. 2 NUMPY_IMAGE Binary content is an image represented as a numpy array (e.g., webcam picture). 3 FILE Binary is a file to be saved; additional metadata is provided elsewhere. 4 STT_AUDIO_TRANSCRIBE Full audio sentence to perform Speech-to-Text (STT) and return transcripts. 5 STT_AUDIO_HANDLE Full audio sentence to perform STT and handle transcription immediately. 6 TTS_AUDIO Synthesized Text-to-Speech (TTS) audio to be played. \ud83d\udca1 this is how the microphone satellite streams audio to hivemind-listener","title":"Binary Payloads"},{"location":"18_binarization/#examples","text":"","title":"Examples"},{"location":"18_binarization/#serialized-message","text":"For a simple message with: Protocol version: 1 Message type: BUS No compression Metadata: {} Payload: {\"type\": \"speak\", \"data\":{\"utterance\": \"Hello\"}} The binary representation might look like this (in bit groups): 1 | 1 | 00000001 | 00001 | 0 | 00000000 | | Where: 1 (Start Marker) 1 (Versioned Flag) 00000001 (Protocol Version) 00001 (Message Type: BUS ) 0 (Compressed Flag) 00000000 (Metadata Length: 0 bytes) : Serialized metadata bytes. 
: Serialized payload bytes.","title":"Serialized Message"},{"location":"18_binarization/#binary-data","text":"For a binary payload with: Protocol version: 1 Message type: BINARY No compression Metadata: {} Binary Payload The binary representation might look like this (in bit groups): 1 | 1 | 00000001 | 01100 | 0 | 00000000 | | 0001 | Where: 1 (Start Marker) 1 (Versioned Flag) 00000001 (Protocol Version) 01100 (Message Type: BINARY ) 0 (Compressed Flag) 00000000 (Metadata Length: 0 bytes) : Serialized metadata bytes. 0001 (Binary Type: Raw audio) : audio bytes.","title":"Binary data"},{"location":"18_binarization/#more-examples","text":" | | | | | | | A binarized message 1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | | An unversioned binarized message 1 | 0 | XXXXX | X | XXXXXXXX | | A binary payload message 1 | 1 | XXXXXXXX | XXXXX | X | XXXXXXXX | | XXXX | ","title":"More examples"},{"location":"18_binarization/#compression-metrics","text":"Compression significantly reduces payload size for larger messages but is not always efficient for small messages. Benchmarks indicate a reduction of up to 50% for text-heavy payloads, while small payloads may see negligible benefits.","title":"Compression Metrics"},{"location":"18_binarization/#implementation-notes","text":"Bit-level operations are critical for compact encoding. Ensure precision when handling individual bits. Maintain strict alignment rules to avoid deserialization errors. Use a modular design to allow future extensions while retaining compatibility.","title":"Implementation Notes"},{"location":"19_crypto/","text":"Encryption HiveMind ensures secure communication between devices by using modern cryptographic techniques. This page provides an overview of how messages are encrypted in transit, the structure of encrypted messages, and the process of encryption key generation. 
End-to-End Encryption : Messages are encrypted on the sender's device and decrypted only on the receiver's device, ensuring complete confidentiality. Mutual Authentication : The identity verification step ensures that both devices share the same credentials and trust each other. Resistance to Replay Attacks : The use of unique IVs for each message prevents attackers from reusing captured messages. Strong Key Derivation : By leveraging PBKDF2 and a shared salt, HiveMind protects against brute-force and dictionary attacks. Terminology Before diving into the details, here are key terms used in this documentation: Plaintext : Unencrypted data, the original readable content before encryption. Ciphertext : Encrypted data, unreadable without the decryption key. AES : Advanced Encryption Standard, a symmetric encryption algorithm. GCM : Galois/Counter Mode, a mode of operation for AES that provides both encryption and message authentication. IV : Initialization vector (sometimes called nonce ), a unique value used to initialize encryption and ensure message uniqueness. MAC : Message Authentication Code (sometimes called tag or Integrity Check Value ( ICV )), used to verify the authenticity and integrity of a message. Salt : A random value used during key derivation to ensure unique and secure key generation, even with repeated passwords. SHA-2 : A family of cryptographic hash functions, used for generating hash values (e.g., SHA-256 is used in HiveMind). PBKDF2 : Password-Based Key Derivation Function 2, a cryptographic function that strengthens passwords by applying hashing multiple times to derive secure keys. Overview HiveMind uses AES-GCM (Advanced Encryption Standard in Galois/Counter Mode) for authenticated encryption of messages in transit. This method provides both confidentiality (protecting the message content) and integrity (ensuring that messages have not been tampered with during transmission). 
To securely exchange messages, HiveMind leverages a key derivation and exchange mechanism that avoids directly sharing sensitive credentials, ensuring secure key generation and agreement between devices. Encryption in Transit When two devices communicate over HiveMind, all messages are encrypted before being transmitted over the network. This prevents eavesdropping and ensures that any intercepted messages are unreadable without the appropriate key. Each encrypted message contains the following components: { \"ciphertext\": \"\", \"tag\": \"\", \"nonce\": \"\" } Ciphertext : The encrypted form of the original plaintext message. Tag : A message authentication code (MAC) that ensures the integrity and authenticity of the message. Nonce (IV) : A unique initialization vector used for encryption. It ensures that each message is encrypted uniquely, even if the same key and plaintext are reused. The nonce and tag are included in the message unencrypted, while the ciphertext remains confidential. This allows the receiving device to verify and decrypt the message. Key Generation and Exchange To securely encrypt and decrypt messages, HiveMind uses a shared secret key. This key is never transmitted directly but is derived independently by each device using the following steps: 1. Handshake and Identity Verification Each device generates a hash-based subkey (HSUB) using: A randomly generated initialization vector (IV) . The user's password (or pre-shared secret). A cryptographic hash function, such as SHA-256 . Devices exchange their HSUB and IV values over the network. Upon receiving the other's HSUB , each device regenerates it locally using the received IV and its own password. If the computed and received HSUB match, the devices verify each other's identity. 2. Deriving a Common Salt A shared salt is generated by XORing the IV values exchanged during the handshake: Salt = IV_A \u2295 IV_B This salt ensures that each session has a unique basis for key derivation. 3. 
Key Derivation Both devices derive a common secret key using the following inputs: The salt from the previous step. The user's password . The key derivation process uses PBKDF2 (Password-Based Key Derivation Function 2) with HMAC-SHA256 to produce a cryptographically strong key. This approach ensures that both devices independently derive the same encryption key without directly transmitting it over the network. Secure Message Exchange Once the secret key is derived, it is used to encrypt and decrypt all messages exchanged between devices. The process is as follows: Encryption : - The sending device uses the secret key to encrypt the plaintext message using AES-GCM. - The resulting ciphertext, along with the nonce and tag , is packaged into a message and sent over the network. Decryption : - The receiving device extracts the ciphertext , nonce , and tag from the message. - Using the same secret key, it decrypts the ciphertext and verifies the message integrity using the tag .","title":"Encryption"},{"location":"19_crypto/#encryption","text":"HiveMind ensures secure communication between devices by using modern cryptographic techniques. This page provides an overview of how messages are encrypted in transit, the structure of encrypted messages, and the process of encryption key generation. End-to-End Encryption : Messages are encrypted on the sender's device and decrypted only on the receiver's device, ensuring complete confidentiality. Mutual Authentication : The identity verification step ensures that both devices share the same credentials and trust each other. Resistance to Replay Attacks : The use of unique IVs for each message prevents attackers from reusing captured messages. 
Strong Key Derivation : By leveraging PBKDF2 and a shared salt, HiveMind protects against brute-force and dictionary attacks.","title":"Encryption"},{"location":"19_crypto/#terminology","text":"Before diving into the details, here are key terms used in this documentation: Plaintext : Unencrypted data, the original readable content before encryption. Ciphertext : Encrypted data, unreadable without the decryption key. AES : Advanced Encryption Standard, a symmetric encryption algorithm. GCM : Galois/Counter Mode, a mode of operation for AES that provides both encryption and message authentication. IV : Initialization vector (sometimes called nonce ), a unique value used to initialize encryption and ensure message uniqueness. MAC : Message Authentication Code (sometimes called tag or Integrity Check Value ( ICV )), used to verify the authenticity and integrity of a message. Salt : A random value used during key derivation to ensure unique and secure key generation, even with repeated passwords. SHA-2 : A family of cryptographic hash functions, used for generating hash values (e.g., SHA-256 is used in HiveMind). PBKDF2 : Password-Based Key Derivation Function 2, a cryptographic function that strengthens passwords by applying hashing multiple times to derive secure keys.","title":"Terminology"},{"location":"19_crypto/#overview","text":"HiveMind uses AES-GCM (Advanced Encryption Standard in Galois/Counter Mode) for authenticated encryption of messages in transit. This method provides both confidentiality (protecting the message content) and integrity (ensuring that messages have not been tampered with during transmission). 
To securely exchange messages, HiveMind leverages a key derivation and exchange mechanism that avoids directly sharing sensitive credentials, ensuring secure key generation and agreement between devices.","title":"Overview"},{"location":"19_crypto/#encryption-in-transit","text":"When two devices communicate over HiveMind, all messages are encrypted before being transmitted over the network. This prevents eavesdropping and ensures that any intercepted messages are unreadable without the appropriate key. Each encrypted message contains the following components: { \"ciphertext\": \"\", \"tag\": \"\", \"nonce\": \"\" } Ciphertext : The encrypted form of the original plaintext message. Tag : A message authentication code (MAC) that ensures the integrity and authenticity of the message. Nonce (IV) : A unique initialization vector used for encryption. It ensures that each message is encrypted uniquely, even if the same key and plaintext are reused. The nonce and tag are included in the message unencrypted, while the ciphertext remains confidential. This allows the receiving device to verify and decrypt the message.","title":"Encryption in Transit"},{"location":"19_crypto/#key-generation-and-exchange","text":"To securely encrypt and decrypt messages, HiveMind uses a shared secret key. This key is never transmitted directly but is derived independently by each device using the following steps:","title":"Key Generation and Exchange"},{"location":"19_crypto/#1-handshake-and-identity-verification","text":"Each device generates a hash-based subkey (HSUB) using: A randomly generated initialization vector (IV) . The user's password (or pre-shared secret). A cryptographic hash function, such as SHA-256 . Devices exchange their HSUB and IV values over the network. Upon receiving the other's HSUB , each device regenerates it locally using the received IV and its own password. If the computed and received HSUB match, the devices verify each other's identity.","title":"1. 
Handshake and Identity Verification"},{"location":"19_crypto/#2-deriving-a-common-salt","text":"A shared salt is generated by XORing the IV values exchanged during the handshake: Salt = IV_A \u2295 IV_B This salt ensures that each session has a unique basis for key derivation.","title":"2. Deriving a Common Salt"},{"location":"19_crypto/#3-key-derivation","text":"Both devices derive a common secret key using the following inputs: The salt from the previous step. The user's password . The key derivation process uses PBKDF2 (Password-Based Key Derivation Function 2) with HMAC-SHA256 to produce a cryptographically strong key. This approach ensures that both devices independently derive the same encryption key without directly transmitting it over the network.","title":"3. Key Derivation"},{"location":"19_crypto/#secure-message-exchange","text":"Once the secret key is derived, it is used to encrypt and decrypt all messages exchanged between devices. The process is as follows: Encryption : - The sending device uses the secret key to encrypt the plaintext message using AES-GCM. - The resulting ciphertext, along with the nonce and tag , is packaged into a message and sent over the network. Decryption : - The receiving device extracts the ciphertext , nonce , and tag from the message. - Using the same secret key, it decrypts the ciphertext and verifies the message integrity using the tag .","title":"Secure Message Exchange"},{"location":"TODO/","text":"wormhole node its actually 2 nodes node 1 drops messages in place X node 2 retrieves messages from place X messages are literal hive protocol messages X is any transport layer, literally anything nodes might not know each other at all as long as they know how to retrieve stuff the objective is hiding location any master that sees node 2 just thinks it is node 1! 
implementations: - usenet anon message boards (read # post # ) storage node any node can leave a payload in a storage node + associated proof optionally encrypted (recommended) a proof is a text string + same string encrypted with \"receiver pubkey\" any node can request any (encrypted) file the storage node will send the encrypted proof the node sends the decrypted proof if both match the node proves it is the receiver the storage node sends the file note: connections to these nodes should be ephemeral, ie, nodes disconnect once the deed is done note2: these can be public and hosted by random people, if you trust PGP rendevouz node a variation of the above, imagine a scenario with a very very large hive, maybe some nodes are even public or half way across the world node fires a \"query\" hive message message contains the address of a storage node the mind should drop the answer in the response doesnt need to travel all the way back optionally include node pubkey (may have been shared out of band) node checks the pre defined storage node every timestep until it receives an answer depending on relationship with storage node it might be possible to use events instead a storage node can be a http api (see http bridge TODO) where did the answer come from?","title":"TODO"},{"location":"TODO/#wormhole-node","text":"its actually 2 nodes node 1 drops messages in place X node 2 retrieves messages from place X messages are literal hive protocol messages X is any transport layer, literally anything nodes might not know each other at all as long as they know how to retrieve stuff the objective is hiding location any master that sees node 2 just thinks it is node 1! 
implementations: - usenet anon message boards (read # post # )","title":"wormhole node"},{"location":"TODO/#storage-node","text":"any node can leave a payload in a storage node + associated proof optionally encrypted (recommended) a proof is a text string + same string encrypted with \"receiver pubkey\" any node can request any (encrypted) file the storage node will send the encrypted proof the node sends the decrypted proof if both match the node proves it is the receiver the storage node sends the file note: connections to these nodes should be ephemeral, ie, nodes disconnect once the deed is done note2: these can be public and hosted by random people, if you trust PGP","title":"storage node"},{"location":"TODO/#rendevouz-node","text":"a variation of the above, imagine a scenario with a very very large hive, maybe some nodes are even public or half way across the world node fires a \"query\" hive message message contains the address of a storage node the mind should drop the answer in the response doesnt need to travel all the way back optionally include node pubkey (may have been shared out of band) node checks the pre defined storage node every timestep until it receives an answer depending on relationship with storage node it might be possible to use events instead a storage node can be a http api (see http bridge TODO) where did the answer come from?","title":"rendevouz node"},{"location":"gpt_bridges/","text":"Exploring HiveMind Web Chat Interface and Bridges: Extending AI Capabilities In the ever-expanding landscape of AI and interconnected systems, the HiveMind framework continues to push boundaries and open up new possibilities. As part of the HiveMind ecosystem, the HiveMind Web Chat Interface and HiveMind Bridges offer exciting avenues for integrating AI capabilities into various platforms and enabling seamless communication with AI assistants. 
In this blog post, we will delve into the world of HiveMind Bridges and explore how the HiveMind Web Chat Interface enhances user experiences. HiveMind Bridges: Connecting the Dots HiveMind Bridges serve as connectors between external platforms and the HiveMind network. These bridges act as terminals, enabling communication with the HiveMind infrastructure. With the support of various protocols such as Matrix, Mattermost Bridge , HackChat Bridge , DeltaChat Bridge , email, and more, HiveMind Bridges extend the reach of AI assistants and allow them to interact with users through familiar channels. Each bridge behaves like a secure intermediary, ensuring the safety and privacy of communications. They maintain their own session and permissions, allowing them to answer specific users or adhere to custom rules defined within the bridge. This flexibility makes it possible to integrate AI assistants seamlessly into existing communication platforms, expanding their capabilities and enhancing user interactions. HiveMind Web Chat Interface: Unleashing AI Potential The HiveMind Webchat Interface, powered by HiveMindJs provides a user-friendly and versatile solution for connecting to the HiveMind network. This JavaScript library enables direct communication with the HiveMind infrastructure when access keys are available in the browser environment. For instance, a login page with HiveMind access keys can leverage HiveMindJS to establish a secure connection, granting users access to AI functionalities seamlessly. However, there may be situations where exposing HiveMind login keys in the browser is not desirable for security reasons. In such cases, a HiveMind Bridge comes into play. Acting as a middle layer, the bridge node safely connects to the HiveMind network on a server , while the browser interacts solely with the bridge. This architecture ensures that sensitive information remains protected, and communication with the HiveMind is conducted securely. 
Integrating a Chatbot with Existing Business Platforms Let's consider a practical example of leveraging the HiveMind ecosystem to integrate a chatbot into an existing business platform. Suppose you have a thriving online platform where users engage with your products or services. By hosting HiveMind-Core, Ovos-Core, and a HiveMind Bridge, you can seamlessly integrate a chatbot powered by AI into your platform. The HiveMind Bridge, acting as the intermediary, facilitates communication between your platform and the HiveMind network. Users can interact with the chatbot, ask questions, seek assistance, or perform specific actions directly from within your platform. The chatbot, backed by the extensive capabilities of the HiveMind infrastructure, can provide personalized responses, offer recommendations, and enhance user experiences. By incorporating a chatbot into your existing platform, you streamline customer support, automate certain processes, and deliver a more interactive and efficient user experience. The HiveMind ecosystem, with its powerful AI capabilities and flexible bridges, empowers businesses to leverage AI technologies seamlessly, unlocking new opportunities for growth and innovation. Conclusion The HiveMind Web Chat Interface and HiveMind Bridges revolutionize the way AI assistants integrate into diverse platforms. Through bridges, AI systems gain access to popular communication channels, while the HiveMind Web Chat Interface facilitates direct communication with the HiveMind network. With the ability to securely connect to the HiveMind infrastructure and extend AI capabilities, businesses can create immersive, interactive, and intelligent experiences for their users. As the HiveMind ecosystem continues to evolve, we anticipate even more innovative use cases and seamless integrations. The future holds immense potential for expanding AI's reach and enhancing human-AI collaboration. 
","title":"Exploring HiveMind Web Chat Interface and Bridges: Extending AI Capabilities"},{"location":"gpt_bridges/#exploring-hivemind-web-chat-interface-and-bridges-extending-ai-capabilities","text":"In the ever-expanding landscape of AI and interconnected systems, the HiveMind framework continues to push boundaries and open up new possibilities. As part of the HiveMind ecosystem, the HiveMind Web Chat Interface and HiveMind Bridges offer exciting avenues for integrating AI capabilities into various platforms and enabling seamless communication with AI assistants. In this blog post, we will delve into the world of HiveMind Bridges and explore how the HiveMind Web Chat Interface enhances user experiences.","title":"Exploring HiveMind Web Chat Interface and Bridges: Extending AI Capabilities"},{"location":"gpt_bridges/#hivemind-bridges-connecting-the-dots","text":"HiveMind Bridges serve as connectors between external platforms and the HiveMind network. These bridges act as terminals, enabling communication with the HiveMind infrastructure. With the support of various protocols such as Matrix, Mattermost Bridge , HackChat Bridge , DeltaChat Bridge , email, and more, HiveMind Bridges extend the reach of AI assistants and allow them to interact with users through familiar channels. Each bridge behaves like a secure intermediary, ensuring the safety and privacy of communications. They maintain their own session and permissions, allowing them to answer specific users or adhere to custom rules defined within the bridge. This flexibility makes it possible to integrate AI assistants seamlessly into existing communication platforms, expanding their capabilities and enhancing user interactions.","title":"HiveMind Bridges: Connecting the Dots"},{"location":"gpt_bridges/#hivemind-web-chat-interface-unleashing-ai-potential","text":"The HiveMind Webchat Interface, powered by HiveMindJs provides a user-friendly and versatile solution for connecting to the HiveMind network. 
This JavaScript library enables direct communication with the HiveMind infrastructure when access keys are available in the browser environment. For instance, a login page with HiveMind access keys can leverage HiveMindJS to establish a secure connection, granting users access to AI functionalities seamlessly. However, there may be situations where exposing HiveMind login keys in the browser is not desirable for security reasons. In such cases, a HiveMind Bridge comes into play. Acting as a middle layer, the bridge node safely connects to the HiveMind network on a server , while the browser interacts solely with the bridge. This architecture ensures that sensitive information remains protected, and communication with the HiveMind is conducted securely.","title":"HiveMind Web Chat Interface: Unleashing AI Potential"},{"location":"gpt_bridges/#integrating-a-chatbot-with-existing-business-platforms","text":"Let's consider a practical example of leveraging the HiveMind ecosystem to integrate a chatbot into an existing business platform. Suppose you have a thriving online platform where users engage with your products or services. By hosting HiveMind-Core, Ovos-Core, and a HiveMind Bridge, you can seamlessly integrate a chatbot powered by AI into your platform. The HiveMind Bridge, acting as the intermediary, facilitates communication between your platform and the HiveMind network. Users can interact with the chatbot, ask questions, seek assistance, or perform specific actions directly from within your platform. The chatbot, backed by the extensive capabilities of the HiveMind infrastructure, can provide personalized responses, offer recommendations, and enhance user experiences. By incorporating a chatbot into your existing platform, you streamline customer support, automate certain processes, and deliver a more interactive and efficient user experience. 
The HiveMind ecosystem, with its powerful AI capabilities and flexible bridges, empowers businesses to leverage AI technologies seamlessly, unlocking new opportunities for growth and innovation.","title":"Integrating a Chatbot with Existing Business Platforms"},{"location":"gpt_bridges/#conclusion","text":"The HiveMind Web Chat Interface and HiveMind Bridges revolutionize the way AI assistants integrate into diverse platforms. Through bridges, AI systems gain access to popular communication channels, while the HiveMind Web Chat Interface facilitates direct communication with the HiveMind network. With the ability to securely connect to the HiveMind infrastructure and extend AI capabilities, businesses can create immersive, interactive, and intelligent experiences for their users. As the HiveMind ecosystem continues to evolve, we anticipate even more innovative use cases and seamless integrations. The future holds immense potential for expanding AI's reach and enhancing human-AI collaboration.","title":"Conclusion"},{"location":"gpt_eli5/","text":"ELI5 Imagine you and your friends want to play a game together, but you're all in different rooms. To make it possible to play together, you need to follow some rules and communicate with each other. That's what the HiveMind protocol does for voice assistants. The HiveMind protocol is like a set of rules that voice assistants use to talk to each other and work together. It helps them understand each other and coordinate their actions. Just like in the game, the voice assistants need to agree on how they will communicate and what they can do together. For example, let's say you and your friend have voice assistants. They use the HiveMind protocol to talk to each other. Your voice assistant can send messages to your friend's voice assistant, and vice versa. These messages can be commands, questions, or even just saying \"hello.\" The HiveMind protocol also helps the voice assistants recognize each other. 
They have special identities that let them know who is who. It's like having name tags so they can say, \"Hi, I'm Voice Assistant A!\" or \"Nice to meet you, Voice Assistant B!\" To follow the HiveMind protocol, the voice assistants use a special language that they all understand. It's like having a secret code that only they know. This language helps them communicate in a way that makes sense to them. The HiveMind protocol also includes some safety measures. Just like you have rules to keep you safe when playing games, the voice assistants have rules too. They make sure that only trusted voice assistants can join and participate. They use things like passwords and encryption to keep things secure. So, the HiveMind protocol is like a set of rules and a special language that voice assistants use to talk to each other, understand each other, and work together. It helps them play the game of being voice assistants in a fun and safe way!","title":"ELI5"},{"location":"gpt_eli5/#eli5","text":"Imagine you and your friends want to play a game together, but you're all in different rooms. To make it possible to play together, you need to follow some rules and communicate with each other. That's what the HiveMind protocol does for voice assistants. The HiveMind protocol is like a set of rules that voice assistants use to talk to each other and work together. It helps them understand each other and coordinate their actions. Just like in the game, the voice assistants need to agree on how they will communicate and what they can do together. For example, let's say you and your friend have voice assistants. They use the HiveMind protocol to talk to each other. Your voice assistant can send messages to your friend's voice assistant, and vice versa. These messages can be commands, questions, or even just saying \"hello.\" The HiveMind protocol also helps the voice assistants recognize each other. They have special identities that let them know who is who. 
It's like having name tags so they can say, \"Hi, I'm Voice Assistant A!\" or \"Nice to meet you, Voice Assistant B!\" To follow the HiveMind protocol, the voice assistants use a special language that they all understand. It's like having a secret code that only they know. This language helps them communicate in a way that makes sense to them. The HiveMind protocol also includes some safety measures. Just like you have rules to keep you safe when playing games, the voice assistants have rules too. They make sure that only trusted voice assistants can join and participate. They use things like passwords and encryption to keep things secure. So, the HiveMind protocol is like a set of rules and a special language that voice assistants use to talk to each other, understand each other, and work together. It helps them play the game of being voice assistants in a fun and safe way!","title":"ELI5"}]} \ No newline at end of file diff --git a/search/worker.js b/search/worker.js new file mode 100644 index 0000000..8628dbc --- /dev/null +++ b/search/worker.js @@ -0,0 +1,133 @@ +var base_path = 'function' === typeof importScripts ? '.' 
: '/search/'; +var allowSearch = false; +var index; +var documents = {}; +var lang = ['en']; +var data; + +function getScript(script, callback) { + console.log('Loading script: ' + script); + $.getScript(base_path + script).done(function () { + callback(); + }).fail(function (jqxhr, settings, exception) { + console.log('Error: ' + exception); + }); +} + +function getScriptsInOrder(scripts, callback) { + if (scripts.length === 0) { + callback(); + return; + } + getScript(scripts[0], function() { + getScriptsInOrder(scripts.slice(1), callback); + }); +} + +function loadScripts(urls, callback) { + if( 'function' === typeof importScripts ) { + importScripts.apply(null, urls); + callback(); + } else { + getScriptsInOrder(urls, callback); + } +} + +function onJSONLoaded () { + data = JSON.parse(this.responseText); + var scriptsToLoad = ['lunr.js']; + if (data.config && data.config.lang && data.config.lang.length) { + lang = data.config.lang; + } + if (lang.length > 1 || lang[0] !== "en") { + scriptsToLoad.push('lunr.stemmer.support.js'); + if (lang.length > 1) { + scriptsToLoad.push('lunr.multi.js'); + } + if (lang.includes("ja") || lang.includes("jp")) { + scriptsToLoad.push('tinyseg.js'); + } + for (var i=0; i < lang.length; i++) { + if (lang[i] != 'en') { + scriptsToLoad.push(['lunr', lang[i], 'js'].join('.')); + } + } + } + loadScripts(scriptsToLoad, onScriptsLoaded); +} + +function onScriptsLoaded () { + console.log('All search scripts loaded, building Lunr index...'); + if (data.config && data.config.separator && data.config.separator.length) { + lunr.tokenizer.separator = new RegExp(data.config.separator); + } + + if (data.index) { + index = lunr.Index.load(data.index); + data.docs.forEach(function (doc) { + documents[doc.location] = doc; + }); + console.log('Lunr pre-built index loaded, search ready'); + } else { + index = lunr(function () { + if (lang.length === 1 && lang[0] !== "en" && lunr[lang[0]]) { + this.use(lunr[lang[0]]); + } else if (lang.length > 1) { 
+        this.use(lunr.multiLanguage.apply(null, lang)); // spread operator not supported in all browsers: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator#Browser_compatibility
+      }
+      this.field('title');
+      this.field('text');
+      this.ref('location');
+
+      for (var i=0; i < data.docs.length; i++) {
+        var doc = data.docs[i];
+        this.add(doc);
+        documents[doc.location] = doc;
+      }
+    });
+    console.log('Lunr index built, search ready');
+  }
+  allowSearch = true;
+  postMessage({config: data.config});
+  postMessage({allowSearch: allowSearch});
+}
+
+function init () {
+  var oReq = new XMLHttpRequest();
+  oReq.addEventListener("load", onJSONLoaded);
+  var index_path = base_path + '/search_index.json';
+  if( 'function' === typeof importScripts ){
+    index_path = 'search_index.json';
+  }
+  oReq.open("GET", index_path);
+  oReq.send();
+}
+
+function search (query) {
+  if (!allowSearch) {
+    console.error('Assets for search still loading');
+    return;
+  }
+
+  var resultDocuments = [];
+  var results = index.search(query);
+  for (var i=0; i < results.length; i++){
+    var result = results[i];
+    // declare with var so `doc` does not leak into the global scope
+    var doc = documents[result.ref];
+    doc.summary = doc.text.substring(0, 200);
+    resultDocuments.push(doc);
+  }
+  return resultDocuments;
+}
+
+if( 'function' === typeof importScripts ) {
+  onmessage = function (e) {
+    if (e.data.init) {
+      init();
+    } else if (e.data.query) {
+      postMessage({ results: search(e.data.query) });
+    } else {
+      console.error("Worker - Unrecognized message: " + e);
+    }
+  };
+}
diff --git a/sitemap.xml b/sitemap.xml
new file mode 100644
index 0000000..b67bedf
--- /dev/null
+++ b/sitemap.xml
@@ -0,0 +1,119 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/00_index/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/01_quickstart/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/02_terminology/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/03_pairing/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/04_plugins/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/04_protocol/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/05_presence/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/06_skills_server/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/06_sound_server/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/07_homeassistant/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/07_micsat/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/07_voice_relay/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/07_voicesat/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/08_persona/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/09_matrix/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/10_deltachat/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/11_devs/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/12_handshake/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/13_mycroft/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/14_localhive/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/15_nested/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/16_permissions/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/17_database/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/18_binarization/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/19_crypto/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/TODO/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/gpt_bridges/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+    <url>
+        <loc>https://jarbashivemind.github.io/HiveMind-community-docs/gpt_eli5/</loc>
+        <lastmod>2024-12-24</lastmod>
+    </url>
+</urlset>
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
new file mode 100644
index 0000000..2eefb04
Binary files /dev/null and b/sitemap.xml.gz differ