index.md: Remove duplicate "TF LAVA Instance" section

This section appeared twice: once near the beginning and once near the
end of the doc. The location at the beginning of the doc is more
logical - we should describe the official production instance before
describing how to add new boards to it or set up a local development
copy. However, the 2nd copy appeared to have some more up-to-date
changes, so these changes were applied to the 1st copy to not be lost.

More info is available in the TFC-187 ticket.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Change-Id: Ia666b47296aa14e362a15bdb37181e9ac5843142
diff --git a/docs/index.md b/docs/index.md
index 0610e63..118247b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -602,9 +602,7 @@
 
 # TF LAVA Instance
 
-The TF LAVA instance can be found at http://tf.validation.linaro.org.
-
-LAVA instance for the Trusted Firmware project is set up in Linaro Harston LAB. It consists of lava-master running on a hosted bare metal server, lava-dispatcher running on the same server. Additional dispatchers are deployed using Raspberry Pi 4 hardware. Note that this is required by certain classes of hardware not being differentiable before the OS boots, therefore a single device per dispatcher allows addressing these devices via an unique dispatcher (per device).
+The [LAVA instance](https://tf.validation.linaro.org) for the Trusted Firmware project is set up in Linaro Harston LAB. It consists of lava-master running on a hosted bare metal server, lava-dispatcher running on the same server. Additional dispatchers are deployed using Raspberry Pi 4 hardware. Note that this is required by certain classes of hardware not being differentiable before the OS boots, therefore a single device per dispatcher allows addressing these devices via a unique dispatcher (per device).
 
 TF LAVA instance settings are stored in salt and ansible repositories:
 * Salt repository: https://git.linaro.org/lava/lava-lab.git/ 
@@ -613,7 +611,7 @@
    * https://git.linaro.org/lab-cambridge/lab-dns.git/
    * https://git.linaro.org/lab-cambridge/lab-dhcp.git/
 
-# TF LAVA instance replication
+## TF LAVA instance replication
 
 TF instance partially relies on Linaro infrastructure. Linaro’s login service (based on LDAP) is used for users authentication and logging into the TF LAVA instance. Therefore it’s not possible to replicate identical LAVA instance accounts outside of Linaro’s infrastructure. Apart from that, all configurations are stored in salt or ansible repositories. Replicating the remaining part of the instance can be done using salt and ansible tools with a new set of inventory variables.
 
@@ -680,7 +678,8 @@
 
 ## Current list of available devices
 
-Up-to-date list of devices is available from the LAVA web UI. A simplified view shows only the device types. Currently, TF LAVA instance has Juno, MPS2, Musca B1 and QEMU devices.
+Up-to-date list of devices is available from the [LAVA web UI](https://tf.validation.linaro.org/scheduler/alldevices/active). A [simplified view](https://tf.validation.linaro.org/scheduler/) shows only the device types. Currently, TF LAVA instance has Juno, MPS2, Musca B1 and QEMU devices.
+
 
 # Local LAVA instance set up
 
@@ -831,81 +830,6 @@
 1. Add Watchers
    * It's probably a good idea to add Don Harbin to the Watchers on the ticket.
 
-# TF LAVA instance
-The [LAVA instance](https://tf.validation.linaro.org) for the Trusted Firmware project is set up in Linaro Harston LAB. It consists of lava-master running on a hosted bare metal server, lava-dispatcher running on the same server. Additional dispatchers are deployed using Raspberry Pi 4 hardware. Note that this is required by certain classes of hardware not being differentiable before the OS boots, therefore a single device per dispatcher allows addressing these devices via an unique dispatcher (per device).
-
-TF LAVA instance settings are stored in salt and ansible repositories:
-* Salt repository: https://git.linaro.org/lava/lava-lab.git/ 
-* Ansible repositories:
-   * https://git.linaro.org/lab-cambridge/ansible-lab.git/
-   * https://git.linaro.org/lab-cambridge/lab-dns.git/
-   * https://git.linaro.org/lab-cambridge/lab-dhcp.git/
-
-## TF LAVA instance replication
-
-TF instance partially relies on Linaro infrastructure. Linaro’s login service (based on LDAP) is used for users authentication and logging into the TF LAVA instance. Therefore it’s not possible to replicate identical LAVA instance accounts outside of Linaro’s infrastructure. Apart from that, all configurations are stored in salt or ansible repositories. Replicating the remaining part of the instance can be done using salt and ansible tools with a new set of inventory variables.
-
-Before an instance is ready various ansible playbooks need to be run and, for LAVA set ups, salt needs to be run.
-
-For ansible, you need to go on deb-ansible host (ssh root@192.168.128.15). As root:
-
-```
-	# (cd /srv/lava-lab; git pull)
-	# cd /etc/ansible/playbooks
-	# ansible-playbook -i ../inventory/tf lava-lab.yml 
-```
-
-The following playbooks are used to configure all the relevant parts:
-* lab_sssd_auth.yml file: enable LDAP authentication
-* lab_snmp_enable.yml file:  enable SNMP, and non-free/contrib apt sources (needed for working SNMP set up with APC PDUs )
-* lab_docker.yml file: install docker apt repository and docker service itself
-* lab_aws_client.yml file: enable AWS authentication with AWS to preload docker images
-* lab_lava_repo.yml file: add LAVA apt repository
-* dhcp_tf.yml file: for the static leases and general DHCP server configuration
-
-Installing LAVA ( worker and master ) is a manual process. After that, the lava-lab.yml file takes care of setting up the correct device dictionaries, device types and health checks as configured in the separate [lava-lab repository](https://git.linaro.org/lava/lava-lab.git).
-
-Until the salt migration to ansible is complete you will need to go on tf-master.tflab host (ssh root@10.88.16.10). As root:
-```
-	# (cd /srv/lava-lab; git pull)
-	# salt ‘*’ state.highstate
-```
-Note: on a brand new installation, you will need to run the ‘salt’ command twice. It’s due to an ordering problem in the salt state configuration. It will be fixed by the ansible migration.
-
-## LAVA Master
-
-LAVA Master and dispatchers run the Debian distribution (at the time of writing, Debian 10 Buster). LAVA packages are installed from apt.lavasoftware.org repository. On top of the basic installation, LAB specific configuration is applied with ansible.
-Note: the installation of lava-server is a manual process (and still a work in progress), while other configurations are automated and described in the ansible playbooks above.
-
-## LAVA Dispatchers
-
-TF instance uses 2 types of dispatchers:
-* x86 dispatcher running on the same hardware as LAVA master. This dispatcher hosts Fast Models (FVP), QEMU, and Juno devices.
-* Arm dispatchers running on Raspberry Pi 4 hardware. This dispatcher hosts MPS2 and Musca B1 devices.
-
-LAVA dispatchers setup is described in the LAVA documentation: https://lava.readthedocs.io/en/latest/admin/advanced-tutorials/deploying-rpi4b-as-worker/ 
-
-## Upgrades
-
-Upgrades of LAVA software are performed after each LAVA release. All dispatchers and master have to run the same version of LAVA software.
-
-## LAVA instance changes
-
-All the changes are done by the LAB staff. They should be requested as Jira tickets (https://projects.linaro.org/secure/CreateIssue.jspa) with the following fields:
-* Project: LSS (LAB & System Software)
-* Type: Ticket
-* Component: LAB
-* Client Stakeholder: Trusted Firmware
-
-Most common cases where ticket is required include:
-* Adding new device to the LAVA instance
-* Changing firmware on the boards that require manual action
-* Adding or lifting access limitations
-
-## Current list of available devices
-
-Up-to-date list of devices is available from the [LAVA web UI](https://tf.validation.linaro.org/scheduler/alldevices/active). A [simplified view](https://tf.validation.linaro.org/scheduler/) shows only the device types. Currently, TF LAVA instance has Juno, MPS2, Musca B1 and QEMU devices.
-
 # SQUAD
 
 SQUAD is a database storing test results from LAVA jobs and providing a dashboard where results can be compared across CI jobs and metrics generated.