
linting / pre-commit

Fred Clausen 2023-10-10 09:25:53 -06:00
parent f7ec26234d
commit 4657b07fce
20 changed files with 411 additions and 323 deletions

1
.dictionary.txt Normal file

@ -0,0 +1 @@
crate

View file

@ -2,10 +2,10 @@ name: Cancelling Duplicates
on:
workflow_run:
workflows:
- 'Deploy to Docker Hub'
- 'Check Linting'
- 'Tests'
types: ['requested']
- "Deploy to Docker Hub"
- "Check Linting"
- "Tests"
types: ["requested"]
jobs:
cancel-duplicate-workflow-runs:
@ -18,4 +18,3 @@ jobs:
cancelMode: allDuplicates
token: ${{ secrets.GITHUB_TOKEN }}
sourceRunId: ${{ github.event.workflow_run.id }}

View file

@ -0,0 +1,23 @@
name: Update pre-commit hooks
on:
workflow_dispatch:
schedule:
- cron: 0 0 * * *
jobs:
update:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4.1.0
with:
fetch-depth: 0
- uses: vrslev/pre-commit-autoupdate@v1.0.0
- uses: peter-evans/create-pull-request@v5
with:
branch: pre-commit-autoupdate
title: "chore(deps): Update pre-commit hooks"
commit-message: "chore(deps): Update pre-commit hooks"
body: Update pre-commit hooks
labels: dependencies
delete-branch: True

65
.pre-commit-config.yaml Normal file

@ -0,0 +1,65 @@
repos:
# lint yaml, line and whitespace
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- id: requirements-txt-fixer
- id: mixed-line-ending
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
# lint the dockerfiles
- repo: https://github.com/hadolint/hadolint
rev: v2.12.1-beta
hooks:
- id: hadolint
# prettier
- repo: https://github.com/pre-commit/mirrors-prettier
rev: "v3.0.3" # Use the sha / tag you want to point at
hooks:
- id: prettier
types_or: [file, bash, sh, javascript, jsx, ts, tsx]
additional_dependencies:
- prettier@2.5.1
exclude: ^(Dockerfile*)
- repo: https://github.com/codespell-project/codespell.git
rev: "v2.2.5" # Use the sha / tag you want to point at
hooks:
- id: codespell
types: [text]
args: [--ignore-words=.dictionary.txt]
exclude: ^(Dockerfile*)
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.9.0.6
hooks:
- id: shellcheck
- repo: https://github.com/sirosen/check-jsonschema
rev: 0.27.0
hooks:
- id: check-github-actions
- id: check-github-workflows
- repo: https://github.com/doublify/pre-commit-rust
rev: v1.0
hooks:
- id: fmt
- id: cargo-check
# lint python formatting
- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: "6.1.0" # pick a git hash / tag to point to
hooks:
- id: flake8
args: ["--extend-ignore=W503,W504,E501"]

View file

@ -43,7 +43,7 @@ services:
Prometheus will store a lot of data, and Grafana will do a lot of data queries. As a result, it would be better if you run these containers on a different system than your feeder Raspberry Pi. This will leave your Pi focused on data collection and processing, and unbothered by the CPU and Disk IO load that Prometheus/Grafana will cause.
You *can* do it on a single system. We're assuming below that you are not. If you do it on a single system, then you can combine the `docker-compose.yml` components in a single file
You _can_ do it on a single system. We're assuming below that you are not. If you do it on a single system, then you can combine the `docker-compose.yml` components in a single file
## Steps to install Prometheus, Grafana, and the Grafana Dashboard
@ -52,10 +52,10 @@ You *can* do it on a single system. We're assuming below that you are not. If yo
- Edit your Ultrafeeder's `docker-compose.yml` file and ensure that the following is set for the `ultrafeeder` service:
```yaml
environment:
environment:
- PROMETHEUS_ENABLE=true
- TAR1090_ENABLE_AC_DB=true
ports:
ports:
- 9273-9274:9273-9274
```
@ -71,7 +71,7 @@ cd /opt/grafana
cat > docker-compose.yml
```
Now paste in the following text *):
Now paste in the following text \*):
<details>
<summary>&lt;&dash;&dash; Click the arrow to see the <code>docker-compose.yml</code> text</summary>
@ -149,7 +149,7 @@ services:
</details>
*) The volume definition structure is written this way purposely to ensure that the containers can place files in the persistent directories. Do not try to "directly" map volumes (`/opt/grafana/grafana/appdata:/var/lib/grafana`).
\*) The volume definition structure is written this way purposely to ensure that the containers can place files in the persistent directories. Do not try to "directly" map volumes (`/opt/grafana/grafana/appdata:/var/lib/grafana`).
You should be able to see the following directories:
@ -178,9 +178,9 @@ docker compose up -d
This will add the following to the bottom of the `prometheus.yml` file:
```yaml
- job_name: 'ultrafeeder'
- job_name: "ultrafeeder"
static_configs:
- targets: ['ip_xxxxxxx:9273', 'ip_xxxxxxx:9274']
- targets: ["ip_xxxxxxx:9273", "ip_xxxxxxx:9274"]
```
(If you screw this up, **do NOT** re-run the command. Instead, try `sudo nano /opt/grafana/prometheus/config/prometheus.yml` and fix it that way.)
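For orientation, a minimal `prometheus.yml` with this job in place would look roughly like the sketch below; the `scrape_interval` is just an example value, and your file will contain whatever else was already there:

```yaml
global:
  scrape_interval: 15s # example value; adjust to taste

scrape_configs:
  - job_name: "ultrafeeder"
    static_configs:
      - targets: ["ip_xxxxxxx:9273", "ip_xxxxxxx:9274"]
```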
@ -198,9 +198,9 @@ docker compose up -d
This will add the following to the bottom of the `prometheus.yml` file:
```yaml
- job_name: 'dump978'
- job_name: "dump978"
static_configs:
- targets: ['ip_xxxxxxx:9274']
- targets: ["ip_xxxxxxx:9274"]
```
(If you screw this up, **do NOT** re-run the command. Instead, try `sudo nano /opt/grafana/prometheus/config/prometheus.yml` and fix it that way.)
@ -222,10 +222,10 @@ After you have logged into the `grafana` console the following manual steps are
2. Click `Prometheus` from the list of options provided
3. Input or select the following options; if an option is not listed below, do not input anything for it:
Option | Input
------------- | -------------
Name | ultrafeeder
URL | `http://prometheus:9090/`
| Option | Input |
| ------ | ------------------------- |
| Name | ultrafeeder |
| URL | `http://prometheus:9090/` |
Clicking `Save & Test` should return a green message indicating success. The dashboard can now be imported with the following steps:
@ -261,12 +261,12 @@ First execute all steps above, and then continue here.
### Step 1: Edit your Prometheus config file so the `job_name`s look like this
```yaml
- job_name: 'heerlen'
- job_name: "heerlen"
static_configs:
- targets: ['10.0.0.100:9273', '10.0.0.100:9274']
- job_name: 'trenton'
- targets: ["10.0.0.100:9273", "10.0.0.100:9274"]
- job_name: "trenton"
static_configs:
- targets: ['10.0.0.101:9273', '10.0.0.101:9274']
- targets: ["10.0.0.101:9273", "10.0.0.101:9274"]
```
Here, `10.0.0.100` is the IP address of the `heerlen` station, and `10.0.0.101` is the IP address of the `trenton` station. Yours will be different. Please keep the ports as you mapped them for Ultrafeeder in each instance. You should have a `- job_name` block for each ultrafeeder instance.

105
README.md

@ -110,7 +110,7 @@ The general principle behind the port numbering, is:
- `80` contains the Tar1090 web interface
| Port | Details |
|------|---------|
| --------------------------- | -------------------------------------------------- |
| `30001/tcp` | Raw protocol input |
| `30002/tcp` | Raw protocol output |
| `30003/tcp` | SBS/Basestation protocol output |
@ -130,7 +130,7 @@ The general principle behind the port numbering, is:
Any of these ports can be made available to the host system by using the `ports:` directive in your `docker-compose.yml`. The container's web interface is rendered to port `80` in the container. This can be mapped to a port on the host using the docker-compose `ports` directive. In the example [`docker-compose.yml`](docker-compose.yml) file, the container's Tar1090 interface is mapped to `8080` on the host system, and ports `9273-9274` are exposed as-is:
```yaml
ports:
ports:
- 8080:80 # to expose the web interface
- 9273-9274:9273-9274 # to expose the statistics interface to Prometheus
```
@ -161,10 +161,10 @@ Note:
You need to make sure that the USB device can be accessed by the container. The best way to do so, is by adding the following to your `docker-compose.yml` file:
```yaml
device_cgroup_rules:
- 'c 189:* rwm'
...
volumes:
device_cgroup_rules:
- "c 189:* rwm"
---
volumes:
- /dev:/dev:ro
```
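Put together, a service definition using this approach looks roughly like the sketch below; the service name is just an example and the image name follows the repo's published image, so adjust both to your own setup:

```yaml
services:
  ultrafeeder:
    image: ghcr.io/sdr-enthusiasts/docker-adsb-ultrafeeder
    device_cgroup_rules:
      - "c 189:* rwm" # character-device major 189 covers USB devices
    volumes:
      - /dev:/dev:ro # expose host devices read-only; the cgroup rule above grants access
```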
@ -177,7 +177,7 @@ The advantage of doing this (over simply adding a `device:` directive pointing a
The following parameters must be set (mandatory) for the container to function:
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ---------------------- | -------------------------------------------------------------------------------------------------------------- | ------- |
| `LAT` or `READSB_LAT` | The latitude of your antenna. Use either parameter, but not both | |
| `LONG` or `READSB_LON` | The longitude of your antenna. Use either parameter, but not both | |
| `ALT` or `READSB_ALT` | The altitude of your antenna above sea level. For example, `15m` or `45ft` | |
@ -186,7 +186,7 @@ The following parameters must be set (mandatory) for the container to function:
##### Optional Parameters
| Variable | Description | Controls which `readsb` option | Default |
|----------|-------------|--------------------------------|---------|
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | --------- |
| `ENABLE_TIMELAPSE1090` | Optional / Legacy. Set to `true` to enable timelapse1090. Once enabled, can be accessed via <http://dockerhost:port/timelapse/>. | Unset |
| `READSB_EXTRA_ARGS` | Optional, allows to specify extra parameters for readsb | Unset |
| `READSB_DEBUG` | Optional, used to set debug mode. `n`: network, `P`: CPR, `S`: speed check | Unset |
@ -221,7 +221,7 @@ If you want to connect your SDR to the container, here's how to do that:
##### Mandatory parameters
| Variable | Description | Controls which `readsb` option | Default |
|----------|-------------|--------------------------------|---------|
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | -------------- |
| `READSB_DEVICE_TYPE` | If using an SDR, set this to `rtlsdr`, `modesbeast`, `gnshulc` depending on the model of your SDR. If not using an SDR, leave un-set. | `--device-type=<type>` | Unset |
| `READSB_RTLSDR_DEVICE` | Select device by serial number. | `--device=<serial>` | Unset |
| `READSB_BEAST_SERIAL` | only when type `modesbeast` or `gnshulc` is used: Path to Beast serial device. | `--beast-serial=<path>` | `/dev/ttyUSB0` |
@ -229,7 +229,7 @@ If you want to connect your SDR to the container, here's how to do that:
##### Optional/Additional Parameters
| Variable | Description | Controls which `readsb` option | Default |
|----------|-------------|--------------------------------|---------|
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | -------- |
| `READSB_GAIN` | Set gain (in dB). Use `autogain` to have the container determine an appropriate gain, more on this below. | `--gain=<db>` | Max gain |
| `READSB_RTLSDR_PPM` | Set oscillator frequency correction in PPM. See [Estimating PPM](https://github.com/sdr-enthusiasts/docker-readsb-protobuf/#estimating-ppm) | `--ppm=<correction>` | Unset |
@ -251,7 +251,7 @@ We recommend running the initial period during times when there are a lot of pla
Although not recommended, you can change the measurement intervals and low/high cutoffs with these parameters:
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------- |
| `READSB_AUTOGAIN_INITIAL_TIMEPERIOD` | How long the Initial Time Period should last (in seconds) | `7200` |
| `READSB_AUTOGAIN_INITIAL_INTERVAL` | The measurement interval to optimize gain during the initial period of 90 minutes (in seconds) | `300` |
| `READSB_AUTOGAIN_SUBSEQUENT_INTERVAL` | The measurement interval to optimize gain during the subsequent period (in seconds) | `86400` |
@ -261,7 +261,7 @@ Although not recommended, you can change the measurement intervals and low/high
If you need to reset AutoGain and start over determining the gain, you can do so with this command:
```bash
docker exec -it ultrafeeder /usr/local/bin/autogain1090 reset
docker exec -it ultrafeeder /usr/local/bin/autogain1090 reset
```
#### Connecting to external ADSB data sources
@ -311,7 +311,7 @@ In the above configuration strings:
##### Networking parameters
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `BEASTHOST` | IP/Hostname of a Mode-S/Beast provider (`dump1090`/`readsb`) | |
| `BEASTPORT` | TCP port number of Mode-S/Beast provider (`dump1090`/`readsb`) | `30005` |
| `MLATHOST` | Legacy parameter. IP/Hostname of an MLAT provider (`mlat-client`). Note - using this parameter will not make the MLAT data part of the consolidated mlathub. The preferred way of ingesting MLAT results is using the `mlathub` functionality of the container, see below for details | |
@ -322,14 +322,14 @@ In the above configuration strings:
There are several aggregators, both non-profit and commercial, that can directly be sent data from ultrafeeder without the need for an additional feeder container. We have added them in the example `docker-compose.yml` snippet above. Here is a partial list of these aggregators. All of them use the `beast_reduce_plus` format for feeding ADSB data, and `mlat-client` for feeding MLAT:
| Name | (C)ommercial/<br/>(N)on-profit | Description | Feed details |
|------|---------------------------|-------------|--------------|
| Airplanes.live | N | Run by volunteers that used to be related to adsbexchange | adsb:`feed.airplanes.live` port `30004`<br/>mlat: `feed.airplanes.live` port `31090`|
| ADSB.fi | N | Run by volunteers that used to be related to adsbexchange | adsb:`feed.adsb.fi` port `30004`<br/>mlat: `feed.adsb.fi` port `31090`|
| ADSB.lol | N | Run by a private individual located in the Netherlands | adsb:`in.adsb.lol` port `30004`<br/>mlat: `in.adsb.one` port `31090`|
| Planespotters | N | planespotters.net | adsb:`feed.planespotters.net` port `30004`<br/>mlat: `mlat.planespotters.net` port `31090`|
| The Air Traffic | N | Run by a private individual | adsb:`feed.theairtraffic.com` port `30004`<br/>mlat: `mlat.theairtraffic.com` port `31090`|
| AV Delphi | C | Swiss aircraft data company | adsb:`data.avdelphi.com` port `24999`<br/>mlat: no MLAT|
| ADSB Exchange | C | Large aggregator owned by JetNet | adsb:`feed1.adsbexchange.com` port `30004`<br/>mlat: `feed.adsbexchange.com` port `31090`|
| --------------- | ------------------------------ | --------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
| Airplanes.live | N | Run by volunteers that used to be related to adsbexchange | adsb:`feed.airplanes.live` port `30004`<br/>mlat: `feed.airplanes.live` port `31090` |
| ADSB.fi | N | Run by volunteers that used to be related to adsbexchange | adsb:`feed.adsb.fi` port `30004`<br/>mlat: `feed.adsb.fi` port `31090` |
| ADSB.lol | N | Run by a private individual located in the Netherlands | adsb:`in.adsb.lol` port `30004`<br/>mlat: `in.adsb.one` port `31090` |
| Planespotters | N | planespotters.net | adsb:`feed.planespotters.net` port `30004`<br/>mlat: `mlat.planespotters.net` port `31090` |
| The Air Traffic | N | Run by a private individual | adsb:`feed.theairtraffic.com` port `30004`<br/>mlat: `mlat.theairtraffic.com` port `31090` |
| AV Delphi | C | Swiss aircraft data company | adsb:`data.avdelphi.com` port `24999`<br/>mlat: no MLAT |
| ADSB Exchange | C | Large aggregator owned by JetNet | adsb:`feed1.adsbexchange.com` port `30004`<br/>mlat: `feed.adsbexchange.com` port `31090` |
| RadarPlane | N | Run by a few volunteers in Canada and Portugal | adsb: `feed.radarplane.com` port `30001`<br/>mlat: `feed.radarplane.com` port `31090` |
| Fly Italy ADSB | N | Run by a few ADSB enthusiasts in Italy | adsb: `dati.flyitalyadsb.com` port `4905`<br/>mlat: `dati.flyitalyadsb.com` port `30100` |
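As an illustration of how one of these entries translates into configuration, feeding adsb.fi from the table above could look like the sketch below; the `beast_reduce_plus_out` keyword selects the `beast_reduce_plus` output format mentioned above, and you would repeat the pattern for each aggregator you want to feed:

```yaml
environment:
  - ULTRAFEEDER_CONFIG=adsb,feed.adsb.fi,30004,beast_reduce_plus_out;mlat,feed.adsb.fi,31090
```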
@ -351,15 +351,15 @@ NOTE: If you have a UAT dongle and use `dump978` to decode this, you should use
There are many optional parameters relating to the ingestion of data and the general networking functioning of the `readsb` program that implements this functionality.
| Variable | Description | Controls which `readsb` option | Default |
|----------|-------------|--------------------------------|---------|
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------- | ------------- |
| `READSB_NET_API_PORT` | <https://github.com/wiedehopf/readsb/blob/dev/README-json.md#--net-api-port-query-formats> | `--net-api-port=<ports>` | `30152` |
| `READSB_NET_BEAST_REDUCE_INTERVAL` | BeastReduce position update interval, longer means less data (valid range: `0.000` - `14.999`) | `--net-beast-reduce-interval=<seconds>` | `1.0` |
| `READSB_NET_BEAST_REDUCE_FILTER_DIST` | Restrict beast-reduce output to aircraft in a radius of X nmi | `--net-beast-reduce-filter-dist=<nmi>` | Unset |
| `READSB_NET_BEAST_REDUCE_FILTER_ALT` | Restrict beast-reduce output to aircraft below X ft | `--net-beast-reduce-filter-alt=<ft>` | Unset |
| `READSB_NET_BEAST_REDUCE_OUT_PORT` | TCP BeastReduce output listen ports (comma separated) | `--net-beast-reduce-out-port=<ports>` | Unset |
| `READSB_NET_BEAST_INPUT_PORT`| TCP Beast input listen ports | `--net-bi-port=<ports>` | `30004,30104` |
| `READSB_NET_BEAST_INPUT_PORT` | TCP Beast input listen ports | `--net-bi-port=<ports>` | `30004,30104` |
| `READSB_NET_BEAST_OUTPUT_PORT` | TCP Beast output listen ports | `--net-bo-port=<ports>` | `30005` |
| `READSB_NET_BUFFER` | TCP buffer size 64Kb * (2^n) | `--net-buffer=<n>` | `2` (256Kb) |
| `READSB_NET_BUFFER` | TCP buffer size 64Kb \* (2^n) | `--net-buffer=<n>` | `2` (256Kb) |
| `READSB_NET_RAW_OUTPUT_INTERVAL` | TCP output flush interval in seconds (maximum interval between two network writes of accumulated data). | `--net-ro-interval=<rate>` | `0.05` |
| `READSB_NET_RAW_OUTPUT_SIZE` | TCP output flush size (maximum amount of internally buffered data before writing to network). | `--net-ro-size=<size>` | `1200` |
| `READSB_NET_CONNECTOR_DELAY` | Outbound re-connection delay. | `--net-connector-delay=<seconds>` | `30` |
@ -374,7 +374,7 @@ There are many optional parameters relating to the ingestion of data and the gen
| `READSB_WRITE_STATE_ONLY_ON_EXIT` | if set to anything, it will only write the status range outlines, etc. upon termination of `readsb` | `--write-state-only-on-exit` | Unset |
| `READSB_JSON_INTERVAL` | Update interval for the webinterface in seconds / interval between aircraft.json writes | `--write-json-every=<sec>` | `1.0` |
| `READSB_JSON_TRACE_INTERVAL` | Per plane interval for json position output and trace interval for globe history | `--json-trace-interval=<sec>` | `15` |
| `READSB_FORWARD_MLAT_SBS` | If set to anthing, it will include MLAT results in the SBS/BaseStation output. This may be desirable if you feed SBS data to applications like [VRS](https://github.com/sdr-enthusiasts/docker-virtualradarserver) or [PlaneFence](https://github.com/kx1t/docker-planefence) | Unset |
| `READSB_FORWARD_MLAT_SBS` | If set to anything, it will include MLAT results in the SBS/BaseStation output. This may be desirable if you feed SBS data to applications like [VRS](https://github.com/sdr-enthusiasts/docker-virtualradarserver) or [PlaneFence](https://github.com/kx1t/docker-planefence) | Unset |
| `UUID` | Sets the UUID that is sent on the `beast_reduce_plus` port if no individual UUIDs have been defined with the `READSB_NET_CONNECTOR` parameter. Similarly, it's also used with `mlat-client` (see below) if no individual UUIDs have been set with the `MLAT_CONFIG` parameter. | | unset |
#### MLAT configuration
@ -396,7 +396,7 @@ It will create a separate instance of `mlat-client` for each defined MLAT server
where:
| Parameter | Mandatory/Optional | Description |
|-------------------|--------------------|-------------|
| ----------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mlat` | Mandatory | indicates that the line contains MLAT-client configuration parameters |
| `mlat-server.com` | Mandatory | the domain name or ip address of the target MLAT server |
| `port` | Mandatory | the port (TCP or UDP) of the target MLAT server |
@ -425,8 +425,8 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
Generally, there is little to configure, but there are a few parameters that you can set or change:
| Variable | Description | Default if omitted|
|----------|-------------|--------------------------------|
| Variable | Description | Default if omitted |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| `MLATHUB_SBS_OUT_PORT` | TCP port where the consolidated MLAT results will be available in SBS (BaseStation) format | `31003` |
| `MLATHUB_BEAST_IN_PORT` | TCP port where you can send additional MLAT results to, in Beast format | `31004` |
| `MLATHUB_BEAST_OUT_PORT` | TCP port where consolidated MLAT results will be available in Beast format | `31005` |
@ -447,7 +447,7 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
#### `tar1090` Core Configuration
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- |
| `READSB_JSON_INTERVAL` | Update interval for the web interface data, in seconds | `1.0` |
| `UPDATE_TAR1090` | At startup update tar1090 and tar1090db to the latest versions | `true` |
| `INTERVAL` | Interval at which the track history is saved | `8` |
@ -469,7 +469,7 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
#### `tar1090` `config.js` Configuration - Title
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ---------------------------- | ---------------------------------------------------- | --------- |
| `TAR1090_PAGETITLE` | Set the tar1090 web page title | `tar1090` |
| `TAR1090_PLANECOUNTINTITLE` | Show number of aircraft in the page title | `false` |
| `TAR1090_MESSAGERATEINTITLE` | Show number of messages per second in the page title | `false` |
@ -477,13 +477,13 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
#### `tar1090` `config.js` Configuration - Output
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------- |
| `TAR1090_DISPLAYUNITS` | The DisplayUnits setting controls whether nautical (ft, NM, knots), metric (m, km, km/h) or imperial (ft, mi, mph) units are used in the plane table and in the detailed plane info. Valid values are "`nautical`", "`metric`", or "`imperial`". | `nautical` |
#### `tar1090` `config.js` Configuration - Map Settings
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- |
| `TAR1090_BINGMAPSAPIKEY` | Provide a Bing Maps API key to enable the Bing imagery layer. You can obtain a free key (with usage limits) at <https://www.bingmapsportal.com/> (you need a "basic key"). | `null` |
| `TAR1090_DEFAULTCENTERLAT` | Default center (latitude) of the map. This setting is overridden by any position information provided by dump1090/readsb. All positions are in decimal degrees. | `45.0` |
| `TAR1090_DEFAULTCENTERLON` | Default center (longitude) of the map. This setting is overridden by any position information provided by dump1090/readsb. All positions are in decimal degrees. | `9.0` |
@ -504,13 +504,13 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
| `TAR1090_MAPDIMPERCENTAGE` | The percentage amount of dimming used if the map is dimmed, `0`-`1` | `0.45` |
| `TAR1090_MAPCONTRASTPERCENTAGE` | The percentage amount of contrast used if the map is dimmed, `0`-`1` | `0` |
| `TAR1090_DWDLAYERS` | Various map layers provided by the DWD geoserver can be added here. [Preview and available layers](https://maps.dwd.de/geoserver/web/wicket/bookmarkable/org.geoserver.web.demo.MapPreviewPage?1&filter=false). Multiple layers are also possible. Syntax: `dwd:layer1,dwd:layer2,dwd:layer3` | `dwd:RX-Produkt` |
| `TAR1090_LABELZOOM` | Displays aircraft labels only until this zoom level, `1`-`15` (values >`15` don't really make sense)| |
| `TAR1090_LABELZOOM` | Displays aircraft labels only until this zoom level, `1`-`15` (values >`15` don't really make sense) | |
| `TAR1090_LABELZOOMGROUND` | Displays ground traffic labels only until this zoom level, `1`-`15` (values >`15` don't really make sense) | |
#### `tar1090` `config.js` Configuration - Range Rings
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- |
| `TAR1090_RANGERINGS` | `false` to hide range rings | `true` |
| `TAR1090_RANGERINGSDISTANCES` | Distances to display range rings, in miles, nautical miles, or km (depending on the value of '`TAR1090_DISPLAYUNITS`'). Accepts a comma separated list of numbers (no spaces, no quotes). | `100,150,200,250` |
| `TAR1090_RANGERINGSCOLORS` | Colours for each of the range rings specified in `TAR1090_RANGERINGSDISTANCES`. Accepts a comma separated list of hex colour values, each enclosed in single quotes (eg `TAR1090_RANGERINGSCOLORS='#FFFFF','#00000'`). No spaces. | Unset |
@ -518,7 +518,7 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
#### `tar1090` `config.js` Configuration - Route Display
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| --------------------- | -------------------------------------------------- | ------------------------------------- |
| `TAR1090_USEROUTEAPI` | Set to `true` to enable route lookup for callsigns | Unset |
| `TAR1090_ROUTEAPIURL` | API URL used | `https://api.adsb.lol/api/0/routeset` |
@ -527,7 +527,7 @@ Note - due to design limitations of `readsb`, the `tar1090` graphical interface
#### `graphs1090` Environment Parameters
| Variable | Description | Default |
|----------|-------------|---------|
| -------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | --------- |
| `GRAPHS1090_DARKMODE` | If set to `true`, `graphs1090` will be rendered in "dark mode". | Unset |
| `GRAPHS1090_RRD_STEP` | Interval in seconds to feed data into RRD files. | `60` |
| `GRAPHS1090_SIZE` | Set graph size, possible values: `small`, `default`, `large`, `huge`, `custom`. | `default` |
@ -558,7 +558,7 @@ ADS-B over UAT data is transmitted in the 978 MHz band, and this is used in the
1. Set the following environment parameters:
```yaml
- URL_978=http://dump978/skyaware978
- URL_978=http://dump978/skyaware978
```
2. Install the [`docker-dump978` container](https://github.com/sdr-enthusiasts/docker-dump978). Note - only containers downloaded/deployed on/after Feb 8, 2023 will work.
@ -572,7 +572,7 @@ Users of AirSpy devices can enable extra `graphs1090` graphs by configuring the
- Set the following environment parameter:
```yaml
- ENABLE_AIRSPY=yes
- ENABLE_AIRSPY=yes
```
- To provide the container access to the AirSpy statistics, map a volume in your `docker-compose.yml` file as follows:
@ -634,7 +634,7 @@ Note - on some systems (DietPi comes to mind), `/sys/class/thermal/` may not be
#### Reducing Disk IO for Graphs1090
Note - *this feature is still somewhat experimental. If you are really attached to your statistics/graphs1090 data, please make sure to back up your mapped drives regularly*
Note - _this feature is still somewhat experimental. If you are really attached to your statistics/graphs1090 data, please make sure to back up your mapped drives regularly_
If you are using a Raspberry Pi or another type of computer with an SD card, you may already be aware that these SD cards have a limited number of write-cycles that will determine their lifespan. In other words - a common reason for SD card failure is excessive writes to it.
@ -647,17 +647,16 @@ Note -- there is a chance that the data isn't written back in time (due to power
The feature assumes that you have mapped `/var/lib/collectd` to a volume (to ensure data is persistent across container recreations), and `/run` as a `tmpfs` RAM disk, as shown below and also as per the [`docker-compose.yml` example](docker-compose.yml):
```yaml
volumes:
volumes:
- /opt/adsb/ultrafeeder/globe_history:/var/globe_history
...
tmpfs:
---
tmpfs:
- /run:exec,size=256M
...
```
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| `GRAPHS1090_REDUCE_IO=` | Optional Set to `true` to reduce the write cycles for `graphs1090`| Unset |
| --------------------------------- | ------------------------------------------------------------------------------------------- | ------- |
| `GRAPHS1090_REDUCE_IO=` | Optional Set to `true` to reduce the write cycles for `graphs1090` | Unset |
| `GRAPHS1090_REDUCE_IO_FLUSH_IVAL` | Interval (in secs) over which the `graphs1090` data is written back to non-volatile storage | `3600` |
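Putting it together, enabling the feature in the `environment:` section could look like this sketch; the flush interval shown is simply the default made explicit:

```yaml
environment:
  - GRAPHS1090_REDUCE_IO=true
  # optional: seconds between write-backs of the RAM-disk data to persistent storage
  - GRAPHS1090_REDUCE_IO_FLUSH_IVAL=3600
```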
### `timelapse1090` Configuration
@ -665,7 +664,7 @@ The feature assumes that you have mapped `/var/lib/collectd` to a volume (to ens
Legacy: **We recommend AGAINST enabling this feature** as it has been replaced with <http://dockerhost:port/?replay>. `timelapse1090` writes a lot of data to disk, which could shorten the lifespan of your Raspberry Pi SD card. The replacement functionality is better and doesn't cause any additional disk writes.
| Environment Variable | Purpose | Default |
|----------------------|---------|---------|
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `ENABLE_TIMELAPSE1090` | Optional / Legacy. Set to `true` to enable timelapse1090. Once enabled, can be accessed via <http://dockerhost:port/timelapse/> | Unset |
| `TIMELAPSE1090_INTERVAL` | Snapshot interval in seconds | `10` |
| `TIMELAPSE1090_HISTORY` | Time saved in hours | `24` |
@ -687,9 +686,9 @@ You should now be able to browse to:
No paths need to be mapped through to persistent storage. However, if you don't want to lose your range outline and aircraft tracks/history and heatmap / replay data on container restart, you can optionally map these paths:
| Path | Purpose |
|------|---------|
| `/opt/adsb/ultrafeeder/globe_history:/var/globe_history` | Holds range outline data, heatmap / replay data and traces if enabled.
*Note: this data won't be automatically deleted, you will need to delete it eventually if you map this path.* |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `/opt/adsb/ultrafeeder/globe_history:/var/globe_history` | Holds range outline data, heatmap / replay data and traces if enabled. |
| _Note: this data won't be automatically deleted, you will need to delete it eventually if you map this path._ |
| `/opt/adsb/ultrafeeder/timelapse1090:/var/timelapse1090` | Holds timelapse1090 data if enabled. (We recommend against enabling this feature, see above) |
| `/opt/adsb/ultrafeeder/collectd:/var/lib/collectd` | Holds graphs1090 & performance data |
| `/proc/diskstats:/proc/diskstats:ro` | Makes disk statistics available to `graphs1090` |
@ -719,7 +718,7 @@ Please see the [separate instruction document](README-grafana.md) for step by st
In order for Telegraf to serve a [Prometheus](https://prometheus.io) endpoint, the following environment variables can be used:
| Variable | Description |
| ---- | ---- |
| ------------------- | ------------------------------------------------------------------------ |
| `PROMETHEUS_ENABLE` | Set to `true` for a Prometheus endpoint on `http://0.0.0.0:9273/metrics` |
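In `docker-compose.yml` terms this amounts to the following, using the same port mapping shown earlier in this README:

```yaml
environment:
  - PROMETHEUS_ENABLE=true
ports:
  - 9273-9274:9273-9274 # expose the statistics endpoints to Prometheus
```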
### Output from Ultrafeeder to InfluxDBv2
@ -727,7 +726,7 @@ In order for Telegraf to serve a [Prometheus](https://prometheus.io) endpoint, t
In order for Telegraf to output metrics to an [InfluxDBv2](https://docs.influxdata.com/influxdb/) time-series database, the following environment variables can be used:
| Variable | Description |
| ---- | ---- |
| ------------------- | ----------------------------------- |
| `INFLUXDBV2_URL` | The URL of the InfluxDB instance |
| `INFLUXDBV2_TOKEN` | The token for authentication |
| `INFLUXDBV2_BUCKET` | Destination bucket to write into |
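As an illustration, the corresponding `environment:` entries could look like the sketch below; all values are placeholders, and any further `INFLUXDBV2_*` variables your setup needs would be added the same way:

```yaml
environment:
  - INFLUXDBV2_URL=http://influxdb:8086 # placeholder: URL of your InfluxDB instance
  - INFLUXDBV2_TOKEN=my-influxdb-token # placeholder: authentication token
  - INFLUXDBV2_BUCKET=ultrafeeder # placeholder: destination bucket
```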
@ -750,7 +749,7 @@ docker exec -it ultrafeeder /usr/local/bin/viewadsb --cpr-focus 3D3ED0
## Minimalist setup
If you want to use `ultrafeeder` *only* as a SDR decoder but without any mapping or stats/graph websites, without MLAT connections or MLAT-hub, etc., for example to minimize CPU and RAM needs on a low CPU/memory single board computer, then do the following:
If you want to use `ultrafeeder` _only_ as a SDR decoder but without any mapping or stats/graph websites, without MLAT connections or MLAT-hub, etc., for example to minimize CPU and RAM needs on a low CPU/memory single board computer, then do the following:
- in the `ULTRAFEEDER_CONFIG` parameter, remove any entry that starts with `mlat` or `mlathub`. This will prevent any `mlat-client` or `mlathub` instances from being launched. If you still want to connect the `mlat-client`(s) to external MLAT servers but don't want the overhead of a MLATHUB, you can leave the entries starting with `mlat` in the `ULTRAFEEDER_CONFIG` parameter and set `MLATHUB_DISABLE=true`
- Set the parameter `TAR1090_DISABLE=true`. This will prevent the `nginx` webserver and any websites from being launched, and no `collectd` (graphs1090) or `rrd` (ADSB message history) data will be collected or retained. A minimal `environment:` section along these lines is sketched below.
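A sketch of such a minimalist `environment:` section is shown below; the single `adsb` aggregator entry is only an illustration, so keep whichever entries you actually need:

```yaml
environment:
  # only plain adsb entries -- no mlat or mlathub entries, so no mlat-client or MLATHUB is started
  - ULTRAFEEDER_CONFIG=adsb,feed.adsb.fi,30004,beast_reduce_plus_out
  # disable the nginx webserver, tar1090 map, and graphs1090/rrd data collection
  - TAR1090_DISABLE=true
  # only needed if you keep mlat entries but don't want the MLATHUB overhead
  - MLATHUB_DISABLE=true
```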
@ -769,7 +768,7 @@ We also have a [Discord channel](https://discord.gg/sTf9uYF), feel free to [join
## Acknowledgements
- The [SDR-Enthusiasts team](https://github.com/sdr-enthusiasts) ([Mike Nye](https://github.com/mikenye), [Fred Clausen](https://github.com/fredclausen)) for all the foot and leg work done to create the base images on which the container is built
- [Wiedehopf](https://github.com/wiedehopf) for modifying, creating, maintaining, and adding features to many of the components of this container including [readsb](https://github.com/wiedehopf/readsb), [tar1090](https://github.com/wiedehopf/tar1090), [graphs1090](https://github.com/wiedehopf/graphs1090), [autogain](https://github.com/wiedehopf/adsb-scripts/wiki/Automatic-gain-optimization-for-readsb-and-dump1090-fa), and many more components, and for helping debug the container whenever the need arised
- [Wiedehopf](https://github.com/wiedehopf) for modifying, creating, maintaining, and adding features to many of the components of this container including [readsb](https://github.com/wiedehopf/readsb), [tar1090](https://github.com/wiedehopf/tar1090), [graphs1090](https://github.com/wiedehopf/graphs1090), [autogain](https://github.com/wiedehopf/adsb-scripts/wiki/Automatic-gain-optimization-for-readsb-and-dump1090-fa), and many more components, and for helping debug the container whenever the need arose
- [John Norrbin](https://github.com/Johnex) for his ideas, testing, feature requests, more testing, nagging, pushing, prodding, and overall efforts to make this a high quality container and for the USB "hotplug" configuration
- The community at the [SDR-Enthusiasts Discord Server](https://discord.gg/sTf9uYF) for helping out, testing, asking questions, and generally driving to make this a better product
- Of course the Open Source community at large, including [Salvatore Sanfilippo](https://github.com/antirez) and [Oliver Jowett](https://github.com/mutability) who wrote the excellent base code for `dump1090` from which much of this package is derived

View file

@ -7,10 +7,10 @@
[[ "$ARCHS" == "" ]] && ARCHS="linux/armhf,linux/arm64,linux/amd64"
BASETARGET1=ghcr.io/sdr-enthusiasts
BASETARGET2=kx1t
#BASETARGET2=kx1t
IMAGE1="$BASETARGET1/$(pwd | sed -n 's|.*/\(docker-.*\)|\1|p'):$TAG"
IMAGE2="$BASETARGET2/$(pwd | sed -n 's|.*/docker-\(.*\)|\1|p'):$TAG"
#IMAGE2="$BASETARGET2/$(pwd | sed -n 's|.*/docker-\(.*\)|\1|p'):$TAG"
echo "press enter to start building $IMAGE1 from $BRANCH"

View file

@ -9,7 +9,7 @@ services:
hostname: ultrafeeder
restart: unless-stopped
device_cgroup_rules:
- 'c 189:* rwm'
- "c 189:* rwm"
ports:
- 8080:80 # to expose the web interface
- 9273-9274:9273-9274 # to expose the statistics interface to Prometheus

View file

@ -328,4 +328,3 @@ elif [[ "${LOGLEVEL,,}" == "error" ]]; then
elif [[ "${LOGLEVEL,,}" == "none" ]]; then
exec "${s6wrap[@]}" --quiet --ignore-stdout --ignore-stderr --args "${READSB_BIN}" "${READSB_CMD[@]}" $READSB_EXTRA_ARGS
fi

View file

@ -1,3 +1,5 @@
#!/usr/bin/with-contenv bash
# shellcheck shell=bash disable=SC2015
#
# This script should be sourced by the /etc/services.d/xxx/run modules for
@ -30,7 +32,7 @@
# ULTRAFEEDER_CONFIG=mlathub,host,port,protocol[,uuid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX][,extra-arguments]
#
# The ULTRAFEEDER_CONFIG parameter can have multiple config strings, separated by a `;`
# Please note that the config strings cannot containe `;` or `,` -- undefined things may happen if these characters are present.
# Please note that the config strings cannot contain `;` or `,` -- undefined things may happen if these characters are present.
#
# In the above configuration strings:
# `host` is an IP address. Specify an IP/hostname/containername for incoming or outgoing connections.