How do I aggregate a set of fields, not just a single field, from LightDB Stream?

Description

I’m looking to aggregate multiple sensor values from the same sensor with a single LightDB Stream query.

As an example, see the sfc.wave object below (these are pushed to LightDB Stream roughly every 10 minutes):

{ 
  "sfc": {
    "wave": {
      "Hmax": 2.111,
      "Hs": 1.410,
      "Pdom": 5.5,
      "dir": 220,
      "meanDir": 205
    }
  }
}

Any tips? Can we use the downsampling functionality that is common in time-series databases somehow?

Expected Behavior

I would like to get this data aggregated per property, or simply downsampled, for a faster overview of larger time scales. At the moment I request the entire sfc.wave path and, on success, map each value to a data series of the graph, so a single query gets me all the data I'm interested in. However, this approach breaks down when the time scale grows to weeks or months: the data sets become large and take a while to both retrieve and render.
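For reference, the mapping step my backend does today looks roughly like this (a minimal sketch; `rows` stands in for the decoded list of stream entries, shaped like the response objects shown further down):

```python
# Sketch of the current client-side mapping: one query returns full
# sfc.wave objects, and each property becomes its own data series
# of (timestamp, value) points.
def rows_to_series(rows):
    """rows: list of {"timestamp": ..., "sfc.wave": {prop: value, ...}}"""
    series = {}
    for row in rows:
        wave = row.get("sfc.wave") or {}
        for prop, value in wave.items():
            series.setdefault(prop, []).append((row["timestamp"], value))
    return series

rows = [
    {"timestamp": "2026-04-21T11:00:00+00:00",
     "sfc.wave": {"Hmax": 2.111, "Hs": 1.410}},
    {"timestamp": "2026-04-21T11:10:00+00:00",
     "sfc.wave": {"Hmax": 2.3, "Hs": 1.5}},
]
series = rows_to_series(rows)
```

This is why the raw-data approach is convenient: one query, one pass, every property gets its series. The problem is purely the volume of `rows` at week/month scales.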

Actual Behavior

As far as I can tell from the docs, the only way to aggregate is on a single value, specifying the type as float for that value. If I want to do this for the wave sensor above, that means 5 requests, and we have sensors with up to 25-30 different values, which makes this seem like a suboptimal approach.

I naively thought the timeBucket setting without an aggregation field would get me this, but instead I get all 6 data points for that hour, with the timestamp set to 11:00:00 for all of them.

Adding a separate path for Hs with aggregation simply adds that value to the dataset for each of the packets (i.e. 6 packets for every hour in this case):

{ 
  "sfc.wave": {
    "Hmax": 2.111,
    "Hs": 1.410,
    "Pdom": 5.5,
    "dir": 220,
    "meanDir": 205
  },
  "sfc.wave.Hs": 1.410,
  "timestamp": "2026-04-21T11:00:00+00:00"
}

Environment

LightDB Stream
HTTP API

Logs and Console Output

I just now realized that this may be somewhat fixed if I add each field to the query with a separate aggregation, instead of querying the full object. That means upgrading my backend a bit, but aggregation then seems to work correctly.

So, going from the first query to the second:

{
  "fields": [
    {"path": "timestamp", "type": ""},
    {"path": "sfc.wave", "type": ""}
  ],
  "filters": [{"path": "sfc.wave", "op": "<>", "value": null}]
}

{
  "fields": [
    {"path": "timestamp", "type": ""},
    {"path": "sfc.wave.Hs", "agg": "avg", "type": "float"},
    {"path": "sfc.wave.Hmax", "agg": "avg", "type": "float"},
    {"path": "sfc.wave.Pdom", "agg": "avg", "type": "float"},
    {"path": "sfc.wave.dir", "agg": "avg", "type": "float"},
    {"path": "sfc.wave.meanDir", "agg": "avg", "type": "float"}
  ],
  "timeBucket": "1h",
  "filters": [{"path": "sfc.wave", "op": "<>", "value": null}]
}
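With the per-field aggregation query, the response rows come back flat instead of nested, so the mapping on my side changes to something like this (a sketch; I'm assuming each row carries the full dotted path, e.g. "sfc.wave.Hs", as its key, matching the response shape shown earlier):

```python
# Sketch: turn flat, aggregated response rows into per-property series.
# Assumes dotted-path keys like "sfc.wave.Hs" alongside "timestamp".
PREFIX = "sfc.wave."

def agg_rows_to_series(rows):
    series = {}
    for row in rows:
        ts = row["timestamp"]
        for key, value in row.items():
            if key.startswith(PREFIX):
                # Strip the common prefix so the series keys match the
                # property names used in the graph ("Hs", "Hmax", ...).
                series.setdefault(key[len(PREFIX):], []).append((ts, value))
    return series

rows = [
    {"timestamp": "2026-04-21T11:00:00+00:00",
     "sfc.wave.Hs": 1.45, "sfc.wave.Hmax": 2.2},
    {"timestamp": "2026-04-21T12:00:00+00:00",
     "sfc.wave.Hs": 1.38, "sfc.wave.Hmax": 2.0},
]
series = agg_rows_to_series(rows)
```

The nice part is that this keeps the "one query, one pass" property of the old approach, just with one bucketed row per hour instead of one row per packet.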

I would still love to be able to get, for each time bucket, all the values from the packet containing e.g. the largest Hs of that bucket - but that's not a requirement for now :slight_smile: