"Add Golioth to Zephyr Project" console updates

Description

I have completed the Zephyr Training “Add Golioth to Zephyr Project” section, but I am not seeing any updates in the console. I can tell the program is running on the board because it prints statements such as “This is the main loop: 559”. Is there any way to debug further in the console to see whether the data is getting misformatted somehow? For example, could it show me the raw data without it going through the pipeline?
I should note that the only fields in the console that have updated are Session Established and Last Report, which change when I turn on the board.

Steps to Reproduce

Running the Zephyr Training “Add Golioth to Zephyr Project” section.

Expected Behavior

Updates should occur within the Golioth console in addition to the “This is the main loop: 559” statements coming locally from the board.

Actual Behavior

Only the “This is the main loop: 559” statements coming locally from the board. There are no indications in the Golioth console that data is being received, other than the timestamp of when the connection was established.

Impact

Golioth data flow not occurring.

Environment

Using the nRF7002-DK board. It appears I’m using version 4.1 of the Zephyr Project.

Logs and Console Output

As mentioned, I get messages like this from the board, but nothing from the Golioth console:
“This is the main loop: 559”

Attempts to Resolve

I’m not sure where to go from here. I’m still new to Zephyr and Golioth.

Hey @john.blessing,

Great to hear you’ve been using the Zephyr training page!

Would you be open to setting up the training repository locally, as described in the Local Setup section?

After setting up the repository, I recommend building the 01_IOT application, which would also serve as a clean, reproducible baseline that makes it much easier to isolate and debug any issues you may be encountering.

Let me know if you need help getting started!

After working through some problems along the way, I was able to get the training repository set up locally and to build and flash 01_IOT. A serial terminal shows output such as

[00:05:19.573,913] golioth_iot: Hello Golioth! 32
[00:05:19.573,974] golioth_iot: Streaming Temperature to Golioth: 27.700000
[00:05:19.622,772] golioth_iot: temperature_push_handler: Temperature successfully pushed

The Golioth console shows updates in the LightDB Stream as expected.

All of this matches what I did with Codespaces, except that it is local.

What should I do next to get the Zephyr Training “Add Golioth to Zephyr Project” section working properly?

Thanks,
John Blessing

I’ve gone ahead and started working through the steps on the https://training.golioth.io/zephyr-training/golioth/west_manifest page. The How to Add Golioth to an Existing West Manifest section says some code should be added to the west.yml file, and that code is for the golioth repository. However, west.yml already contains a section for the golioth repo. Should this step be skipped? The text to be added is not identical to what is already there, but it certainly references the same repo URL.

Hey @john.blessing,

Great! It’s always helpful to have a working example to compare against a non-working one.

If you’re working within your local zephyr-training repository, there’s no need to modify the manifest file; it already pulls in the correct NCS and Golioth Firmware SDK versions.

To add Golioth to the 05_golioth application and start working with it, just follow the Use Golioth LightDB Stream to Send Data section of the training guide. If you encounter any issues, try replacing the printk statements with LOG_INF ones, as shown in the Additional Exercises section and in the 01_IOT application, to confirm that you do have a connection to Golioth Cloud.
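As a minimal sketch, that change looks roughly like the snippet below (the module name and messages are placeholders, and the actual training code may differ):

    #include <zephyr/kernel.h>
    #include <zephyr/logging/log.h>

    /* Register a log module; "golioth_training" is a placeholder name. */
    LOG_MODULE_REGISTER(golioth_training, LOG_LEVEL_DBG);

    int main(void)
    {
        int counter = 0;

        while (1) {
            /* Before: printk("This is the main loop: %d\n", counter); */
            LOG_INF("This is the main loop: %d", counter);
            counter++;
            k_sleep(K_SECONDS(5));
        }

        return 0;
    }

With CONFIG_LOG_BACKEND_GOLIOTH=y enabled, those LOG_INF messages should also appear in the Golioth console, which is a quick way to confirm the device is actually connected.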

Also, the next live Zephyr training is coming up on July 25th. I’d recommend signing up as it’s a great opportunity to dive deeper into Zephyr and Golioth.

With the LOG_INF changes and CONFIG_LOG_BACKEND_GOLIOTH=y set, I was able to get the log statements to show up in the console, but I still do not see any LightDB Stream data coming through.

Here’s the code in the while loop that ought to be filling in that stream.

    char sbuf[32];
    snprintk(sbuf, strlen(sbuf), "{\"upcount\":%d}", counter);

    golioth_stream_set_async(client,
                             "sensor",
                             GOLIOTH_CONTENT_TYPE_JSON,
                             sbuf,
                             strlen(sbuf),
                             NULL,
                             NULL);

Do you have any thoughts about why that stream is not updating?

Thanks,
John

The 01_IOT application streams data using the CBOR encoding format. However, in your code snippet, you’re using JSON encoding instead. Since you’re able to see the CBOR-encoded payloads appearing in the console but not the JSON ones, it’s likely that a JSON-specific Pipeline hasn’t been set up.

To resolve this, please create a new Legacy LightDB Stream JSON Pipeline, as described in the documentation. This will ensure that your JSON-encoded data is correctly routed and visible in LightDB Stream.
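For reference, the CBOR path that 01_IOT takes looks roughly like the sketch below. This is a sketch based on the zcbor encode API and the Golioth stream API, not a copy of the training code; the function name and the "temp" key are illustrative:

    #include <zcbor_encode.h>
    #include <golioth/client.h>
    #include <golioth/stream.h>

    /* Sketch: encode {"temp": <value>} as CBOR and stream it to the
     * "sensor" path. Assumes `client` is an already-connected
     * struct golioth_client pointer.
     */
    static void stream_temperature_cbor(struct golioth_client *client, double temp)
    {
        uint8_t buf[32];

        /* zcbor encoding state backed by buf */
        ZCBOR_STATE_E(zse, 1, buf, sizeof(buf), 1);

        zcbor_map_start_encode(zse, 1);
        zcbor_tstr_put_lit(zse, "temp");
        zcbor_float64_put(zse, temp);
        zcbor_map_end_encode(zse, 1);

        size_t payload_size = (size_t)(zse->payload - buf);

        golioth_stream_set_async(client,
                                 "sensor",
                                 GOLIOTH_CONTENT_TYPE_CBOR,
                                 buf,
                                 payload_size,
                                 NULL,
                                 NULL);
    }

CBOR payloads are handled by the example Pipeline that new projects get by default, which would explain why the 01_IOT data showed up without any extra configuration while the JSON payloads need a Pipeline of their own.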

Indeed, a JSON-specific Pipeline had not been set up. I have now created one, based on that documentation page, with the code below:

filter:
  path: "*"
  content_type: application/json
steps:
  - name: step-0
    destination:
      type: batch
      version: v1
  - name: step-1
    transformer:
      type: extract-timestamp
      version: v1
  - name: step-2
    transformer:
      type: inject-path
      version: v1
    destination:
      type: lightdb-stream
      version: v1

However, new data is not appearing on the LightDB stream page of the console.

Any ideas?

Thanks,
John

No, actually I had missed that I had already added this pipeline based on the instructions, and it was seemingly failing to do anything:

filter:
  path: "*"
  content_type: application/json
steps:
  - name: step0
    destination:
      type: lightdb-stream
      version: v1

Hey @john.blessing,

This one actually caught me by surprise as well—there’s a subtle issue in this line of code:

snprintk(sbuf, strlen(sbuf), "{\"upcount\":%d}", counter);

The problem is that you’re using strlen(sbuf) as the size argument, but sbuf is just declared and hasn’t been initialized yet. That means it contains garbage data, and calling strlen() on it results in undefined behavior—there’s no guarantee there’s a null terminator anywhere in the buffer.

The correct version should be:

snprintk(sbuf, sizeof(sbuf), "{\"upcount\":%d}", counter);

This ensures the function knows the actual size of the buffer and avoids reading from uninitialized memory.

This issue originated in the training site code and will be fixed shortly. It’s also a good reminder that if your JSON buffer is too small, the Pipeline may silently fail because the payload is no longer valid JSON. That’s why it’s helpful to LOG the buffer contents during development to confirm what’s actually being sent.
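A rough sketch of that kind of check, dropped in right before the golioth_stream_set_async() call (variable names match your snippet; the exact log messages are up to you):

    int len = snprintk(sbuf, sizeof(sbuf), "{\"upcount\":%d}", counter);

    /* snprintk returns the length that would have been written, so a
     * value >= sizeof(sbuf) means the JSON payload was truncated and
     * is probably no longer valid JSON.
     */
    if (len < 0 || (size_t)len >= sizeof(sbuf)) {
        LOG_WRN("JSON payload truncated or invalid (needed %d bytes)", len);
    } else {
        LOG_INF("Streaming payload: %s", sbuf);
    }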

Both of your Pipelines look good, and if you keep them both enabled, you’ll see two separate entries in LightDB Stream, since they’re configured differently and running independently.

Yes, that correction makes sense to me. It worked! I appreciate the help.

Thanks,
John
