freezing-apple-3911
10/25/2025, 5:40 AM
I installed rerun-sdk via conda and via uv (a Python package manager), and the latter failed. Specifically:
1. Conda (success)
```bash
➜ which python
/opt/homebrew/Caskroom/miniforge/base/bin/python
➜ which pip
/opt/homebrew/Caskroom/miniforge/base/bin/pip
➜ pip install rerun-sdk
...
➜ python -c "import rerun.blueprint"
# OK
```
2. UV (failed)
```bash
➜ uva
[INFO] Activate Python venv: .venv (via .venv/bin/activate)
➜ which python
/private/tmp/rr-test/.venv/bin/python
➜ which pip
/private/tmp/rr-test/.venv/bin/pip
➜ uv add rerun-sdk
Resolved 8 packages in 361ms
Installed 6 packages in 25ms
 + attrs==25.4.0
 + numpy==2.2.6
 + pillow==12.0.0
 + pyarrow==22.0.0
 + rerun-sdk==0.26.1
 + typing-extensions==4.15.0
➜ uv add rerun
Resolved 9 packages in 285ms
Installed 1 package in 4ms
 + rerun==1.0.30
➜ python -c "import rerun"            # rerun can be imported
➜ python -c "import rerun.blueprint"  # rerun.blueprint fails
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'rerun.blueprint'
```
With both methods, the `rerun` CLI and the Python library install successfully, but with uv, `rerun.blueprint` fails to import.
Is there any limitation when rerun is installed via a Python package manager?
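An editor's note on a likely cause (an assumption, worth verifying): `rerun-sdk` is the distribution that provides the `rerun` Python module, while the separate `rerun` distribution on PyPI is an unrelated project, so `uv add rerun` can shadow the SDK's module and make submodules like `rerun.blueprint` disappear. A quick stdlib diagnostic sketch:

```python
import importlib.util

# Locate the file that "import rerun" would actually load. If its path does
# not point into rerun-sdk's installed files, the unrelated "rerun" package
# is shadowing the SDK, and submodules such as rerun.blueprint will be missing.
spec = importlib.util.find_spec("rerun")
print(spec.origin if spec is not None else "no module named rerun")
```

If the path looks wrong, `uv remove rerun` and keep only `rerun-sdk`, which already ships the `rerun` CLI.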
Thanks.

steep-beach-35312
10/20/2025, 9:48 PM
I tried this:
```python
rrb.Horizontal(
    rrb.TimeSeriesView(origin="/error1", visible=True),   # <- this is ok
    rrb.TimeSeriesView(origin="/error2", visible=False),
    rrb.TimeSeriesView(origin="/error3", visible=True),
    column_shares=[1, 1, 1],
    visible=False,  # <- this doesn't seem to be accepted
    name="Metrics",
)
```
but got this error:
```python
Horizontal.__init__() got an unexpected keyword argument 'visible'
```
I might have missed it in the docs — I noticed there’s an eye icon next to Horizontal and Vertical containers in the GUI, similar to individual content containers, that lets me toggle visibility, so I was wondering if there’s a programmatic equivalent.
Rerun version: 0.26.0
Python version: 3.13.7
Linux
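An editor's aside: as the error shows, `rrb.Horizontal` in 0.26 does not accept `visible`, while the child views do. A generic defensive pattern (sketched here with a hypothetical stand-in class, not the real `rrb.Horizontal`) is to drop keyword arguments a constructor does not declare instead of crashing:

```python
import inspect

class Horizontal:
    """Hypothetical stand-in for rrb.Horizontal, which takes no `visible` kwarg."""
    def __init__(self, *contents, column_shares=None, name=None):
        self.contents, self.column_shares, self.name = contents, column_shares, name

def construct_safely(cls, *args, **kwargs):
    # Keep only the keyword arguments that cls.__init__ actually declares.
    allowed = set(inspect.signature(cls.__init__).parameters)
    return cls(*args, **{k: v for k, v in kwargs.items() if k in allowed})

h = construct_safely(Horizontal, "view1", "view2",
                     column_shares=[1, 1], visible=False, name="Metrics")
print(h.name)  # → Metrics
```

This silently ignores `visible` rather than raising, so it is a stopgap until a programmatic container-visibility equivalent is confirmed.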
Thanks!

busy-bear-52582
10/16/2025, 6:50 AM

gentle-night-55372
10/15/2025, 4:31 PM

little-forest-57077
10/10/2025, 9:27 PM

big-window-52699
10/10/2025, 7:49 AM

important-holiday-85916
10/09/2025, 12:48 AM

breezy-secretary-38820
10/07/2025, 4:42 PM

rhythmic-oyster-56744
10/06/2025, 11:47 AM
When I load a recording with rr.dataframe.load_recording, recording.schema() doesn't list the columns related to VideoStream. Is it possible to extract a video from an .rrd file?
Thank you!

wide-memory-11949
10/03/2025, 8:41 AM

kind-airline-57079
09/27/2025, 1:13 AM

adorable-airplane-36966
09/26/2025, 2:13 PM

bright-london-54300
09/24/2025, 11:25 PM

silly-laptop-31985
09/19/2025, 10:21 PM

```python
from urdf_parser_py.urdf import URDF

robot = URDF.from_xml_file(urdf_path)
for link in robot.links:
    if link.visual is not None:
        geom = link.visual.geometry
        if geom.scale is not None:
            geom.scale = [2.0, 2.0, 2.0]

rr.log_file_from_contents(
    urdf_path,
    robot.to_xml_string().encode("utf-8"),
    static=True,
)
```
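A side note on the snippet above: it only rescales geometries that already carry a scale, so meshes without an explicit scale attribute are left untouched, which can mask the expected size change. A minimal stand-in sketch (plain ElementTree with a made-up two-link URDF, not urdf_parser_py) that sets the scale on every mesh unconditionally:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal URDF: one mesh without a scale, one with.
URDF_XML = """
<robot name="demo">
  <link name="a"><visual><geometry><mesh filename="a.stl"/></geometry></visual></link>
  <link name="b"><visual><geometry><mesh filename="b.stl" scale="1 1 1"/></geometry></visual></link>
</robot>
"""

root = ET.fromstring(URDF_XML)
# Set the scale on every mesh, whether or not one was already present.
for mesh in root.iter("mesh"):
    mesh.set("scale", "2.0 2.0 2.0")

print(all(m.get("scale") == "2.0 2.0 2.0" for m in root.iter("mesh")))  # → True
```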
Expected behavior:
The meshes should have different sizes depending on the scale.
OS: Ubuntu 22.04
Rerun version: 0.25.1
Additional context:
Many STLs are in mm, so they need scale="0.001 0.001 0.001" to be displayed properly as URDFs.

gorgeous-painting-39913
09/16/2025, 12:38 PM

mysterious-jewelry-48173
09/13/2025, 3:13 PM
The "Turbo" colormap does it the other way around. In Matplotlib I could use "Spectral" and then "Spectral_r" to invert it, but there doesn't seem to be a "Turbo_r" colormap in Rerun.
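An editor's workaround sketch (an assumption, not a documented Rerun feature): when no reversed colormap variant is available, inverting the scalar values themselves before colormapping has the same visual effect; mapping v to vmin + vmax − v flips which end of the colormap each value lands on.

```python
# Flip the values so a colormap without a reversed ("_r") variant reads
# back-to-front; pure-Python sketch, independent of any plotting library.
values = [0.0, 0.25, 1.0]
vmin, vmax = min(values), max(values)
flipped = [vmin + vmax - v for v in values]
print(flipped)  # → [1.0, 0.75, 0.0]
```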
https://cdn.discordapp.com/attachments/1416441589835763902/1416441590456516709/Screenshot_2025-09-13_at_17.07.40.png?ex=68c6db87&is=68c58a07&hm=064edc1d7d4e781629b2bb916b6a1f0f64b6c895ff2684368310e434b4626247&
https://cdn.discordapp.com/attachments/1416441589835763902/1416441590985134091/Screenshot_2025-09-13_at_17.12.53.png?ex=68c6db87&is=68c58a07&hm=841421261ec4b03074883a72084d044f0376960346f8b0508ccf234ef7b4dcac&

abundant-hospital-3590
09/08/2025, 6:56 AM

prehistoric-farmer-43967
09/05/2025, 9:15 AM
phase 2):

```sh
#!/bin/zsh
echo ":: phase 1"
mkdir src
cd src
RERUN_URL=https://github.com/rerun-io/rerun/releases/download/0.24.1/rerun_cpp_sdk.zip
RERUN_DIR=rerun_cpp_sdk
wget -q -O ${RERUN_DIR}.zip ${RERUN_URL}
unzip -q -o ${RERUN_DIR}.zip -d .
mv ${RERUN_DIR}/* .
rm -rf ${RERUN_DIR} ${RERUN_DIR}.zip

echo ":: phase 2"
cd ..
mkdir build
cd build
cmake \
    -D CMAKE_INSTALL_PREFIX=/usr/local/ \
    -D CMAKE_BUILD_TYPE=Release \
    -B . \
    -S ../src
cmake --build . --config Release --target rerun_sdk
```
In Docker, the main issues are:
```sh
fatal error: xsimd/xsimd.hpp: No such file or directory
   25 | #include <xsimd/xsimd.hpp>
      |          ^~~~~~~~~~~~~~~~~
fatal error: mimalloc.h: No such file or directory
   53 | # include <mimalloc.h>
      |           ^~~~~~~~~~~~
```
1. I tried to add include paths, but Arrow is built via CMake's ExternalProject_Add, which is why my flags are ignored.
2. I tried to add arrow-cpp using pixi in Docker, but pixi has only arrow-cpp < 14 from conda-forge repo. I suppose it is incompatible with rerun.
```sh
> pixi init
Created /build/pixi.toml
> pixi add arrow-cpp==18.0.0
Error: x failed to solve requirements of environment 'default' for platform 'linux-64'
  |-> x failed to solve the environment
  |
  `-> Cannot solve the request because of: No candidates were found for arrow-cpp ==18.0.0.
```
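An editor's note on issue 1, hedged: the rerun C++ SDK exposes a CMake switch for this situation. Per the rerun_cpp build docs there is a `RERUN_DOWNLOAD_AND_BUILD_ARROW` option (verify the exact name against the 0.24.1 docs) that, when OFF, links against a pre-installed Arrow found via `CMAKE_PREFIX_PATH` instead of the ExternalProject build that ignores your flags. A sketch of the configure step, with `/opt/arrow` as a placeholder install prefix:

```shell
# Sketch: use a pre-installed Arrow instead of the bundled ExternalProject build.
cmake \
    -D CMAKE_INSTALL_PREFIX=/usr/local/ \
    -D CMAKE_BUILD_TYPE=Release \
    -D RERUN_DOWNLOAD_AND_BUILD_ARROW=OFF \
    -D CMAKE_PREFIX_PATH=/opt/arrow \
    -B . \
    -S ../src
cmake --build . --config Release --target rerun_sdk
```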
I would greatly appreciate any help or recommendations, because I have been trying to get this working for three days straight.

high-balloon-12758
09/04/2025, 9:13 AM

ambitious-yacht-68066
09/03/2025, 9:52 AM

```sh
> cmake --build build
MSBuild version 17.8.5+b5265ef37 for .NET Framework
Checking File Globs
1>Performing update step for 'arrow_cpp'
-- Already at requested tag: apache-arrow-18.0.0
No patch step for 'arrow_cpp'
Performing configure step for 'arrow_cpp'
-- arrow_cpp configure command succeeded. See also C:/Users/emichel/SourceCode/SparkleOptimizer3/build/_deps/rerun_sdk-build/arrow/src/arrow_cpp-stamp/arrow_cpp-configure-*.log
Performing build step for 'arrow_cpp'
-- arrow_cpp build command succeeded. See also C:/Users/emichel/SourceCode/SparkleOptimizer3/build/_deps/rerun_sdk-build/arrow/src/arrow_cpp-stamp/arrow_cpp-build-*.log
Performing install step for 'arrow_cpp'
-- arrow_cpp install command succeeded. See also C:/Users/emichel/SourceCode/SparkleOptimizer3/build/_deps/rerun_sdk-build/arrow/src/arrow_cpp-stamp/arrow_cpp-install-*.log
Completed 'arrow_cpp'
rerun_sdk.vcxproj -> C:\Users\emichel\SourceCode\SparkleOptimizer3\build\_deps\rerun_sdk-build\Debug\rerun_sdk.lib
```

brave-belgium-52155
09/01/2025, 7:10 PM
I want to verify that logged rerun::Points3D objects contain identical data to the original objects.
I'm finding this challenging, because the rerun::Points3D converts the provided Collection<rerun::components::Position3D> into an Arrow-encoded array, and never stores an intermediary representation. There don't seem to be accessor functions on the rerun::Points3D::positions member, which is a std::optional<ComponentBatch>. From what I can see, there isn't a way to convert from the Arrow-encoded array back to the original datatype in the C++ SDK.
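An editor's illustration (a Python stdlib stand-in, not the C++ SDK API the question asks about): conceptually, the Arrow encoding stores positions as a flat float32 buffer grouped in threes, so recovering points is a matter of re-chunking that buffer.

```python
import struct

# Pack six float32 values the way a flat buffer of xyz triplets is laid out,
# then decode them back into 3-tuples.
flat = struct.pack("<6f", 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
vals = struct.unpack("<6f", flat)
points = [vals[i:i + 3] for i in range(0, len(vals), 3)]
print(points)  # → [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```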
Is there a way to extract the original data from the Arrow-encoded array?

echoing-translator-29651
08/31/2025, 2:12 PM

ambitious-portugal-31008
08/27/2025, 5:33 PM

adorable-ocean-38517
08/26/2025, 2:28 PM

ambitious-portugal-31008
08/22/2025, 7:32 PM
I am setting a memory limit via the serve_grpc_opts function.
I pass MemoryLimit::parse("2GB") (I also tried ::from_bytes and the raw byte count), but my rerun server process goes wild with memory usage. It keeps accumulating memory until the OS kills it or it hangs (see the screenshot; it is already at around 25 GB). I run it in a Docker container on Ubuntu, if that matters.
The web-viewer recording itself hovers around ~2 GB, which is good. But if the server runs OOM, that doesn't help much.
Is there something I misunderstand or am missing about the configurable memory limits?
At the code level, this is all I am doing:

```rust
let rec = RecordingStreamBuilder::new("test")
    .recording_id("test-recording")
    .serve_grpc_opts("0.0.0.0", 9876, MemoryLimit::parse("2GB")?)?;

loop {
    std::thread::park();
}
```
https://cdn.discordapp.com/attachments/1408534249694302229/1408534249858007162/CleanShot_2025-08-22_at_12.25.53.png?ex=68aa173e&is=68a8c5be&hm=4d3f24a0470516d847807d07eab5868c7d21a2a8be971f681b61543a5d73c42b&
https://cdn.discordapp.com/attachments/1408534249694302229/1408534250340483142/CleanShot_2025-08-22_at_12.26.022x.png?ex=68aa173e&is=68a8c5be&hm=a266a0240ef24cea32cad8e490a29efd4226c8dd394920136df51301c65c7f83&

delightful-pencil-64205
08/21/2025, 3:43 PM

mysterious-beard-55128
08/21/2025, 10:45 AM

kind-kitchen-2583
08/20/2025, 1:13 AM

average-appointment-40329
08/15/2025, 7:24 PM
LinkAxis.Independent works well, but LinkAxis.LinkGlobal doesn't show any data. I attached screenshots of Independent vs. Global link.
Is there anything wrong with the way I am logging the data?
Here's how I log the data.

For each entity:
- Fetch timestamps and values using fetch_data(entity_path, start_time, end_time)
- Send the data to Rerun:

```python
rr.send_columns(
    entity_path,
    indexes=[rr.TimeColumn("time", timestamp=timestamps)],
    columns=rr.Scalars.columns(scalars=values),
)
```

Build a list of views, one per panel:

```python
rrb.TimeSeriesView(
    origin="/",
    name=panel["name"],
    contents=entity_paths,
    axis_x=rrb.archetypes.TimeAxis(
        link=rrb.components.LinkAxis.LinkGlobal
    ),
)
```

Arrange the views into a grid layout:

```python
grid = rrb.Grid(*views, grid_columns=1)
```

Send the blueprint to Rerun:

```python
rr.send_blueprint(
    rrb.Blueprint(
        grid,
        collapse_panels=True,
    ),
    make_active=True,
)
```
https://cdn.discordapp.com/attachments/1405995563413143694/1405995564037968035/Screenshot_2025-08-15_at_3.22.02_PM.png?ex=68a0dae8&is=689f8968&hm=164bd9c87ac6d25e2db3d7de055eab4b591ce43b43f1429e9c3f6b466ba0ff8f&
https://cdn.discordapp.com/attachments/1405995563413143694/1405995564444946473/Screenshot_2025-08-15_at_3.24.03_PM.png?ex=68a0dae8&is=689f8968&hm=89724618fee1ed6db1659b575c4df51270ba77b34865be98b282ae792325e7ae&

dry-crowd-37372
08/12/2025, 9:05 PM

```python
"""Demonstrates how to log data to a gRPC server and connect the web viewer to it."""
import time

import rerun as rr

rr.init("rerun_example_serve_web_viewer")
# Start a gRPC server and use it as log sink.
server_uri = rr.serve_grpc()
# Connect the web viewer to the gRPC server and open it in the browser.
rr.serve_web_viewer(connect_to=server_uri)
# Log some data to the gRPC server.
rr.log("data", rr.Boxes3D(half_sizes=[2.0, 2.0, 1.0]))
# Keep the server running. If we stop it too early, data may never arrive in the browser.
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    print("\nShutting down server...")
```
However, no data shows up in the web viewer, and it gives me the error: "Data source rerun+http://127.0.0.1:9876/proxy has left unexpectedly: gRPC error, message: "js api error: TypeError: NetworkError when attempting to fetch resource.""