Mar 10 01:45:52.014297 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:45:52.014330 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:45:52.014343 kernel: BIOS-provided physical RAM map:
Mar 10 01:45:52.014358 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:45:52.014367 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:45:52.014376 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:45:52.014387 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 10 01:45:52.014396 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 10 01:45:52.014406 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:45:52.014416 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:45:52.014426 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:45:52.014435 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:45:52.014449 kernel: NX (Execute Disable) protection: active
Mar 10 01:45:52.014459 kernel: APIC: Static calls initialized
Mar 10 01:45:52.014477 kernel: SMBIOS 2.8 present.
Mar 10 01:45:52.014487 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 10 01:45:52.014498 kernel: Hypervisor detected: KVM
Mar 10 01:45:52.014513 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:45:52.014542 kernel: kvm-clock: using sched offset of 4440659464 cycles
Mar 10 01:45:52.014554 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:45:52.014565 kernel: tsc: Detected 2799.998 MHz processor
Mar 10 01:45:52.014576 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:45:52.014587 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:45:52.014597 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 10 01:45:52.014609 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:45:52.014619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:45:52.014636 kernel: Using GB pages for direct mapping
Mar 10 01:45:52.014646 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:45:52.014657 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 10 01:45:52.014667 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014678 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014688 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014709 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 10 01:45:52.014720 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014735 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014750 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014761 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:45:52.014772 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 10 01:45:52.014783 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 10 01:45:52.014794 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 10 01:45:52.014810 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 10 01:45:52.014822 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 10 01:45:52.014837 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 10 01:45:52.014849 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 10 01:45:52.014860 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 10 01:45:52.014870 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 10 01:45:52.014881 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 10 01:45:52.014892 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 10 01:45:52.014903 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 10 01:45:52.014914 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 10 01:45:52.014929 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 10 01:45:52.014941 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 10 01:45:52.014951 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 10 01:45:52.014962 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 10 01:45:52.014973 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 10 01:45:52.014984 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 10 01:45:52.014994 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 10 01:45:52.015005 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 10 01:45:52.015016 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 10 01:45:52.015031 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 10 01:45:52.015042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 10 01:45:52.015053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 10 01:45:52.015064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 10 01:45:52.015076 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 10 01:45:52.015087 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 10 01:45:52.015098 kernel: Zone ranges:
Mar 10 01:45:52.015109 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:45:52.015120 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 10 01:45:52.015135 kernel: Normal empty
Mar 10 01:45:52.015147 kernel: Movable zone start for each node
Mar 10 01:45:52.015158 kernel: Early memory node ranges
Mar 10 01:45:52.015169 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:45:52.015180 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 10 01:45:52.015191 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 10 01:45:52.015202 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:45:52.015212 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:45:52.015223 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 10 01:45:52.015234 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:45:52.015250 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:45:52.015261 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:45:52.015272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:45:52.015283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:45:52.015294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:45:52.015305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:45:52.015316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:45:52.015327 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:45:52.015338 kernel: TSC deadline timer available
Mar 10 01:45:52.015354 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 10 01:45:52.015365 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:45:52.015376 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:45:52.015387 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:45:52.015398 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:45:52.015409 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 10 01:45:52.015420 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 10 01:45:52.015431 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 10 01:45:52.015442 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 10 01:45:52.015458 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:45:52.015469 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:45:52.015481 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:45:52.015493 kernel: random: crng init done
Mar 10 01:45:52.015504 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:45:52.015515 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 10 01:45:52.016559 kernel: Fallback order for Node 0: 0
Mar 10 01:45:52.016574 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 10 01:45:52.016593 kernel: Policy zone: DMA32
Mar 10 01:45:52.016604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:45:52.016615 kernel: software IO TLB: area num 16.
Mar 10 01:45:52.016627 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 194764K reserved, 0K cma-reserved)
Mar 10 01:45:52.016638 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 10 01:45:52.016649 kernel: Kernel/User page tables isolation: enabled
Mar 10 01:45:52.016660 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:45:52.016671 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:45:52.016682 kernel: Dynamic Preempt: voluntary
Mar 10 01:45:52.016709 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:45:52.016722 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:45:52.016734 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 10 01:45:52.016745 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:45:52.016756 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:45:52.016779 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:45:52.016796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:45:52.016807 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 10 01:45:52.016819 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 10 01:45:52.016831 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:45:52.016843 kernel: Console: colour VGA+ 80x25
Mar 10 01:45:52.016854 kernel: printk: console [tty0] enabled
Mar 10 01:45:52.016870 kernel: printk: console [ttyS0] enabled
Mar 10 01:45:52.016882 kernel: ACPI: Core revision 20230628
Mar 10 01:45:52.016894 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:45:52.016905 kernel: x2apic enabled
Mar 10 01:45:52.016917 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:45:52.016933 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Mar 10 01:45:52.016945 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Mar 10 01:45:52.016957 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:45:52.016969 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 10 01:45:52.016980 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 10 01:45:52.016992 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:45:52.017003 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:45:52.017015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:45:52.017027 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 10 01:45:52.017038 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 10 01:45:52.017054 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 10 01:45:52.017066 kernel: MDS: Mitigation: Clear CPU buffers
Mar 10 01:45:52.017078 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 10 01:45:52.017089 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 10 01:45:52.017100 kernel: active return thunk: its_return_thunk
Mar 10 01:45:52.017112 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 10 01:45:52.017124 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:45:52.017135 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:45:52.017147 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:45:52.017158 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:45:52.017170 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 10 01:45:52.017186 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:45:52.017198 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:45:52.017209 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:45:52.017221 kernel: landlock: Up and running.
Mar 10 01:45:52.017232 kernel: SELinux: Initializing.
Mar 10 01:45:52.017243 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 10 01:45:52.017255 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 10 01:45:52.017267 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 10 01:45:52.017278 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 10 01:45:52.017290 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 10 01:45:52.017302 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 10 01:45:52.017319 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 10 01:45:52.017330 kernel: signal: max sigframe size: 1776
Mar 10 01:45:52.017342 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:45:52.017354 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:45:52.017366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:45:52.017377 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:45:52.017389 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:45:52.017400 kernel: .... node #0, CPUs: #1
Mar 10 01:45:52.017412 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 10 01:45:52.017428 kernel: smp: Brought up 1 node, 2 CPUs
Mar 10 01:45:52.017440 kernel: smpboot: Max logical packages: 16
Mar 10 01:45:52.017451 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Mar 10 01:45:52.017463 kernel: devtmpfs: initialized
Mar 10 01:45:52.017475 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:45:52.017486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:45:52.017498 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 10 01:45:52.017510 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:45:52.017534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:45:52.017553 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:45:52.017565 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:45:52.017577 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:45:52.017588 kernel: audit: type=2000 audit(1773107150.370:1): state=initialized audit_enabled=0 res=1
Mar 10 01:45:52.017600 kernel: cpuidle: using governor menu
Mar 10 01:45:52.017612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:45:52.017623 kernel: dca service started, version 1.12.1
Mar 10 01:45:52.017635 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:45:52.017646 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:45:52.017663 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:45:52.017675 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:45:52.017686 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:45:52.017707 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:45:52.017719 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:45:52.017731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:45:52.017743 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:45:52.017755 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:45:52.017772 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:45:52.017784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:45:52.017795 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:45:52.017807 kernel: ACPI: Interpreter enabled
Mar 10 01:45:52.017819 kernel: ACPI: PM: (supports S0 S5)
Mar 10 01:45:52.017830 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:45:52.017842 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:45:52.017853 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:45:52.017865 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:45:52.017877 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:45:52.018139 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:45:52.018319 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 10 01:45:52.018485 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 10 01:45:52.018503 kernel: PCI host bridge to bus 0000:00
Mar 10 01:45:52.020910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:45:52.021069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:45:52.021229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:45:52.021385 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 10 01:45:52.022569 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:45:52.022744 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 10 01:45:52.022895 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:45:52.023090 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:45:52.023292 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 10 01:45:52.023471 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 10 01:45:52.025133 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 10 01:45:52.025309 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 10 01:45:52.025477 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:45:52.025773 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.025943 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 10 01:45:52.026135 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.026301 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 10 01:45:52.026487 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.028130 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 10 01:45:52.028327 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.028498 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 10 01:45:52.028730 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.028898 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 10 01:45:52.029076 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.029241 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 10 01:45:52.029416 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.029608 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 10 01:45:52.029818 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 10 01:45:52.029987 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 10 01:45:52.030172 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:45:52.030390 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 01:45:52.030594 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 10 01:45:52.030785 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 10 01:45:52.030949 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 10 01:45:52.031129 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:45:52.031300 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 01:45:52.031472 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 10 01:45:52.033722 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 10 01:45:52.033906 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:45:52.034073 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:45:52.034255 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:45:52.034432 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 10 01:45:52.035724 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 10 01:45:52.035908 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:45:52.036076 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 01:45:52.036262 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 10 01:45:52.036433 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 10 01:45:52.037650 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 10 01:45:52.037836 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 10 01:45:52.038002 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 10 01:45:52.038228 kernel: pci_bus 0000:02: extended config space not accessible
Mar 10 01:45:52.039703 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 10 01:45:52.039916 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 10 01:45:52.040138 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 10 01:45:52.040322 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 10 01:45:52.040539 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 10 01:45:52.040748 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 10 01:45:52.040924 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 10 01:45:52.041106 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 10 01:45:52.041331 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 10 01:45:52.043611 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 10 01:45:52.043860 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 10 01:45:52.044061 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 10 01:45:52.044240 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 10 01:45:52.044412 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 10 01:45:52.045437 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 10 01:45:52.045771 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 10 01:45:52.045942 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 10 01:45:52.046121 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 10 01:45:52.046288 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 10 01:45:52.046453 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 10 01:45:52.047683 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 10 01:45:52.047868 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 10 01:45:52.048036 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 10 01:45:52.048206 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 10 01:45:52.048375 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 10 01:45:52.048562 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 10 01:45:52.048745 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 10 01:45:52.048911 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 10 01:45:52.049079 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 10 01:45:52.049098 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:45:52.049111 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:45:52.049123 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:45:52.049135 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:45:52.049147 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:45:52.049166 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:45:52.049178 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:45:52.049190 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:45:52.049202 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:45:52.049214 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:45:52.049226 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:45:52.049237 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:45:52.049249 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:45:52.049261 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:45:52.049278 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:45:52.049289 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:45:52.049301 kernel: iommu: Default domain type: Translated
Mar 10 01:45:52.049313 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:45:52.049325 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:45:52.049337 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:45:52.049349 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:45:52.049361 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 10 01:45:52.051547 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:45:52.051772 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:45:52.051946 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:45:52.051966 kernel: vgaarb: loaded
Mar 10 01:45:52.051980 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:45:52.051992 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:45:52.052004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:45:52.052016 kernel: pnp: PnP ACPI init
Mar 10 01:45:52.052191 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:45:52.052219 kernel: pnp: PnP ACPI: found 5 devices
Mar 10 01:45:52.052231 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:45:52.052243 kernel: NET: Registered PF_INET protocol family
Mar 10 01:45:52.052256 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:45:52.052268 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 10 01:45:52.052280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:45:52.052292 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 10 01:45:52.052304 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 10 01:45:52.052316 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 10 01:45:52.052333 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 10 01:45:52.052345 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 10 01:45:52.052357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:45:52.052369 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:45:52.053571 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 10 01:45:52.053768 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 10 01:45:52.053940 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 10 01:45:52.054118 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 10 01:45:52.054290 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 10 01:45:52.054457 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 10 01:45:52.055668 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 10 01:45:52.055859 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 10 01:45:52.056028 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 10 01:45:52.056203 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 10 01:45:52.056369 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 10 01:45:52.056564 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 10 01:45:52.056756 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 10 01:45:52.056922 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 10 01:45:52.057085 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 10 01:45:52.057249 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 10 01:45:52.057454 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 10 01:45:52.059730 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 10 01:45:52.059910 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 10 01:45:52.060079 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 10 01:45:52.060252 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 10 01:45:52.060418 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 10 01:45:52.062665 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 10 01:45:52.062858 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 10 01:45:52.063028 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 10 01:45:52.063196 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 10 01:45:52.063393 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 10 01:45:52.063633 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 10 01:45:52.063825 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 10 01:45:52.063999 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 10 01:45:52.064164 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 10 01:45:52.064329 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 10 01:45:52.064502 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 10 01:45:52.064701 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 10 01:45:52.064871 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 10 01:45:52.065036 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 10 01:45:52.065200 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 10 01:45:52.065374 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 10 01:45:52.071479 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 10 01:45:52.071736 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 10 01:45:52.071911 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 10 01:45:52.072093 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 10 01:45:52.072314 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 10 01:45:52.072486 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 10 01:45:52.072677 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 10 01:45:52.072865 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 10 01:45:52.073042 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 10 01:45:52.073208 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 10 01:45:52.073376 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 10 01:45:52.073601 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 10 01:45:52.073774 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:45:52.073926 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:45:52.074077 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:45:52.074227 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 10 01:45:52.074386 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:45:52.074585 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 10 01:45:52.074794 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 10 01:45:52.074966 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 10 01:45:52.075136 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 10 01:45:52.075319 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 10 01:45:52.076539 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 10 01:45:52.076744 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 10 01:45:52.076911 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 10 01:45:52.077089 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 10 01:45:52.077245 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 10 01:45:52.077400 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 10 01:45:52.080507 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 10 01:45:52.080717 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 10 01:45:52.080887 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 10 01:45:52.081065 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 10 01:45:52.081225 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 10 01:45:52.081383 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 10 01:45:52.082628 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 10 01:45:52.082818 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 10 01:45:52.082988 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 10 01:45:52.083165 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 10 01:45:52.083324 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 10 01:45:52.083482 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 10 01:45:52.083678 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 10 01:45:52.083855 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 10 01:45:52.084015 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 10 01:45:52.084042 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:45:52.084055 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:45:52.084074 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 10 01:45:52.084087 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Mar 10 01:45:52.084100 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 10 01:45:52.084113 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Mar 10 01:45:52.084132 kernel: Initialise system trusted keyrings Mar 10 01:45:52.084145 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 10 01:45:52.084157 kernel: Key type asymmetric registered Mar 10 01:45:52.084175 kernel: Asymmetric key parser 'x509' registered Mar 10 01:45:52.084196 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 10 01:45:52.084209 kernel: io scheduler mq-deadline registered Mar 10 01:45:52.084221 kernel: io scheduler kyber registered Mar 10 01:45:52.084234 kernel: io scheduler bfq registered Mar 10 01:45:52.084409 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 10 01:45:52.087629 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 10 01:45:52.087819 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.088002 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 10 01:45:52.088181 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 10 01:45:52.088348 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.088518 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 10 01:45:52.088760 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 10 01:45:52.088930 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.089109 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 10 01:45:52.089275 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 10 01:45:52.089440 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.091657 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 10 01:45:52.091843 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 10 01:45:52.092012 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.092190 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 10 01:45:52.092356 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 10 01:45:52.092551 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.092736 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 10 01:45:52.092940 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 10 01:45:52.093106 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.093285 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 10 01:45:52.093451 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 10 01:45:52.095677 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 10 01:45:52.095713 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 10 01:45:52.095728 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 10 01:45:52.095741 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 10 01:45:52.095753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 10 01:45:52.095774 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 10 01:45:52.095787 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 10 01:45:52.095800 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 10 01:45:52.095813 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 10 01:45:52.095826 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 10 01:45:52.096010 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 10 01:45:52.096172 kernel: rtc_cmos 00:03: registered as rtc0 Mar 10 01:45:52.096329 kernel: rtc_cmos 00:03: setting system clock to 2026-03-10T01:45:51 UTC (1773107151) Mar 10 01:45:52.096492 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 10 01:45:52.096512 kernel: intel_pstate: CPU model not supported Mar 10 01:45:52.096543 kernel: NET: Registered PF_INET6 protocol family Mar 10 01:45:52.096557 kernel: Segment Routing with IPv6 Mar 10 01:45:52.096570 kernel: In-situ OAM (IOAM) with IPv6 Mar 10 01:45:52.096582 kernel: NET: Registered PF_PACKET protocol family Mar 10 01:45:52.096594 kernel: Key type dns_resolver registered Mar 10 01:45:52.096607 kernel: IPI shorthand broadcast: enabled Mar 10 01:45:52.096620 kernel: sched_clock: Marking stable (1267004338, 222759134)->(1613377609, -123614137) Mar 10 01:45:52.096640 kernel: registered taskstats version 1 Mar 10 01:45:52.096652 kernel: Loading compiled-in X.509 certificates Mar 10 01:45:52.096669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa' Mar 10 01:45:52.096682 kernel: Key type .fscrypt registered Mar 10 01:45:52.096703 kernel: Key type fscrypt-provisioning registered Mar 10 01:45:52.096717 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 10 01:45:52.096729 kernel: ima: Allocated hash algorithm: sha1 Mar 10 01:45:52.096742 kernel: ima: No architecture policies found Mar 10 01:45:52.096754 kernel: clk: Disabling unused clocks Mar 10 01:45:52.096773 kernel: Freeing unused kernel image (initmem) memory: 42896K Mar 10 01:45:52.096785 kernel: Write protecting the kernel read-only data: 36864k Mar 10 01:45:52.096798 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 10 01:45:52.096810 kernel: Run /init as init process Mar 10 01:45:52.096823 kernel: with arguments: Mar 10 01:45:52.096835 kernel: /init Mar 10 01:45:52.096847 kernel: with environment: Mar 10 01:45:52.096860 kernel: HOME=/ Mar 10 01:45:52.096871 kernel: TERM=linux Mar 10 01:45:52.096887 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 10 01:45:52.096907 systemd[1]: Detected virtualization kvm. Mar 10 01:45:52.096921 systemd[1]: Detected architecture x86-64. Mar 10 01:45:52.096934 systemd[1]: Running in initrd. Mar 10 01:45:52.096947 systemd[1]: No hostname configured, using default hostname. Mar 10 01:45:52.096960 systemd[1]: Hostname set to . Mar 10 01:45:52.096973 systemd[1]: Initializing machine ID from VM UUID. Mar 10 01:45:52.096991 systemd[1]: Queued start job for default target initrd.target. Mar 10 01:45:52.097006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 01:45:52.097019 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 01:45:52.097033 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 10 01:45:52.097047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 01:45:52.097060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 10 01:45:52.097074 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 10 01:45:52.097089 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 10 01:45:52.097107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 10 01:45:52.097121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 01:45:52.097134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 01:45:52.097148 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:45:52.097161 systemd[1]: Reached target slices.target - Slice Units. Mar 10 01:45:52.097174 systemd[1]: Reached target swap.target - Swaps. Mar 10 01:45:52.097187 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:45:52.097200 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 01:45:52.097223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 01:45:52.097237 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 10 01:45:52.097250 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 10 01:45:52.097271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 01:45:52.097285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 01:45:52.097298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 01:45:52.097311 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 10 01:45:52.097325 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 10 01:45:52.097351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 01:45:52.097364 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 10 01:45:52.097378 systemd[1]: Starting systemd-fsck-usr.service... Mar 10 01:45:52.097391 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 01:45:52.097405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 01:45:52.097418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:45:52.097492 systemd-journald[203]: Collecting audit messages is disabled. Mar 10 01:45:52.097907 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 10 01:45:52.097931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 01:45:52.097946 systemd[1]: Finished systemd-fsck-usr.service. Mar 10 01:45:52.097973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 10 01:45:52.097989 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 10 01:45:52.098009 systemd-journald[203]: Journal started Mar 10 01:45:52.098034 systemd-journald[203]: Runtime Journal (/run/log/journal/c1c0b790e12645b09e990b3be40733ba) is 4.7M, max 38.0M, 33.2M free. Mar 10 01:45:52.041727 systemd-modules-load[204]: Inserted module 'overlay' Mar 10 01:45:52.108409 kernel: Bridge firewalling registered Mar 10 01:45:52.104742 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 10 01:45:52.115547 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 01:45:52.115073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 10 01:45:52.115996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:45:52.120113 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 01:45:52.134800 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:45:52.143725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:45:52.145201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 01:45:52.159766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 01:45:52.164024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:45:52.173746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 10 01:45:52.177824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 01:45:52.186190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:45:52.187374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:45:52.196785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 01:45:52.200452 dracut-cmdline[233]: dracut-dracut-053 Mar 10 01:45:52.203482 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc Mar 10 01:45:52.241963 systemd-resolved[241]: Positive Trust Anchors: Mar 10 01:45:52.243044 systemd-resolved[241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:45:52.243090 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:45:52.250949 systemd-resolved[241]: Defaulting to hostname 'linux'. Mar 10 01:45:52.253176 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:45:52.254967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:45:52.306670 kernel: SCSI subsystem initialized Mar 10 01:45:52.317630 kernel: Loading iSCSI transport class v2.0-870. Mar 10 01:45:52.330808 kernel: iscsi: registered transport (tcp) Mar 10 01:45:52.356267 kernel: iscsi: registered transport (qla4xxx) Mar 10 01:45:52.356364 kernel: QLogic iSCSI HBA Driver Mar 10 01:45:52.413180 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 10 01:45:52.419786 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 10 01:45:52.453165 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 10 01:45:52.453253 kernel: device-mapper: uevent: version 1.0.3 Mar 10 01:45:52.453273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 10 01:45:52.503572 kernel: raid6: sse2x4 gen() 14145 MB/s Mar 10 01:45:52.520571 kernel: raid6: sse2x2 gen() 9306 MB/s Mar 10 01:45:52.539083 kernel: raid6: sse2x1 gen() 9594 MB/s Mar 10 01:45:52.539152 kernel: raid6: using algorithm sse2x4 gen() 14145 MB/s Mar 10 01:45:52.558183 kernel: raid6: .... xor() 7947 MB/s, rmw enabled Mar 10 01:45:52.558256 kernel: raid6: using ssse3x2 recovery algorithm Mar 10 01:45:52.583548 kernel: xor: automatically using best checksumming function avx Mar 10 01:45:52.774592 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 01:45:52.790898 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 01:45:52.800795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:45:52.832327 systemd-udevd[421]: Using default interface naming scheme 'v255'. Mar 10 01:45:52.840880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:45:52.849232 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 01:45:52.869886 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Mar 10 01:45:52.909247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 01:45:52.915801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 01:45:53.036897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 01:45:53.044788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 10 01:45:53.076199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 10 01:45:53.079838 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 10 01:45:53.082725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 01:45:53.084890 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 01:45:53.090815 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 01:45:53.115803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 01:45:53.156618 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 10 01:45:53.173546 kernel: cryptd: max_cpu_qlen set to 1000 Mar 10 01:45:53.181647 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 10 01:45:53.205431 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 10 01:45:53.205506 kernel: GPT:17805311 != 125829119 Mar 10 01:45:53.205551 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 10 01:45:53.205570 kernel: GPT:17805311 != 125829119 Mar 10 01:45:53.205597 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 10 01:45:53.205626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:45:53.208836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 01:45:53.209892 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:45:53.211823 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:45:53.212625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:45:53.212813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:45:53.216309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:45:53.226884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:45:53.236692 kernel: libata version 3.00 loaded. 
Mar 10 01:45:53.246571 kernel: ACPI: bus type USB registered Mar 10 01:45:53.248706 kernel: usbcore: registered new interface driver usbfs Mar 10 01:45:53.251426 kernel: usbcore: registered new interface driver hub Mar 10 01:45:53.251476 kernel: usbcore: registered new device driver usb Mar 10 01:45:53.265549 kernel: AVX version of gcm_enc/dec engaged. Mar 10 01:45:53.267541 kernel: AES CTR mode by8 optimization enabled Mar 10 01:45:53.297645 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469) Mar 10 01:45:53.314620 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 10 01:45:53.314948 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 10 01:45:53.316641 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 10 01:45:53.320042 kernel: ahci 0000:00:1f.2: version 3.0 Mar 10 01:45:53.320319 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 10 01:45:53.320371 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 10 01:45:53.320620 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 10 01:45:53.324621 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 10 01:45:53.324904 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 10 01:45:53.325111 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 10 01:45:53.326570 kernel: hub 1-0:1.0: USB hub found Mar 10 01:45:53.326819 kernel: hub 1-0:1.0: 4 ports detected Mar 10 01:45:53.327033 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 10 01:45:53.327250 kernel: hub 2-0:1.0: USB hub found Mar 10 01:45:53.327466 kernel: hub 2-0:1.0: 4 ports detected Mar 10 01:45:53.334535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 10 01:45:53.435901 kernel: scsi host0: ahci Mar 10 01:45:53.436333 kernel: scsi host1: ahci Mar 10 01:45:53.436620 kernel: scsi host2: ahci Mar 10 01:45:53.436850 kernel: scsi host3: ahci Mar 10 01:45:53.437091 kernel: scsi host4: ahci Mar 10 01:45:53.437286 kernel: scsi host5: ahci Mar 10 01:45:53.437797 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 10 01:45:53.437822 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 10 01:45:53.437847 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 10 01:45:53.437864 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 10 01:45:53.437881 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 10 01:45:53.437897 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 10 01:45:53.437914 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474) Mar 10 01:45:53.434029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 10 01:45:53.435212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:45:53.448641 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 10 01:45:53.460550 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 10 01:45:53.471898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 01:45:53.481824 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 01:45:53.485890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:45:53.498549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:45:53.499709 disk-uuid[564]: Primary Header is updated. 
Mar 10 01:45:53.499709 disk-uuid[564]: Secondary Entries is updated. Mar 10 01:45:53.499709 disk-uuid[564]: Secondary Header is updated. Mar 10 01:45:53.532456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:45:53.570071 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 10 01:45:53.662380 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.662443 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.662462 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.664074 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.664551 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.667546 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 10 01:45:53.709608 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 10 01:45:53.716937 kernel: usbcore: registered new interface driver usbhid Mar 10 01:45:53.716997 kernel: usbhid: USB HID core driver Mar 10 01:45:53.725112 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 10 01:45:53.725162 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 10 01:45:54.523717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:45:54.525578 disk-uuid[566]: The operation has completed successfully. Mar 10 01:45:54.585792 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 01:45:54.586874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 01:45:54.599767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 01:45:54.606155 sh[588]: Success Mar 10 01:45:54.622714 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 10 01:45:54.681756 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 10 01:45:54.691677 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 01:45:54.694587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 10 01:45:54.717688 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486 Mar 10 01:45:54.717757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:45:54.717777 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 10 01:45:54.719976 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 10 01:45:54.723131 kernel: BTRFS info (device dm-0): using free space tree Mar 10 01:45:54.732918 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 01:45:54.734471 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 01:45:54.742905 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 01:45:54.745748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 01:45:54.758546 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:45:54.758617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:45:54.759748 kernel: BTRFS info (device vda6): using free space tree Mar 10 01:45:54.767976 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 01:45:54.783701 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:45:54.783269 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 10 01:45:54.794879 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 01:45:54.805757 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 10 01:45:54.902610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 01:45:54.912906 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:45:54.955052 systemd-networkd[769]: lo: Link UP Mar 10 01:45:54.955731 systemd-networkd[769]: lo: Gained carrier Mar 10 01:45:54.955466 ignition[680]: Ignition 2.19.0 Mar 10 01:45:54.955484 ignition[680]: Stage: fetch-offline Mar 10 01:45:54.959487 systemd-networkd[769]: Enumeration completed Mar 10 01:45:54.957289 ignition[680]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:45:54.960114 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:45:54.957360 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 10 01:45:54.962244 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:45:54.961062 ignition[680]: parsed url from cmdline: "" Mar 10 01:45:54.962249 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:45:54.961074 ignition[680]: no config URL provided Mar 10 01:45:54.964112 systemd-networkd[769]: eth0: Link UP Mar 10 01:45:54.961089 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 01:45:54.964131 systemd-networkd[769]: eth0: Gained carrier Mar 10 01:45:54.961120 ignition[680]: no config at "/usr/lib/ignition/user.ign" Mar 10 01:45:54.964141 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:45:54.961131 ignition[680]: failed to fetch config: resource requires networking Mar 10 01:45:54.965963 systemd[1]: Reached target network.target - Network. 
Mar 10 01:45:54.961448 ignition[680]: Ignition finished successfully
Mar 10 01:45:54.967018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:45:54.977895 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 10 01:45:54.989621 systemd-networkd[769]: eth0: DHCPv4 address 10.230.66.170/30, gateway 10.230.66.169 acquired from 10.230.66.169
Mar 10 01:45:55.000830 ignition[776]: Ignition 2.19.0
Mar 10 01:45:55.000849 ignition[776]: Stage: fetch
Mar 10 01:45:55.001092 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:45:55.001111 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:45:55.001230 ignition[776]: parsed url from cmdline: ""
Mar 10 01:45:55.001236 ignition[776]: no config URL provided
Mar 10 01:45:55.001245 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:45:55.001260 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:45:55.001473 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 10 01:45:55.001598 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 10 01:45:55.001679 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 10 01:45:55.016981 ignition[776]: GET result: OK
Mar 10 01:45:55.017141 ignition[776]: parsing config with SHA512: 1ef636991b42b32cedc5dd82b076ee272d883080da0970edd6dc1321568884801499c61eef3bbb06af746d18ae0de3d3bae41f64574d735aefe58b53d9555495
Mar 10 01:45:55.022699 unknown[776]: fetched base config from "system"
Mar 10 01:45:55.022716 unknown[776]: fetched base config from "system"
Mar 10 01:45:55.023167 ignition[776]: fetch: fetch complete
Mar 10 01:45:55.022725 unknown[776]: fetched user config from "openstack"
Mar 10 01:45:55.023175 ignition[776]: fetch: fetch passed
Mar 10 01:45:55.026346 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 10 01:45:55.023243 ignition[776]: Ignition finished successfully
Mar 10 01:45:55.032797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 01:45:55.056619 ignition[784]: Ignition 2.19.0
Mar 10 01:45:55.056639 ignition[784]: Stage: kargs
Mar 10 01:45:55.057686 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:45:55.057707 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:45:55.060888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:45:55.058821 ignition[784]: kargs: kargs passed
Mar 10 01:45:55.058889 ignition[784]: Ignition finished successfully
Mar 10 01:45:55.075742 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:45:55.093425 ignition[790]: Ignition 2.19.0
Mar 10 01:45:55.093454 ignition[790]: Stage: disks
Mar 10 01:45:55.093741 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:45:55.096120 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:45:55.093760 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:45:55.097892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 01:45:55.094885 ignition[790]: disks: disks passed
Mar 10 01:45:55.098888 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:45:55.094953 ignition[790]: Ignition finished successfully
Mar 10 01:45:55.100501 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:45:55.102066 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:45:55.103312 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:45:55.117796 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:45:55.137096 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 10 01:45:55.140587 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:45:55.147066 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:45:55.274318 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:45:55.273713 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:45:55.275858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:45:55.282684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:45:55.285384 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:45:55.286424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:45:55.289732 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 10 01:45:55.291417 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:45:55.291454 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:45:55.303575 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Mar 10 01:45:55.310561 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:45:55.310634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:45:55.310689 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:45:55.310431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:45:55.326583 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:45:55.330768 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:45:55.336271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:45:55.395621 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:45:55.405968 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:45:55.413802 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:45:55.421058 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:45:55.526587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:45:55.533673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:45:55.536720 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:45:55.550599 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:45:55.578731 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:45:55.587159 ignition[922]: INFO : Ignition 2.19.0
Mar 10 01:45:55.587159 ignition[922]: INFO : Stage: mount
Mar 10 01:45:55.589503 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:45:55.589503 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:45:55.589503 ignition[922]: INFO : mount: mount passed
Mar 10 01:45:55.589503 ignition[922]: INFO : Ignition finished successfully
Mar 10 01:45:55.589942 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:45:55.714317 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:45:56.754795 systemd-networkd[769]: eth0: Gained IPv6LL
Mar 10 01:45:57.094217 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90aa:24:19ff:fee6:42aa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90aa:24:19ff:fee6:42aa/64 assigned by NDisc.
Mar 10 01:45:57.094245 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 10 01:46:02.464425 coreos-metadata[808]: Mar 10 01:46:02.464 WARN failed to locate config-drive, using the metadata service API instead
Mar 10 01:46:02.486626 coreos-metadata[808]: Mar 10 01:46:02.486 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 10 01:46:02.500477 coreos-metadata[808]: Mar 10 01:46:02.500 INFO Fetch successful
Mar 10 01:46:02.501551 coreos-metadata[808]: Mar 10 01:46:02.501 INFO wrote hostname srv-p0r5l.gb1.brightbox.com to /sysroot/etc/hostname
Mar 10 01:46:02.503816 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 10 01:46:02.504021 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 10 01:46:02.515689 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:46:02.529715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:46:02.554557 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Mar 10 01:46:02.558568 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:46:02.562959 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:46:02.563024 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:46:02.567583 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:46:02.570438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:46:02.603858 ignition[957]: INFO : Ignition 2.19.0
Mar 10 01:46:02.605018 ignition[957]: INFO : Stage: files
Mar 10 01:46:02.606578 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:46:02.606578 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:46:02.608861 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 01:46:02.610956 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 01:46:02.610956 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 01:46:02.615384 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 01:46:02.616481 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 01:46:02.617744 unknown[957]: wrote ssh authorized keys file for user: core
Mar 10 01:46:02.618761 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 01:46:02.620168 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:46:02.621428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 01:46:02.796677 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 01:46:03.078580 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:46:03.078580 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:46:03.081136 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 10 01:46:03.671885 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 10 01:46:06.225364 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:46:06.225364 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 10 01:46:06.228261 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:46:06.228261 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:46:06.228261 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 10 01:46:06.228261 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 01:46:06.228261 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:46:06.236059 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:46:06.236059 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:46:06.236059 ignition[957]: INFO : files: files passed
Mar 10 01:46:06.236059 ignition[957]: INFO : Ignition finished successfully
Mar 10 01:46:06.230369 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:46:06.239717 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:46:06.241374 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:46:06.254424 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:46:06.254612 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:46:06.263279 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:46:06.263279 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:46:06.267121 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:46:06.268056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:46:06.269461 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:46:06.288728 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:46:06.316373 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:46:06.316615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:46:06.318917 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:46:06.320123 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:46:06.321780 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:46:06.327738 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:46:06.346680 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:46:06.359802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:46:06.371423 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:46:06.372307 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:46:06.373964 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:46:06.375378 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:46:06.375578 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:46:06.377509 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:46:06.378495 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:46:06.380078 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:46:06.381375 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:46:06.382734 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:46:06.384241 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:46:06.385759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:46:06.387288 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:46:06.388726 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:46:06.390244 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:46:06.391618 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:46:06.391780 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:46:06.393609 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:46:06.394562 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:46:06.395900 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:46:06.396332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:46:06.397674 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:46:06.397906 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:46:06.399801 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:46:06.399983 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:46:06.401025 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:46:06.401241 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:46:06.408735 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:46:06.409426 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:46:06.409629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:46:06.414764 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:46:06.417588 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:46:06.418641 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:46:06.420093 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:46:06.420264 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:46:06.430633 ignition[1010]: INFO : Ignition 2.19.0
Mar 10 01:46:06.430633 ignition[1010]: INFO : Stage: umount
Mar 10 01:46:06.438125 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:46:06.438125 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 10 01:46:06.438125 ignition[1010]: INFO : umount: umount passed
Mar 10 01:46:06.438125 ignition[1010]: INFO : Ignition finished successfully
Mar 10 01:46:06.435232 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:46:06.435385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:46:06.437957 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:46:06.444547 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:46:06.446572 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:46:06.447195 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:46:06.448003 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:46:06.448085 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:46:06.449075 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 10 01:46:06.449147 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 10 01:46:06.449884 systemd[1]: Stopped target network.target - Network.
Mar 10 01:46:06.452219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:46:06.452285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:46:06.453922 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:46:06.455495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:46:06.459590 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:46:06.460454 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:46:06.461068 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:46:06.462668 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:46:06.462748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:46:06.463930 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:46:06.464011 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:46:06.465303 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:46:06.465373 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:46:06.466614 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:46:06.466679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:46:06.468131 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:46:06.470098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:46:06.473871 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:46:06.474667 systemd-networkd[769]: eth0: DHCPv6 lease lost
Mar 10 01:46:06.474891 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:46:06.476702 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:46:06.478007 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:46:06.478207 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:46:06.481160 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:46:06.481389 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:46:06.485290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:46:06.485606 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:46:06.487729 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:46:06.487801 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:46:06.495680 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:46:06.499242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:46:06.499320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:46:06.500837 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:46:06.500906 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:46:06.502113 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:46:06.502177 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:46:06.503639 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:46:06.503704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:46:06.505386 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:46:06.516471 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:46:06.516741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:46:06.519456 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:46:06.519650 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:46:06.521438 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:46:06.521998 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:46:06.523576 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:46:06.523630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:46:06.525103 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:46:06.525166 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:46:06.527239 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:46:06.527306 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:46:06.528623 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:46:06.528692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:46:06.535713 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:46:06.536635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:46:06.536702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:46:06.538365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:46:06.538458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:46:06.549344 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:46:06.549506 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:46:06.551392 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:46:06.561742 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:46:06.571758 systemd[1]: Switching root.
Mar 10 01:46:06.610171 systemd-journald[203]: Journal stopped
Mar 10 01:46:08.186991 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:46:08.187091 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:46:08.187122 kernel: SELinux: policy capability open_perms=1
Mar 10 01:46:08.187141 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:46:08.187158 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:46:08.187182 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:46:08.187206 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:46:08.187242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:46:08.187259 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:46:08.187300 systemd[1]: Successfully loaded SELinux policy in 49.737ms.
Mar 10 01:46:08.187331 kernel: audit: type=1403 audit(1773107166.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:46:08.187350 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.339ms.
Mar 10 01:46:08.187369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:46:08.187388 systemd[1]: Detected virtualization kvm.
Mar 10 01:46:08.187437 systemd[1]: Detected architecture x86-64.
Mar 10 01:46:08.187459 systemd[1]: Detected first boot.
Mar 10 01:46:08.187492 systemd[1]: Hostname set to .
Mar 10 01:46:08.190544 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:46:08.190578 zram_generator::config[1053]: No configuration found.
Mar 10 01:46:08.190599 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:46:08.190619 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:46:08.190639 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:46:08.190659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:46:08.190687 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:46:08.190727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:46:08.190776 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:46:08.190802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:46:08.190823 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:46:08.190854 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:46:08.190873 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:46:08.190905 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:46:08.190923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:46:08.190943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:46:08.190969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:46:08.191001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:46:08.191023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:46:08.191043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:46:08.191062 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:46:08.191082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:46:08.191107 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:46:08.191128 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:46:08.191165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:46:08.191199 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:46:08.191218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:46:08.191237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:46:08.191256 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:46:08.191277 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:46:08.191295 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:46:08.191314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:46:08.191346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:46:08.191377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:46:08.191409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:46:08.191449 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:46:08.191470 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:46:08.191494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:46:08.191513 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:46:08.191568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:08.191606 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:46:08.191628 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:46:08.191647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:46:08.191667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:46:08.191687 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:46:08.191706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:46:08.191725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:46:08.191765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:46:08.191786 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:46:08.191824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:46:08.191846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:46:08.191865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:46:08.191884 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:46:08.191903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:46:08.191929 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:46:08.191962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:46:08.191992 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:46:08.192011 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:46:08.192035 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:46:08.192062 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:46:08.192089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:46:08.192109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:46:08.192128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:46:08.192147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:46:08.192182 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:46:08.192209 systemd[1]: Stopped verity-setup.service.
Mar 10 01:46:08.192230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:08.192249 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:46:08.192276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:46:08.192294 kernel: fuse: init (API version 7.39)
Mar 10 01:46:08.192312 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:46:08.192371 systemd-journald[1149]: Collecting audit messages is disabled.
Mar 10 01:46:08.192440 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:46:08.192463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:46:08.192483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:46:08.192501 kernel: loop: module loaded
Mar 10 01:46:08.193606 systemd-journald[1149]: Journal started
Mar 10 01:46:08.193650 systemd-journald[1149]: Runtime Journal (/run/log/journal/c1c0b790e12645b09e990b3be40733ba) is 4.7M, max 38.0M, 33.2M free.
Mar 10 01:46:07.800512 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:46:07.823739 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:46:07.824580 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:46:08.196564 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:46:08.201543 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:46:08.204138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:46:08.205305 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:46:08.205592 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:46:08.206706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:46:08.206924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:46:08.208145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:46:08.208339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:46:08.210385 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:46:08.210626 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:46:08.212755 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:46:08.212954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:46:08.214079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:46:08.215198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:46:08.217224 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:46:08.241460 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:46:08.260009 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:46:08.265587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:46:08.266350 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:46:08.266408 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:46:08.270495 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:46:08.294546 kernel: ACPI: bus type drm_connector registered
Mar 10 01:46:08.295800 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:46:08.305708 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:46:08.306642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:46:08.308635 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:46:08.312747 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:46:08.313660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:46:08.315880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:46:08.317624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:46:08.326730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:46:08.331509 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:46:08.335781 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:46:08.341134 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:46:08.341383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:46:08.342652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:46:08.343707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:46:08.344774 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:46:08.380831 systemd-journald[1149]: Time spent on flushing to /var/log/journal/c1c0b790e12645b09e990b3be40733ba is 109.593ms for 1135 entries.
Mar 10 01:46:08.380831 systemd-journald[1149]: System Journal (/var/log/journal/c1c0b790e12645b09e990b3be40733ba) is 8.0M, max 584.8M, 576.8M free.
Mar 10 01:46:08.518605 systemd-journald[1149]: Received client request to flush runtime journal.
Mar 10 01:46:08.518692 kernel: loop0: detected capacity change from 0 to 140768
Mar 10 01:46:08.518720 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:46:08.518742 kernel: loop1: detected capacity change from 0 to 217752
Mar 10 01:46:08.379908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:46:08.384078 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:46:08.391705 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:46:08.415868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:46:08.497683 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:46:08.500607 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:46:08.520436 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:46:08.553654 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:46:08.565710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:46:08.566977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:46:08.570598 kernel: loop2: detected capacity change from 0 to 142488
Mar 10 01:46:08.579305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:46:08.606305 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 10 01:46:08.622303 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Mar 10 01:46:08.622327 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Mar 10 01:46:08.631859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:46:08.651736 kernel: loop3: detected capacity change from 0 to 8
Mar 10 01:46:08.688403 kernel: loop4: detected capacity change from 0 to 140768
Mar 10 01:46:08.713608 kernel: loop5: detected capacity change from 0 to 217752
Mar 10 01:46:08.736813 kernel: loop6: detected capacity change from 0 to 142488
Mar 10 01:46:08.762221 kernel: loop7: detected capacity change from 0 to 8
Mar 10 01:46:08.759693 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 10 01:46:08.760559 (sd-merge)[1212]: Merged extensions into '/usr'.
Mar 10 01:46:08.767301 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:46:08.767428 systemd[1]: Reloading...
Mar 10 01:46:08.890604 zram_generator::config[1235]: No configuration found.
Mar 10 01:46:09.097952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:46:09.151539 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:46:09.180266 systemd[1]: Reloading finished in 412 ms.
Mar 10 01:46:09.208935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:46:09.212882 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:46:09.223825 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:46:09.235310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:46:09.254669 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:46:09.254703 systemd[1]: Reloading...
Mar 10 01:46:09.276682 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:46:09.277290 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:46:09.280871 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:46:09.281304 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 10 01:46:09.281444 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 10 01:46:09.291993 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:46:09.292051 systemd-tmpfiles[1295]: Skipping /boot
Mar 10 01:46:09.317909 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:46:09.317929 systemd-tmpfiles[1295]: Skipping /boot
Mar 10 01:46:09.392556 zram_generator::config[1334]: No configuration found.
Mar 10 01:46:09.545204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:46:09.611569 systemd[1]: Reloading finished in 356 ms.
Mar 10 01:46:09.636480 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:46:09.642118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:46:09.652776 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:46:09.657714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:46:09.669745 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:46:09.675457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:46:09.677998 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:46:09.686732 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 01:46:09.697885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.698163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:46:09.704068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:46:09.708820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:46:09.711746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:46:09.713239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:46:09.713406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.718631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.718893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:46:09.719118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:46:09.728842 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 10 01:46:09.730614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.743599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.745010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:46:09.760045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:46:09.761989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:46:09.763850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:46:09.766511 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 01:46:09.776236 systemd[1]: Finished ensure-sysext.service.
Mar 10 01:46:09.778430 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 01:46:09.789029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:46:09.790669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:46:09.794112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:46:09.794860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:46:09.797203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:46:09.798670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:46:09.801146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:46:09.802605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:46:09.804896 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Mar 10 01:46:09.810539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:46:09.810698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:46:09.819731 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 01:46:09.830720 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 01:46:09.831399 augenrules[1415]: No rules
Mar 10 01:46:09.834488 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:46:09.861660 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:46:09.875720 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:46:09.878354 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 01:46:09.880976 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 01:46:09.887495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 01:46:09.919984 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 10 01:46:10.072294 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 01:46:10.083914 systemd-networkd[1427]: lo: Link UP
Mar 10 01:46:10.083926 systemd-networkd[1427]: lo: Gained carrier
Mar 10 01:46:10.085163 systemd-networkd[1427]: Enumeration completed
Mar 10 01:46:10.085288 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:46:10.095752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 01:46:10.109548 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 01:46:10.145556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1439)
Mar 10 01:46:10.175062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 01:46:10.176091 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 01:46:10.183905 systemd-resolved[1390]: Positive Trust Anchors:
Mar 10 01:46:10.183927 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:46:10.183968 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:46:10.186559 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 10 01:46:10.194221 systemd-resolved[1390]: Using system hostname 'srv-p0r5l.gb1.brightbox.com'.
Mar 10 01:46:10.195565 kernel: ACPI: button: Power Button [PWRF]
Mar 10 01:46:10.197617 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:46:10.198468 systemd[1]: Reached target network.target - Network.
Mar 10 01:46:10.199130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:46:10.237984 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:46:10.237997 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:46:10.241505 systemd-networkd[1427]: eth0: Link UP
Mar 10 01:46:10.241518 systemd-networkd[1427]: eth0: Gained carrier
Mar 10 01:46:10.241565 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:46:10.256927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:46:10.266357 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 01:46:10.268066 systemd-networkd[1427]: eth0: DHCPv4 address 10.230.66.170/30, gateway 10.230.66.169 acquired from 10.230.66.169
Mar 10 01:46:10.269221 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection.
Mar 10 01:46:10.283830 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 01:46:10.284264 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 01:46:10.287894 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 01:46:10.302603 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 01:46:10.309546 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 10 01:46:10.412635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:46:10.570036 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 10 01:46:10.593827 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 10 01:46:10.595096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:46:10.616430 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:46:10.653164 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 01:46:10.655030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:46:10.655888 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:46:10.656766 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 01:46:10.657693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 01:46:10.658757 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 01:46:10.659668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 01:46:10.660451 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 01:46:10.661207 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 01:46:10.661259 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:46:10.661942 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:46:10.663419 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 01:46:10.666397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 01:46:10.676715 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 01:46:10.679407 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 01:46:10.680933 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 01:46:10.681746 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:46:10.682436 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:46:10.683108 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:46:10.683163 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:46:10.690608 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 01:46:10.695743 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 10 01:46:10.698807 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:46:10.699729 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 01:46:10.703173 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 01:46:10.713708 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 01:46:10.715077 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 01:46:10.718225 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 01:46:10.723675 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 01:46:10.727644 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 01:46:10.731697 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 01:46:10.743182 jq[1477]: false
Mar 10 01:46:10.749297 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 01:46:10.752804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 01:46:10.753563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 01:46:10.760395 extend-filesystems[1478]: Found loop4
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found loop5
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found loop6
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found loop7
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda1
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda2
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda3
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found usr
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda4
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda6
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda7
Mar 10 01:46:10.762443 extend-filesystems[1478]: Found vda9
Mar 10 01:46:10.762443 extend-filesystems[1478]: Checking size of /dev/vda9
Mar 10 01:46:10.761720 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 01:46:10.772618 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 01:46:10.783058 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 01:46:10.783339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 01:46:10.806616 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 01:46:10.815419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1434)
Mar 10 01:46:10.819341 extend-filesystems[1478]: Resized partition /dev/vda9
Mar 10 01:46:10.822751 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024)
Mar 10 01:46:10.847138 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 01:46:10.847475 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 01:46:10.853168 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 01:46:10.854645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 01:46:10.879551 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 10 01:46:10.879617 jq[1489]: true
Mar 10 01:46:10.906601 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 01:46:10.918315 dbus-daemon[1476]: [system] SELinux support is enabled
Mar 10 01:46:10.918575 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 01:46:10.924614 tar[1508]: linux-amd64/LICENSE
Mar 10 01:46:10.924614 tar[1508]: linux-amd64/helm
Mar 10 01:46:10.931252 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 01:46:10.931304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 01:46:10.932816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 01:46:10.932845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 01:46:10.947615 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 10 01:46:10.958159 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 10 01:46:10.961838 update_engine[1486]: I20260310 01:46:10.961113 1486 main.cc:92] Flatcar Update Engine starting
Mar 10 01:46:10.978624 jq[1513]: true
Mar 10 01:46:10.975915 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 10 01:46:10.994394 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 01:46:11.006741 update_engine[1486]: I20260310 01:46:10.996568 1486 update_check_scheduler.cc:74] Next update check in 11m17s
Mar 10 01:46:11.004192 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 10 01:46:11.004224 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 01:46:11.004593 systemd-logind[1485]: New seat seat0.
Mar 10 01:46:11.013339 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 01:46:11.016631 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 01:46:11.152920 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 10 01:46:11.153123 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 10 01:46:11.153765 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 10 01:46:11.167848 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 10 01:46:11.213386 polkitd[1535]: Started polkitd version 121
Mar 10 01:46:11.238874 polkitd[1535]: Loading rules from directory /etc/polkit-1/rules.d
Mar 10 01:46:11.239274 polkitd[1535]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 10 01:46:11.243866 polkitd[1535]: Finished loading, compiling and executing 2 rules
Mar 10 01:46:11.249023 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 01:46:11.251016 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 10 01:46:11.252403 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 01:46:11.253495 polkitd[1535]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 10 01:46:11.255472 systemd[1]: Started polkit.service - Authorization Manager.
Mar 10 01:46:11.266547 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 10 01:46:11.267587 systemd[1]: Starting sshkeys.service...
Mar 10 01:46:11.310565 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 01:46:11.310565 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 10 01:46:11.310565 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 10 01:46:11.335416 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Mar 10 01:46:11.319841 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 01:46:11.320132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 01:46:11.358113 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 10 01:46:11.369035 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 10 01:46:11.378839 systemd-hostnamed[1517]: Hostname set to (static)
Mar 10 01:46:11.430610 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 01:46:11.517550 containerd[1511]: time="2026-03-10T01:46:11.515831077Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 01:46:11.596223 containerd[1511]: time="2026-03-10T01:46:11.596152993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.602986 systemd-networkd[1427]: eth0: Gained IPv6LL
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.603035915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.603094233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.603120300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.605410621Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.605501738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.606724315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.606755542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.609324035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.609395862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.609426044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:46:11.613866 containerd[1511]: time="2026-03-10T01:46:11.609443608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.610138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.609829893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.611492616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.612194663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.612695383Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.612870577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 01:46:11.614431 containerd[1511]: time="2026-03-10T01:46:11.613641474Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 01:46:11.613391 systemd[1]: Reached target network-online.target - Network is Online.
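The extend-filesystems messages above record an on-line resize2fs grow of /dev/vda9 from 1617920 to 15121403 blocks of 4 KiB. A small sketch to sanity-check what those block counts mean in GiB (the block counts and 4k block size are taken from the log; the helper name is ours):

```python
# Sanity-check the resize2fs figures from the log above:
# /dev/vda9 grew from 1617920 to 15121403 blocks of 4 KiB each.
BLOCK_SIZE = 4096  # "(4k) blocks" per the extend-filesystems message

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

old_gib = blocks_to_gib(1_617_920)
new_gib = blocks_to_gib(15_121_403)
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
# -> before: 6.17 GiB, after: 57.68 GiB
```

So the root filesystem grew from roughly 6 GiB (the shipped image size) to the full ~58 GiB partition on first boot.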
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.617721476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.617808185Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.617836848Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.617861851Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.617889186Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618082150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618377114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618560629Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618585142Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618606049Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618629314Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618668076Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618694121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.619746 containerd[1511]: time="2026-03-10T01:46:11.618726390Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618755822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618832781Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618857523Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618875285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618923067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618944221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618962931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.618981878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619013185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619031269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619048068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619078980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619096853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.620291 containerd[1511]: time="2026-03-10T01:46:11.619127900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619145498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619162692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619195849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619228469Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619267791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619289542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619306858Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619388510Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619421784Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619440065Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619458101Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.619474382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.622409508Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 01:46:11.622818 containerd[1511]: time="2026-03-10T01:46:11.622442853Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 01:46:11.623214 containerd[1511]: time="2026-03-10T01:46:11.622482732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 01:46:11.626563 containerd[1511]: time="2026-03-10T01:46:11.623948856Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 01:46:11.626563 containerd[1511]: time="2026-03-10T01:46:11.624280702Z" level=info msg="Connect containerd service"
Mar 10 01:46:11.626563 containerd[1511]: time="2026-03-10T01:46:11.624374885Z" level=info msg="using legacy CRI server"
Mar 10 01:46:11.626563 containerd[1511]: time="2026-03-10T01:46:11.624397017Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 01:46:11.626563 containerd[1511]: time="2026-03-10T01:46:11.625146056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.628125102Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.628723880Z" level=info msg="Start subscribing containerd event"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.628807183Z" level=info msg="Start recovering state"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.628924610Z" level=info msg="Start event monitor"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.629874266Z" level=info msg="Start snapshots syncer"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.629891902Z" level=info msg="Start cni network conf syncer for default"
Mar 10 01:46:11.631201 containerd[1511]: time="2026-03-10T01:46:11.629903553Z" level=info msg="Start streaming server"
Mar 10 01:46:11.629040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:11.645453 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 10 01:46:11.651023 containerd[1511]: time="2026-03-10T01:46:11.647094760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 01:46:11.651023 containerd[1511]: time="2026-03-10T01:46:11.647208559Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 01:46:11.651023 containerd[1511]: time="2026-03-10T01:46:11.647332405Z" level=info msg="containerd successfully booted in 0.134289s"
Mar 10 01:46:11.648224 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 01:46:11.690803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 10 01:46:11.934807 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 01:46:11.995873 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 01:46:12.008620 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 01:46:12.015120 tar[1508]: linux-amd64/README.md
Mar 10 01:46:12.030900 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 01:46:12.032118 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 01:46:12.041013 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 01:46:12.042469 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
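The plugin-loading run above shows containerd skipping several snapshotters (aufs, blockfile, btrfs, devmapper, zfs) and keeping overlayfs. A small sketch of how one might pull the skipped plugin names out of journal lines like these; the `log` string holds trimmed copies of the messages, and a real script would read `journalctl -u containerd` output instead:

```python
import re

# Trimmed copies of the containerd journal messages above; the log text
# contains literal backslash-escaped quotes, hence the raw string.
log = r'''
msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path ... must be a btrfs filesystem ...: skip plugin"
msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..."
msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin"
'''

# Capture the plugin name between the escaped quotes after "skip loading plugin".
skipped = re.findall(r'skip loading plugin \\"([^"\\]+)\\"', log)
print(skipped)
```

On the sample above this yields the btrfs and devmapper snapshotters; run against the full journal it would list every skipped plugin together with nothing else.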
Mar 10 01:46:12.062678 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 10 01:46:12.076186 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 10 01:46:12.080024 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 10 01:46:12.082928 systemd[1]: Reached target getty.target - Login Prompts.
Mar 10 01:46:12.211421 systemd-timesyncd[1413]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org).
Mar 10 01:46:12.211518 systemd-timesyncd[1413]: Initial clock synchronization to Tue 2026-03-10 01:46:12.576691 UTC.
Mar 10 01:46:12.592490 systemd-networkd[1427]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90aa:24:19ff:fee6:42aa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90aa:24:19ff:fee6:42aa/64 assigned by NDisc.
Mar 10 01:46:12.592502 systemd-networkd[1427]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 10 01:46:12.760643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:12.777293 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:46:13.296042 kubelet[1601]: E0310 01:46:13.295968 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:46:13.299424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:46:13.299759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:46:16.676464 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
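The kubelet exit above is expected on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml does not exist, so the unit fails and systemd later schedules a restart. On kubeadm-managed nodes that file is written by `kubeadm init`/`kubeadm join`, not by hand; purely as an illustration of its shape, a minimal KubeletConfiguration looks like:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml -- normally generated
# by kubeadm, shown here only to illustrate what the failing load expects.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup:true in the containerd config above
```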
Mar 10 01:46:16.683218 systemd[1]: Started sshd@0-10.230.66.170:22-68.220.241.50:42510.service - OpenSSH per-connection server daemon (68.220.241.50:42510).
Mar 10 01:46:17.160399 login[1591]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Mar 10 01:46:17.164478 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 10 01:46:17.185260 systemd-logind[1485]: New session 2 of user core.
Mar 10 01:46:17.189829 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 10 01:46:17.195958 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 10 01:46:17.230974 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 10 01:46:17.239025 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 10 01:46:17.256320 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 10 01:46:17.263414 sshd[1610]: Accepted publickey for core from 68.220.241.50 port 42510 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:17.265898 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:17.273604 systemd-logind[1485]: New session 3 of user core.
Mar 10 01:46:17.400279 systemd[1618]: Queued start job for default target default.target.
Mar 10 01:46:17.411240 systemd[1618]: Created slice app.slice - User Application Slice.
Mar 10 01:46:17.411283 systemd[1618]: Reached target paths.target - Paths.
Mar 10 01:46:17.411304 systemd[1618]: Reached target timers.target - Timers.
Mar 10 01:46:17.413376 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 10 01:46:17.429166 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 10 01:46:17.429372 systemd[1618]: Reached target sockets.target - Sockets.
Mar 10 01:46:17.429397 systemd[1618]: Reached target basic.target - Basic System.
Mar 10 01:46:17.429467 systemd[1618]: Reached target default.target - Main User Target.
Mar 10 01:46:17.429564 systemd[1618]: Startup finished in 163ms.
Mar 10 01:46:17.429702 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 10 01:46:17.440999 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 10 01:46:17.442684 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 10 01:46:17.867990 systemd[1]: Started sshd@1-10.230.66.170:22-68.220.241.50:42516.service - OpenSSH per-connection server daemon (68.220.241.50:42516).
Mar 10 01:46:18.161723 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 10 01:46:18.168913 systemd-logind[1485]: New session 1 of user core.
Mar 10 01:46:18.177871 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 10 01:46:18.206679 coreos-metadata[1475]: Mar 10 01:46:18.206 WARN failed to locate config-drive, using the metadata service API instead
Mar 10 01:46:18.237469 coreos-metadata[1475]: Mar 10 01:46:18.237 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 10 01:46:18.247300 coreos-metadata[1475]: Mar 10 01:46:18.247 INFO Fetch failed with 404: resource not found
Mar 10 01:46:18.247300 coreos-metadata[1475]: Mar 10 01:46:18.247 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 10 01:46:18.248663 coreos-metadata[1475]: Mar 10 01:46:18.248 INFO Fetch successful
Mar 10 01:46:18.249624 coreos-metadata[1475]: Mar 10 01:46:18.249 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 10 01:46:18.265158 coreos-metadata[1475]: Mar 10 01:46:18.265 INFO Fetch successful
Mar 10 01:46:18.265158 coreos-metadata[1475]: Mar 10 01:46:18.265 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 10 01:46:18.280497 coreos-metadata[1475]: Mar 10 01:46:18.280 INFO Fetch successful
Mar 10 01:46:18.280497 coreos-metadata[1475]: Mar 10 01:46:18.280 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 10 01:46:18.298164 coreos-metadata[1475]: Mar 10 01:46:18.297 INFO Fetch successful
Mar 10 01:46:18.298164 coreos-metadata[1475]: Mar 10 01:46:18.298 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 10 01:46:18.436706 sshd[1639]: Accepted publickey for core from 68.220.241.50 port 42516 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:18.439401 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:18.445766 systemd-logind[1485]: New session 4 of user core.
Mar 10 01:46:18.456851 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 10 01:46:18.480024 coreos-metadata[1555]: Mar 10 01:46:18.479 WARN failed to locate config-drive, using the metadata service API instead
Mar 10 01:46:18.502646 coreos-metadata[1555]: Mar 10 01:46:18.502 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 10 01:46:18.536320 coreos-metadata[1475]: Mar 10 01:46:18.536 INFO Fetch successful
Mar 10 01:46:18.541077 coreos-metadata[1555]: Mar 10 01:46:18.540 INFO Fetch successful
Mar 10 01:46:18.542765 coreos-metadata[1555]: Mar 10 01:46:18.542 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 10 01:46:18.573331 coreos-metadata[1555]: Mar 10 01:46:18.573 INFO Fetch successful
Mar 10 01:46:18.575829 unknown[1555]: wrote ssh authorized keys file for user: core
Mar 10 01:46:18.579527 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 10 01:46:18.581107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
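The coreos-metadata messages above show a fallback chain: the agent first looks for a config-drive, then tries the OpenStack-format metadata URL (which 404s here), and finally succeeds with the EC2-style paths. The pattern can be sketched generically; the fetcher names and stub data below are ours, standing in for the real config-drive and HTTP lookups:

```python
# Sketch of a try-sources-in-order metadata lookup, as in the log above.
def first_successful(fetchers):
    """Return (source name, data) from the first fetcher that succeeds."""
    errors = []
    for name, fetch in fetchers:
        try:
            return name, fetch()
        except Exception as exc:  # a real agent would catch narrower errors
            errors.append((name, exc))
    raise RuntimeError(f"all metadata sources failed: {errors}")

def config_drive():
    # Stand-in for mounting the config-drive; fails like the WARN above.
    raise FileNotFoundError("failed to locate config-drive")

def metadata_api():
    # Stand-in for fetching from http://169.254.169.254.
    return {"hostname": "node-1"}

source, data = first_successful([("config-drive", config_drive),
                                 ("metadata-api", metadata_api)])
print(source, data)  # falls through to the metadata API
```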
Mar 10 01:46:18.597493 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 01:46:18.598384 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 10 01:46:18.601223 systemd[1]: Finished sshkeys.service.
Mar 10 01:46:18.604567 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 10 01:46:18.606678 systemd[1]: Startup finished in 1.437s (kernel) + 15.239s (initrd) + 11.670s (userspace) = 28.348s.
Mar 10 01:46:18.843733 sshd[1639]: pam_unix(sshd:session): session closed for user core
Mar 10 01:46:18.847510 systemd[1]: sshd@1-10.230.66.170:22-68.220.241.50:42516.service: Deactivated successfully.
Mar 10 01:46:18.850119 systemd[1]: session-4.scope: Deactivated successfully.
Mar 10 01:46:18.852157 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
Mar 10 01:46:18.853825 systemd-logind[1485]: Removed session 4.
Mar 10 01:46:18.953918 systemd[1]: Started sshd@2-10.230.66.170:22-68.220.241.50:42524.service - OpenSSH per-connection server daemon (68.220.241.50:42524).
Mar 10 01:46:19.545550 sshd[1669]: Accepted publickey for core from 68.220.241.50 port 42524 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:19.548371 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:19.554150 systemd-logind[1485]: New session 5 of user core.
Mar 10 01:46:19.564819 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 10 01:46:19.957395 sshd[1669]: pam_unix(sshd:session): session closed for user core
Mar 10 01:46:19.960937 systemd[1]: sshd@2-10.230.66.170:22-68.220.241.50:42524.service: Deactivated successfully.
Mar 10 01:46:19.963758 systemd[1]: session-5.scope: Deactivated successfully.
Mar 10 01:46:19.965985 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
Mar 10 01:46:19.967644 systemd-logind[1485]: Removed session 5.
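The "Startup finished" summary above breaks boot time into kernel, initrd, and userspace phases. A small sketch that parses such a line; note systemd rounds each component independently, so the three printed values sum to 28.346s while the printed total is 28.348s:

```python
import re

# Parse systemd's "Startup finished" summary from the log above.
line = ("Startup finished in 1.437s (kernel) + 15.239s (initrd) "
        "+ 11.670s (userspace) = 28.348s.")

# Each phase appears as "<seconds>s (<name>)".
phases = {name: float(sec)
          for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = round(sum(phases.values()), 3)
print(phases, total)  # sum of rounded components: 28.346
```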
Mar 10 01:46:20.074308 systemd[1]: Started sshd@3-10.230.66.170:22-68.220.241.50:42540.service - OpenSSH per-connection server daemon (68.220.241.50:42540).
Mar 10 01:46:20.650629 sshd[1676]: Accepted publickey for core from 68.220.241.50 port 42540 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:20.651418 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:20.657376 systemd-logind[1485]: New session 6 of user core.
Mar 10 01:46:20.665802 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 10 01:46:21.066426 sshd[1676]: pam_unix(sshd:session): session closed for user core
Mar 10 01:46:21.072243 systemd[1]: sshd@3-10.230.66.170:22-68.220.241.50:42540.service: Deactivated successfully.
Mar 10 01:46:21.074277 systemd[1]: session-6.scope: Deactivated successfully.
Mar 10 01:46:21.075344 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Mar 10 01:46:21.077194 systemd-logind[1485]: Removed session 6.
Mar 10 01:46:21.181922 systemd[1]: Started sshd@4-10.230.66.170:22-68.220.241.50:42548.service - OpenSSH per-connection server daemon (68.220.241.50:42548).
Mar 10 01:46:21.758575 sshd[1683]: Accepted publickey for core from 68.220.241.50 port 42548 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:21.760428 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:21.768649 systemd-logind[1485]: New session 7 of user core.
Mar 10 01:46:21.778803 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 10 01:46:22.096494 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 10 01:46:22.097006 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:46:22.114628 sudo[1686]: pam_unix(sudo:session): session closed for user root
Mar 10 01:46:22.207690 sshd[1683]: pam_unix(sshd:session): session closed for user core
Mar 10 01:46:22.212792 systemd[1]: sshd@4-10.230.66.170:22-68.220.241.50:42548.service: Deactivated successfully.
Mar 10 01:46:22.215325 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 01:46:22.217527 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Mar 10 01:46:22.219227 systemd-logind[1485]: Removed session 7.
Mar 10 01:46:22.308161 systemd[1]: Started sshd@5-10.230.66.170:22-68.220.241.50:49114.service - OpenSSH per-connection server daemon (68.220.241.50:49114).
Mar 10 01:46:22.870401 sshd[1691]: Accepted publickey for core from 68.220.241.50 port 49114 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:22.872569 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:22.879187 systemd-logind[1485]: New session 8 of user core.
Mar 10 01:46:22.885740 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 01:46:23.190444 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 01:46:23.190956 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:46:23.196320 sudo[1695]: pam_unix(sudo:session): session closed for user root
Mar 10 01:46:23.203935 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 10 01:46:23.204395 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:46:23.225886 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 10 01:46:23.228177 auditctl[1698]: No rules
Mar 10 01:46:23.229542 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 01:46:23.230266 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 10 01:46:23.245581 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:46:23.277763 augenrules[1716]: No rules
Mar 10 01:46:23.278747 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:46:23.281279 sudo[1694]: pam_unix(sudo:session): session closed for user root
Mar 10 01:46:23.335839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:46:23.347093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:23.370191 sshd[1691]: pam_unix(sshd:session): session closed for user core
Mar 10 01:46:23.378606 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Mar 10 01:46:23.379010 systemd[1]: sshd@5-10.230.66.170:22-68.220.241.50:49114.service: Deactivated successfully.
Mar 10 01:46:23.382125 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 01:46:23.384080 systemd-logind[1485]: Removed session 8.
Mar 10 01:46:23.468002 systemd[1]: Started sshd@6-10.230.66.170:22-68.220.241.50:49124.service - OpenSSH per-connection server daemon (68.220.241.50:49124).
Mar 10 01:46:23.588527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:23.596026 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:46:23.648678 kubelet[1733]: E0310 01:46:23.648565 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:46:23.653503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:46:23.653802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:46:24.035532 sshd[1727]: Accepted publickey for core from 68.220.241.50 port 49124 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:46:24.038189 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:46:24.046287 systemd-logind[1485]: New session 9 of user core.
Mar 10 01:46:24.051794 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 01:46:24.348519 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 01:46:24.349069 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:46:24.806859 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 01:46:24.819049 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 01:46:25.239958 dockerd[1758]: time="2026-03-10T01:46:25.239842135Z" level=info msg="Starting up"
Mar 10 01:46:25.382941 dockerd[1758]: time="2026-03-10T01:46:25.382615415Z" level=info msg="Loading containers: start."
Mar 10 01:46:25.518650 kernel: Initializing XFRM netlink socket
Mar 10 01:46:25.624171 systemd-networkd[1427]: docker0: Link UP
Mar 10 01:46:25.645593 dockerd[1758]: time="2026-03-10T01:46:25.644609654Z" level=info msg="Loading containers: done."
Mar 10 01:46:25.662298 dockerd[1758]: time="2026-03-10T01:46:25.662245039Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 01:46:25.662824 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck338819472-merged.mount: Deactivated successfully.
Mar 10 01:46:25.664025 dockerd[1758]: time="2026-03-10T01:46:25.663955787Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 01:46:25.664206 dockerd[1758]: time="2026-03-10T01:46:25.664172771Z" level=info msg="Daemon has completed initialization"
Mar 10 01:46:25.701815 dockerd[1758]: time="2026-03-10T01:46:25.700945738Z" level=info msg="API listen on /run/docker.sock"
Mar 10 01:46:25.701307 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 01:46:26.312236 containerd[1511]: time="2026-03-10T01:46:26.311564903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 10 01:46:27.471381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150775075.mount: Deactivated successfully.
Mar 10 01:46:29.163565 containerd[1511]: time="2026-03-10T01:46:29.161952349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:29.163565 containerd[1511]: time="2026-03-10T01:46:29.163156948Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696475"
Mar 10 01:46:29.164171 containerd[1511]: time="2026-03-10T01:46:29.163520777Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:29.168168 containerd[1511]: time="2026-03-10T01:46:29.168113294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:29.170629 containerd[1511]: time="2026-03-10T01:46:29.170553434Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.858894956s"
Mar 10 01:46:29.170629 containerd[1511]: time="2026-03-10T01:46:29.170617049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 10 01:46:29.171996 containerd[1511]: time="2026-03-10T01:46:29.171783177Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 10 01:46:31.034595 containerd[1511]: time="2026-03-10T01:46:31.034518479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:31.036547 containerd[1511]: time="2026-03-10T01:46:31.035815320Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450708"
Mar 10 01:46:31.036721 containerd[1511]: time="2026-03-10T01:46:31.036690038Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:31.041393 containerd[1511]: time="2026-03-10T01:46:31.041334311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:31.043106 containerd[1511]: time="2026-03-10T01:46:31.043064802Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.871240682s"
Mar 10 01:46:31.043381 containerd[1511]: time="2026-03-10T01:46:31.043209060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 10 01:46:31.043861 containerd[1511]: time="2026-03-10T01:46:31.043834275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 10 01:46:32.432838 containerd[1511]: time="2026-03-10T01:46:32.432763316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:32.434410 containerd[1511]: time="2026-03-10T01:46:32.434359602Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548437"
Mar 10 01:46:32.435163 containerd[1511]: time="2026-03-10T01:46:32.435131391Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:32.439017 containerd[1511]: time="2026-03-10T01:46:32.438981866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:32.440732 containerd[1511]: time="2026-03-10T01:46:32.440698118Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.39674409s"
Mar 10 01:46:32.440863 containerd[1511]: time="2026-03-10T01:46:32.440837245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 10 01:46:32.442260 containerd[1511]: time="2026-03-10T01:46:32.442220561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 10 01:46:33.835811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 01:46:33.844810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:33.999292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071095199.mount: Deactivated successfully.
Mar 10 01:46:34.027248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:34.034701 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:46:34.161260 kubelet[1976]: E0310 01:46:34.160766 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:46:34.164887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:46:34.165162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:46:34.590724 containerd[1511]: time="2026-03-10T01:46:34.590646838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:34.592091 containerd[1511]: time="2026-03-10T01:46:34.591838461Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685320"
Mar 10 01:46:34.592932 containerd[1511]: time="2026-03-10T01:46:34.592879557Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:34.595716 containerd[1511]: time="2026-03-10T01:46:34.595680905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:34.596833 containerd[1511]: time="2026-03-10T01:46:34.596786193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.154519721s"
Mar 10 01:46:34.597091 containerd[1511]: time="2026-03-10T01:46:34.596933815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 10 01:46:34.598015 containerd[1511]: time="2026-03-10T01:46:34.597983382Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 10 01:46:35.153957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595708883.mount: Deactivated successfully.
Mar 10 01:46:37.654560 containerd[1511]: time="2026-03-10T01:46:37.654438677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:37.656799 containerd[1511]: time="2026-03-10T01:46:37.656118542Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556550"
Mar 10 01:46:37.659583 containerd[1511]: time="2026-03-10T01:46:37.658257789Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:37.664709 containerd[1511]: time="2026-03-10T01:46:37.664107380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:37.666786 containerd[1511]: time="2026-03-10T01:46:37.666730287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.068708012s"
Mar 10 01:46:37.666958 containerd[1511]: time="2026-03-10T01:46:37.666930756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 10 01:46:37.667682 containerd[1511]: time="2026-03-10T01:46:37.667653201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 10 01:46:38.281848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535284472.mount: Deactivated successfully.
Mar 10 01:46:38.286166 containerd[1511]: time="2026-03-10T01:46:38.286114517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:38.287238 containerd[1511]: time="2026-03-10T01:46:38.287175454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Mar 10 01:46:38.287914 containerd[1511]: time="2026-03-10T01:46:38.287559612Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:38.290575 containerd[1511]: time="2026-03-10T01:46:38.290513906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:38.291791 containerd[1511]: time="2026-03-10T01:46:38.291753915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 623.971185ms"
Mar 10 01:46:38.291899 containerd[1511]: time="2026-03-10T01:46:38.291801271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 10 01:46:38.293136 containerd[1511]: time="2026-03-10T01:46:38.292944118Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 10 01:46:38.850299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906658857.mount: Deactivated successfully.
Mar 10 01:46:39.988045 containerd[1511]: time="2026-03-10T01:46:39.987944919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:39.990315 containerd[1511]: time="2026-03-10T01:46:39.989260098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630330"
Mar 10 01:46:39.991707 containerd[1511]: time="2026-03-10T01:46:39.991673770Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:39.995005 containerd[1511]: time="2026-03-10T01:46:39.994967787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:46:39.996494 containerd[1511]: time="2026-03-10T01:46:39.996454209Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.70346602s"
Mar 10 01:46:39.996580 containerd[1511]: time="2026-03-10T01:46:39.996498306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 10 01:46:42.144443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:42.151959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:42.198655 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit session-9.scope)...
Mar 10 01:46:42.199001 systemd[1]: Reloading...
Mar 10 01:46:42.336557 zram_generator::config[2171]: No configuration found.
Mar 10 01:46:42.519244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:46:42.629624 systemd[1]: Reloading finished in 429 ms.
Mar 10 01:46:42.679156 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 10 01:46:42.715054 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:42.720292 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:46:42.720759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:42.736009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:46:42.879609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:46:42.897122 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:46:43.024502 kubelet[2247]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:46:43.216042 kubelet[2247]: I0310 01:46:43.215658 2247 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 10 01:46:43.216042 kubelet[2247]: I0310 01:46:43.215740 2247 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:46:43.216042 kubelet[2247]: I0310 01:46:43.215799 2247 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 01:46:43.216042 kubelet[2247]: I0310 01:46:43.215809 2247 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:46:43.216800 kubelet[2247]: I0310 01:46:43.216760 2247 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 10 01:46:43.229705 kubelet[2247]: I0310 01:46:43.229418 2247 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:46:43.230242 kubelet[2247]: E0310 01:46:43.230205 2247 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.66.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.66.170:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:46:43.233447 kubelet[2247]: E0310 01:46:43.233412 2247 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:46:43.233584 kubelet[2247]: I0310 01:46:43.233511 2247 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:46:43.239248 kubelet[2247]: I0310 01:46:43.239224 2247 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 01:46:43.241250 kubelet[2247]: I0310 01:46:43.241183 2247 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:46:43.241538 kubelet[2247]: I0310 01:46:43.241234 2247 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p0r5l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:46:43.241778 kubelet[2247]: I0310 01:46:43.241568 2247 topology_manager.go:143] "Creating topology manager with none policy"
Mar 10 01:46:43.241778 kubelet[2247]: I0310 01:46:43.241597 2247 container_manager_linux.go:308] "Creating device plugin manager"
Mar 10 01:46:43.241778 kubelet[2247]: I0310 01:46:43.241736 2247 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:46:43.243305 kubelet[2247]: I0310 01:46:43.243284 2247 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 10 01:46:43.243639 kubelet[2247]: I0310 01:46:43.243620 2247 kubelet.go:482] "Attempting to sync node with API server"
Mar 10 01:46:43.243710 kubelet[2247]: I0310 01:46:43.243647 2247 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:46:43.243772 kubelet[2247]: I0310 01:46:43.243710 2247 kubelet.go:394] "Adding apiserver pod source"
Mar 10 01:46:43.243772 kubelet[2247]: I0310 01:46:43.243733 2247 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:46:43.246461 kubelet[2247]: I0310 01:46:43.246433 2247 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:46:43.249320 kubelet[2247]: I0310 01:46:43.249239 2247 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:46:43.249320 kubelet[2247]: I0310 01:46:43.249282 2247 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:46:43.249437 kubelet[2247]: W0310 01:46:43.249384 2247 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 01:46:43.254099 kubelet[2247]: I0310 01:46:43.254075 2247 server.go:1257] "Started kubelet"
Mar 10 01:46:43.256575 kubelet[2247]: I0310 01:46:43.256327 2247 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 10 01:46:43.261762 kubelet[2247]: E0310 01:46:43.260406 2247 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.170:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.170:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-p0r5l.gb1.brightbox.com.189b579b66a47b93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-p0r5l.gb1.brightbox.com,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-p0r5l.gb1.brightbox.com,},FirstTimestamp:2026-03-10 01:46:43.254025107 +0000 UTC m=+0.286791735,LastTimestamp:2026-03-10 01:46:43.254025107 +0000 UTC m=+0.286791735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-p0r5l.gb1.brightbox.com,}"
Mar 10 01:46:43.262899 kubelet[2247]: I0310 01:46:43.262855 2247 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:46:43.264428 kubelet[2247]: I0310 01:46:43.264388 2247 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:46:43.269590 kubelet[2247]: I0310 01:46:43.269300 2247 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:46:43.269590 kubelet[2247]: I0310 01:46:43.269392 2247 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:46:43.269737 kubelet[2247]: I0310 01:46:43.269711 2247 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:46:43.269916 kubelet[2247]: I0310 01:46:43.269887 2247 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 10 01:46:43.272554 kubelet[2247]: I0310 01:46:43.270667 2247 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:46:43.272554 kubelet[2247]: E0310 01:46:43.270747 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found"
Mar 10 01:46:43.276120 kubelet[2247]: I0310 01:46:43.276098 2247 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:46:43.276342 kubelet[2247]: I0310 01:46:43.276323 2247 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:46:43.278211 kubelet[2247]: E0310 01:46:43.278143 2247 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0r5l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.170:6443: connect: connection refused" interval="200ms"
Mar 10 01:46:43.278992 kubelet[2247]: I0310 01:46:43.278966 2247 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:46:43.279234 kubelet[2247]: I0310 01:46:43.279201 2247 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:46:43.281072 kubelet[2247]: E0310 01:46:43.281044 2247 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:46:43.282553 kubelet[2247]: I0310 01:46:43.282506 2247 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:46:43.292727 kubelet[2247]: I0310 01:46:43.292677 2247 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:46:43.296690 kubelet[2247]: I0310 01:46:43.296665 2247 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:46:43.296828 kubelet[2247]: I0310 01:46:43.296807 2247 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 10 01:46:43.297688 kubelet[2247]: I0310 01:46:43.297667 2247 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 10 01:46:43.297890 kubelet[2247]: E0310 01:46:43.297845 2247 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:46:43.319612 kubelet[2247]: I0310 01:46:43.319582 2247 cpu_manager.go:225] "Starting" policy="none"
Mar 10 01:46:43.319808 kubelet[2247]: I0310 01:46:43.319787 2247 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 01:46:43.319900 kubelet[2247]: I0310 01:46:43.319884 2247 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 10 01:46:43.322160 kubelet[2247]: I0310 01:46:43.322141 2247 policy_none.go:50] "Start"
Mar 10 01:46:43.322301 kubelet[2247]: I0310 01:46:43.322282 2247 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:46:43.322450 kubelet[2247]: I0310 01:46:43.322431 2247 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:46:43.323964 kubelet[2247]: I0310 01:46:43.323945 2247 policy_none.go:44] "Start"
Mar 10 01:46:43.330678 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 01:46:43.343585 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 01:46:43.355629 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 01:46:43.358285 kubelet[2247]: E0310 01:46:43.357418 2247 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:46:43.358285 kubelet[2247]: I0310 01:46:43.357711 2247 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 01:46:43.358285 kubelet[2247]: I0310 01:46:43.357736 2247 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:46:43.358285 kubelet[2247]: I0310 01:46:43.358133 2247 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 01:46:43.360329 kubelet[2247]: E0310 01:46:43.360294 2247 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 01:46:43.360452 kubelet[2247]: E0310 01:46:43.360372 2247 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:43.416892 systemd[1]: Created slice kubepods-burstable-pod348573be95c5522c47c31fb58b9442f0.slice - libcontainer container kubepods-burstable-pod348573be95c5522c47c31fb58b9442f0.slice. Mar 10 01:46:43.426086 kubelet[2247]: E0310 01:46:43.425666 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.433154 systemd[1]: Created slice kubepods-burstable-pod585a9b5dbae7ab7fc8ae3af1f453dd78.slice - libcontainer container kubepods-burstable-pod585a9b5dbae7ab7fc8ae3af1f453dd78.slice. 
Mar 10 01:46:43.441500 kubelet[2247]: E0310 01:46:43.441464 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.444434 systemd[1]: Created slice kubepods-burstable-podacfe51b7910c122b5fa76acae247b564.slice - libcontainer container kubepods-burstable-podacfe51b7910c122b5fa76acae247b564.slice. Mar 10 01:46:43.447500 kubelet[2247]: E0310 01:46:43.447456 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.461491 kubelet[2247]: I0310 01:46:43.460988 2247 kubelet_node_status.go:74] "Attempting to register node" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.461491 kubelet[2247]: E0310 01:46:43.461451 2247 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.66.170:6443/api/v1/nodes\": dial tcp 10.230.66.170:6443: connect: connection refused" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.481558 kubelet[2247]: E0310 01:46:43.479810 2247 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0r5l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.170:6443: connect: connection refused" interval="400ms" Mar 10 01:46:43.577507 kubelet[2247]: I0310 01:46:43.577335 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-k8s-certs\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577507 kubelet[2247]: I0310 01:46:43.577395 2247 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-kubeconfig\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577507 kubelet[2247]: I0310 01:46:43.577427 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577507 kubelet[2247]: I0310 01:46:43.577470 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-ca-certs\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577507 kubelet[2247]: I0310 01:46:43.577511 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-k8s-certs\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577879 kubelet[2247]: I0310 01:46:43.577563 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-ca-certs\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: 
\"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577879 kubelet[2247]: I0310 01:46:43.577601 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-flexvolume-dir\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577879 kubelet[2247]: I0310 01:46:43.577661 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acfe51b7910c122b5fa76acae247b564-kubeconfig\") pod \"kube-scheduler-srv-p0r5l.gb1.brightbox.com\" (UID: \"acfe51b7910c122b5fa76acae247b564\") " pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.577879 kubelet[2247]: I0310 01:46:43.577694 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.665427 kubelet[2247]: I0310 01:46:43.664997 2247 kubelet_node_status.go:74] "Attempting to register node" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.665427 kubelet[2247]: E0310 01:46:43.665368 2247 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.66.170:6443/api/v1/nodes\": dial tcp 10.230.66.170:6443: connect: connection refused" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:43.730301 containerd[1511]: time="2026-03-10T01:46:43.730246221Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-p0r5l.gb1.brightbox.com,Uid:348573be95c5522c47c31fb58b9442f0,Namespace:kube-system,Attempt:0,}" Mar 10 01:46:43.745323 containerd[1511]: time="2026-03-10T01:46:43.744918587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p0r5l.gb1.brightbox.com,Uid:585a9b5dbae7ab7fc8ae3af1f453dd78,Namespace:kube-system,Attempt:0,}" Mar 10 01:46:43.749775 containerd[1511]: time="2026-03-10T01:46:43.749715226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p0r5l.gb1.brightbox.com,Uid:acfe51b7910c122b5fa76acae247b564,Namespace:kube-system,Attempt:0,}" Mar 10 01:46:43.883058 kubelet[2247]: E0310 01:46:43.882997 2247 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0r5l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.170:6443: connect: connection refused" interval="800ms" Mar 10 01:46:44.068909 kubelet[2247]: I0310 01:46:44.068366 2247 kubelet_node_status.go:74] "Attempting to register node" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:44.069995 kubelet[2247]: E0310 01:46:44.069947 2247 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.66.170:6443/api/v1/nodes\": dial tcp 10.230.66.170:6443: connect: connection refused" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:44.473508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866288058.mount: Deactivated successfully. 
Mar 10 01:46:44.478936 containerd[1511]: time="2026-03-10T01:46:44.478859043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:46:44.481857 containerd[1511]: time="2026-03-10T01:46:44.481822255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:46:44.484276 containerd[1511]: time="2026-03-10T01:46:44.484233908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 10 01:46:44.484703 containerd[1511]: time="2026-03-10T01:46:44.484645504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:46:44.487289 containerd[1511]: time="2026-03-10T01:46:44.486175599Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:46:44.487289 containerd[1511]: time="2026-03-10T01:46:44.487026962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:46:44.487842 containerd[1511]: time="2026-03-10T01:46:44.487807887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:46:44.494842 containerd[1511]: time="2026-03-10T01:46:44.494811981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:46:44.496338 
containerd[1511]: time="2026-03-10T01:46:44.496307291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 746.51228ms" Mar 10 01:46:44.500733 containerd[1511]: time="2026-03-10T01:46:44.500700333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 770.341483ms" Mar 10 01:46:44.504059 containerd[1511]: time="2026-03-10T01:46:44.504022448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 759.010195ms" Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671834480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671909678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671931071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671576054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671676435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671718649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.672188 containerd[1511]: time="2026-03-10T01:46:44.671944800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.674551 containerd[1511]: time="2026-03-10T01:46:44.673865092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.683784 kubelet[2247]: E0310 01:46:44.683696 2247 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0r5l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.170:6443: connect: connection refused" interval="1.6s" Mar 10 01:46:44.686655 containerd[1511]: time="2026-03-10T01:46:44.684687106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:46:44.686655 containerd[1511]: time="2026-03-10T01:46:44.684767318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:46:44.686655 containerd[1511]: time="2026-03-10T01:46:44.684811291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.689556 containerd[1511]: time="2026-03-10T01:46:44.688297526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:46:44.722763 systemd[1]: Started cri-containerd-ad3c21275e7f925b2950967a324a13076b5b0eb8e476ffc6d636bb7fc72fe164.scope - libcontainer container ad3c21275e7f925b2950967a324a13076b5b0eb8e476ffc6d636bb7fc72fe164. Mar 10 01:46:44.735708 systemd[1]: Started cri-containerd-412d1149e515e770512441aae821ffbada19633be57d2d622585381b0f0d45ed.scope - libcontainer container 412d1149e515e770512441aae821ffbada19633be57d2d622585381b0f0d45ed. Mar 10 01:46:44.741060 systemd[1]: Started cri-containerd-cbe1857a5ca4faf5e6ee089054616eb6e350ce41f1ae92a20a01951dbb47e96f.scope - libcontainer container cbe1857a5ca4faf5e6ee089054616eb6e350ce41f1ae92a20a01951dbb47e96f. Mar 10 01:46:44.818349 containerd[1511]: time="2026-03-10T01:46:44.818295405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p0r5l.gb1.brightbox.com,Uid:348573be95c5522c47c31fb58b9442f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3c21275e7f925b2950967a324a13076b5b0eb8e476ffc6d636bb7fc72fe164\"" Mar 10 01:46:44.839662 containerd[1511]: time="2026-03-10T01:46:44.839390585Z" level=info msg="CreateContainer within sandbox \"ad3c21275e7f925b2950967a324a13076b5b0eb8e476ffc6d636bb7fc72fe164\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:46:44.861656 containerd[1511]: time="2026-03-10T01:46:44.861604320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p0r5l.gb1.brightbox.com,Uid:585a9b5dbae7ab7fc8ae3af1f453dd78,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbe1857a5ca4faf5e6ee089054616eb6e350ce41f1ae92a20a01951dbb47e96f\"" Mar 10 01:46:44.869862 containerd[1511]: time="2026-03-10T01:46:44.869706002Z" level=info msg="CreateContainer within 
sandbox \"cbe1857a5ca4faf5e6ee089054616eb6e350ce41f1ae92a20a01951dbb47e96f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:46:44.872712 kubelet[2247]: I0310 01:46:44.872265 2247 kubelet_node_status.go:74] "Attempting to register node" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:44.872712 kubelet[2247]: E0310 01:46:44.872678 2247 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.66.170:6443/api/v1/nodes\": dial tcp 10.230.66.170:6443: connect: connection refused" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:44.877027 containerd[1511]: time="2026-03-10T01:46:44.876993956Z" level=info msg="CreateContainer within sandbox \"ad3c21275e7f925b2950967a324a13076b5b0eb8e476ffc6d636bb7fc72fe164\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"473c4ae9921ae21b157a74a38184406537b17212f394a8c661d70b5fe416e60e\"" Mar 10 01:46:44.877814 containerd[1511]: time="2026-03-10T01:46:44.877684000Z" level=info msg="StartContainer for \"473c4ae9921ae21b157a74a38184406537b17212f394a8c661d70b5fe416e60e\"" Mar 10 01:46:44.881679 containerd[1511]: time="2026-03-10T01:46:44.881645367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p0r5l.gb1.brightbox.com,Uid:acfe51b7910c122b5fa76acae247b564,Namespace:kube-system,Attempt:0,} returns sandbox id \"412d1149e515e770512441aae821ffbada19633be57d2d622585381b0f0d45ed\"" Mar 10 01:46:44.888085 containerd[1511]: time="2026-03-10T01:46:44.888026736Z" level=info msg="CreateContainer within sandbox \"412d1149e515e770512441aae821ffbada19633be57d2d622585381b0f0d45ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:46:44.890668 containerd[1511]: time="2026-03-10T01:46:44.890555101Z" level=info msg="CreateContainer within sandbox \"cbe1857a5ca4faf5e6ee089054616eb6e350ce41f1ae92a20a01951dbb47e96f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"724b450f506b91bce3949faa3003144d8e5d3f770b0561e0f6016e32ecaaab51\"" Mar 10 01:46:44.891141 containerd[1511]: time="2026-03-10T01:46:44.891039813Z" level=info msg="StartContainer for \"724b450f506b91bce3949faa3003144d8e5d3f770b0561e0f6016e32ecaaab51\"" Mar 10 01:46:44.907639 containerd[1511]: time="2026-03-10T01:46:44.907596312Z" level=info msg="CreateContainer within sandbox \"412d1149e515e770512441aae821ffbada19633be57d2d622585381b0f0d45ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aedcd9cf50accfc917f05dc00d256298893700a963595dfafe3b39455a7eee0b\"" Mar 10 01:46:44.909596 containerd[1511]: time="2026-03-10T01:46:44.908975897Z" level=info msg="StartContainer for \"aedcd9cf50accfc917f05dc00d256298893700a963595dfafe3b39455a7eee0b\"" Mar 10 01:46:44.938935 systemd[1]: Started cri-containerd-473c4ae9921ae21b157a74a38184406537b17212f394a8c661d70b5fe416e60e.scope - libcontainer container 473c4ae9921ae21b157a74a38184406537b17212f394a8c661d70b5fe416e60e. Mar 10 01:46:44.955749 systemd[1]: Started cri-containerd-724b450f506b91bce3949faa3003144d8e5d3f770b0561e0f6016e32ecaaab51.scope - libcontainer container 724b450f506b91bce3949faa3003144d8e5d3f770b0561e0f6016e32ecaaab51. Mar 10 01:46:44.974714 systemd[1]: Started cri-containerd-aedcd9cf50accfc917f05dc00d256298893700a963595dfafe3b39455a7eee0b.scope - libcontainer container aedcd9cf50accfc917f05dc00d256298893700a963595dfafe3b39455a7eee0b. 
Mar 10 01:46:45.058153 containerd[1511]: time="2026-03-10T01:46:45.056935223Z" level=info msg="StartContainer for \"473c4ae9921ae21b157a74a38184406537b17212f394a8c661d70b5fe416e60e\" returns successfully" Mar 10 01:46:45.058153 containerd[1511]: time="2026-03-10T01:46:45.057072405Z" level=info msg="StartContainer for \"724b450f506b91bce3949faa3003144d8e5d3f770b0561e0f6016e32ecaaab51\" returns successfully" Mar 10 01:46:45.104307 containerd[1511]: time="2026-03-10T01:46:45.103837686Z" level=info msg="StartContainer for \"aedcd9cf50accfc917f05dc00d256298893700a963595dfafe3b39455a7eee0b\" returns successfully" Mar 10 01:46:45.332266 kubelet[2247]: E0310 01:46:45.332126 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:45.335910 kubelet[2247]: E0310 01:46:45.335324 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:45.339553 kubelet[2247]: E0310 01:46:45.339496 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:46.342016 kubelet[2247]: E0310 01:46:46.341976 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:46.342635 kubelet[2247]: E0310 01:46:46.342600 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:46.475434 kubelet[2247]: I0310 01:46:46.475395 2247 kubelet_node_status.go:74] "Attempting to register node" 
node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.195018 kubelet[2247]: E0310 01:46:47.194950 2247 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.252917 kubelet[2247]: I0310 01:46:47.252871 2247 kubelet_node_status.go:77] "Successfully registered node" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.252917 kubelet[2247]: E0310 01:46:47.252917 2247 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"srv-p0r5l.gb1.brightbox.com\": node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.294844 kubelet[2247]: E0310 01:46:47.294782 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.344815 kubelet[2247]: E0310 01:46:47.344091 2247 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" node="srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.395964 kubelet[2247]: E0310 01:46:47.395870 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.497378 kubelet[2247]: E0310 01:46:47.496879 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.597087 kubelet[2247]: E0310 01:46:47.597021 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.698138 kubelet[2247]: E0310 01:46:47.698077 2247 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found" Mar 10 01:46:47.772221 kubelet[2247]: I0310 01:46:47.771792 2247 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.779400 kubelet[2247]: E0310 01:46:47.779112 2247 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-p0r5l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.779400 kubelet[2247]: I0310 01:46:47.779143 2247 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.781265 kubelet[2247]: E0310 01:46:47.781218 2247 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.781265 kubelet[2247]: I0310 01:46:47.781266 2247 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.783384 kubelet[2247]: E0310 01:46:47.783339 2247 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.874828 kubelet[2247]: I0310 01:46:47.874784 2247 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:47.877586 kubelet[2247]: E0310 01:46:47.877529 2247 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" Mar 10 01:46:48.249045 kubelet[2247]: I0310 01:46:48.249002 2247 apiserver.go:52] "Watching apiserver" Mar 10 01:46:48.276745 kubelet[2247]: I0310 
01:46:48.276706 2247 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:46:49.461161 systemd[1]: Reloading requested from client PID 2540 ('systemctl') (unit session-9.scope)... Mar 10 01:46:49.461707 systemd[1]: Reloading... Mar 10 01:46:49.576612 zram_generator::config[2575]: No configuration found. Mar 10 01:46:49.766037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:46:49.897906 systemd[1]: Reloading finished in 435 ms. Mar 10 01:46:49.967369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:46:49.983941 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:46:49.984644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:46:49.991921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:46:50.225098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:46:50.238989 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:46:50.332579 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 01:46:50.348079 kubelet[2643]: I0310 01:46:50.347939 2643 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 01:46:50.348079 kubelet[2643]: I0310 01:46:50.347998 2643 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:46:50.348079 kubelet[2643]: I0310 01:46:50.348020 2643 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:46:50.348079 kubelet[2643]: I0310 01:46:50.348029 2643 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 01:46:50.348570 kubelet[2643]: I0310 01:46:50.348538 2643 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 01:46:50.350219 kubelet[2643]: I0310 01:46:50.350188 2643 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 01:46:50.354929 kubelet[2643]: I0310 01:46:50.353816 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:46:50.359376 kubelet[2643]: E0310 01:46:50.359348 2643 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:46:50.359579 kubelet[2643]: I0310 01:46:50.359519 2643 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 10 01:46:50.370301 kubelet[2643]: I0310 01:46:50.370274 2643 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /"
Mar 10 01:46:50.372302 kubelet[2643]: I0310 01:46:50.372250 2643 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:46:50.372610 kubelet[2643]: I0310 01:46:50.372398 2643 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p0r5l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:46:50.372849 kubelet[2643]: I0310 01:46:50.372829 2643 topology_manager.go:143] "Creating topology manager with none policy"
Mar 10 01:46:50.372967 kubelet[2643]: I0310 01:46:50.372949 2643 container_manager_linux.go:308] "Creating device plugin manager"
Mar 10 01:46:50.373087 kubelet[2643]: I0310 01:46:50.373060 2643 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:46:50.373456 kubelet[2643]: I0310 01:46:50.373438 2643 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 10 01:46:50.373921 kubelet[2643]: I0310 01:46:50.373903 2643 kubelet.go:482] "Attempting to sync node with API server"
Mar 10 01:46:50.374081 kubelet[2643]: I0310 01:46:50.374062 2643 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:46:50.374220 kubelet[2643]: I0310 01:46:50.374203 2643 kubelet.go:394] "Adding apiserver pod source"
Mar 10 01:46:50.374387 kubelet[2643]: I0310 01:46:50.374315 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:46:50.380282 kubelet[2643]: I0310 01:46:50.380258 2643 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:46:50.383566 kubelet[2643]: I0310 01:46:50.383374 2643 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:46:50.383566 kubelet[2643]: I0310 01:46:50.383418 2643 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:46:50.392930 kubelet[2643]: I0310 01:46:50.392598 2643 server.go:1257] "Started kubelet"
Mar 10 01:46:50.399248 kubelet[2643]: I0310 01:46:50.399035 2643 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 10 01:46:50.411177 kubelet[2643]: I0310 01:46:50.411123 2643 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:46:50.414704 kubelet[2643]: I0310 01:46:50.414046 2643 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:46:50.422529 kubelet[2643]: I0310 01:46:50.415297 2643 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:46:50.422734 kubelet[2643]: I0310 01:46:50.422607 2643 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:46:50.428163 kubelet[2643]: I0310 01:46:50.427824 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:46:50.430143 kubelet[2643]: I0310 01:46:50.430100 2643 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:46:50.435343 kubelet[2643]: I0310 01:46:50.434484 2643 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:46:50.438214 kubelet[2643]: I0310 01:46:50.437445 2643 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 10 01:46:50.438214 kubelet[2643]: E0310 01:46:50.437736 2643 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-p0r5l.gb1.brightbox.com\" not found"
Mar 10 01:46:50.444143 kubelet[2643]: I0310 01:46:50.443197 2643 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:46:50.444143 kubelet[2643]: I0310 01:46:50.443363 2643 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:46:50.447466 kubelet[2643]: I0310 01:46:50.446797 2643 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:46:50.447466 kubelet[2643]: I0310 01:46:50.446954 2643 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:46:50.465850 kubelet[2643]: I0310 01:46:50.465805 2643 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:46:50.491475 kubelet[2643]: I0310 01:46:50.491299 2643 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:46:50.491475 kubelet[2643]: I0310 01:46:50.491340 2643 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 10 01:46:50.491475 kubelet[2643]: I0310 01:46:50.491367 2643 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 10 01:46:50.491475 kubelet[2643]: E0310 01:46:50.491436 2643 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.549966 2643 cpu_manager.go:225] "Starting" policy="none"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.549989 2643 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550023 2643 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550194 2643 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550221 2643 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550247 2643 policy_none.go:50] "Start"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550277 2643 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:46:50.550571 kubelet[2643]: I0310 01:46:50.550296 2643 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:46:50.554618 kubelet[2643]: I0310 01:46:50.554595 2643 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 10 01:46:50.554734 kubelet[2643]: I0310 01:46:50.554716 2643 policy_none.go:44] "Start"
Mar 10 01:46:50.565802 kubelet[2643]: E0310 01:46:50.564494 2643 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:46:50.565802 kubelet[2643]: I0310 01:46:50.564776 2643 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 10 01:46:50.565802 kubelet[2643]: I0310 01:46:50.564804 2643 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:46:50.570485 kubelet[2643]: E0310 01:46:50.570459 2643 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:46:50.573911 kubelet[2643]: I0310 01:46:50.571200 2643 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 10 01:46:50.595424 kubelet[2643]: I0310 01:46:50.594791 2643 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.597622 kubelet[2643]: I0310 01:46:50.597542 2643 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.600818 kubelet[2643]: I0310 01:46:50.600794 2643 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.614745 kubelet[2643]: I0310 01:46:50.614709 2643 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 10 01:46:50.616439 kubelet[2643]: I0310 01:46:50.615786 2643 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 10 01:46:50.620088 kubelet[2643]: I0310 01:46:50.619960 2643 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 10 01:46:50.645343 kubelet[2643]: I0310 01:46:50.645245 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645343 kubelet[2643]: I0310 01:46:50.645302 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-ca-certs\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645558 kubelet[2643]: I0310 01:46:50.645360 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acfe51b7910c122b5fa76acae247b564-kubeconfig\") pod \"kube-scheduler-srv-p0r5l.gb1.brightbox.com\" (UID: \"acfe51b7910c122b5fa76acae247b564\") " pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645558 kubelet[2643]: I0310 01:46:50.645397 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-ca-certs\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645558 kubelet[2643]: I0310 01:46:50.645425 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-k8s-certs\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645558 kubelet[2643]: I0310 01:46:50.645459 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348573be95c5522c47c31fb58b9442f0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" (UID: \"348573be95c5522c47c31fb58b9442f0\") " pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645558 kubelet[2643]: I0310 01:46:50.645484 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-flexvolume-dir\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645789 kubelet[2643]: I0310 01:46:50.645510 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-k8s-certs\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.645789 kubelet[2643]: I0310 01:46:50.645558 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/585a9b5dbae7ab7fc8ae3af1f453dd78-kubeconfig\") pod \"kube-controller-manager-srv-p0r5l.gb1.brightbox.com\" (UID: \"585a9b5dbae7ab7fc8ae3af1f453dd78\") " pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.689802 kubelet[2643]: I0310 01:46:50.689746 2643 kubelet_node_status.go:74] "Attempting to register node" node="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.700979 kubelet[2643]: I0310 01:46:50.700948 2643 kubelet_node_status.go:123] "Node was previously registered" node="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:50.701105 kubelet[2643]: I0310 01:46:50.701048 2643 kubelet_node_status.go:77] "Successfully registered node" node="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:51.378728 kubelet[2643]: I0310 01:46:51.378361 2643 apiserver.go:52] "Watching apiserver"
Mar 10 01:46:51.444723 kubelet[2643]: I0310 01:46:51.444632 2643 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 01:46:51.525183 kubelet[2643]: I0310 01:46:51.524938 2643 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:51.525829 kubelet[2643]: I0310 01:46:51.525717 2643 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:51.541299 kubelet[2643]: I0310 01:46:51.539516 2643 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 10 01:46:51.541299 kubelet[2643]: E0310 01:46:51.539634 2643 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-p0r5l.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:51.541528 kubelet[2643]: I0310 01:46:51.539330 2643 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 10 01:46:51.541670 kubelet[2643]: E0310 01:46:51.541651 2643 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-p0r5l.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com"
Mar 10 01:46:52.784402 kubelet[2643]: I0310 01:46:52.783974 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-p0r5l.gb1.brightbox.com" podStartSLOduration=2.78392889 podStartE2EDuration="2.78392889s" podCreationTimestamp="2026-03-10 01:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:46:52.777177305 +0000 UTC m=+2.507976357" watchObservedRunningTime="2026-03-10 01:46:52.78392889 +0000 UTC m=+2.514727953"
Mar 10 01:46:52.821908 kubelet[2643]: I0310 01:46:52.821615 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-p0r5l.gb1.brightbox.com" podStartSLOduration=2.821600063 podStartE2EDuration="2.821600063s" podCreationTimestamp="2026-03-10 01:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:46:52.797627372 +0000 UTC m=+2.528426429" watchObservedRunningTime="2026-03-10 01:46:52.821600063 +0000 UTC m=+2.552399122"
Mar 10 01:46:53.541053 kubelet[2643]: I0310 01:46:53.540700 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-p0r5l.gb1.brightbox.com" podStartSLOduration=3.540683315 podStartE2EDuration="3.540683315s" podCreationTimestamp="2026-03-10 01:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:46:52.821881655 +0000 UTC m=+2.552680714" watchObservedRunningTime="2026-03-10 01:46:53.540683315 +0000 UTC m=+3.271482374"
Mar 10 01:46:55.656428 kubelet[2643]: I0310 01:46:55.655359 2643 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 01:46:55.659429 containerd[1511]: time="2026-03-10T01:46:55.659350032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 01:46:55.661039 kubelet[2643]: I0310 01:46:55.660431 2643 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 10 01:46:56.260909 update_engine[1486]: I20260310 01:46:56.260622 1486 update_attempter.cc:509] Updating boot flags...
Mar 10 01:46:56.339917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2703)
Mar 10 01:46:56.453148 systemd[1]: Created slice kubepods-besteffort-pod0c859369_6139_40a2_878c_0a06c7b6fa97.slice - libcontainer container kubepods-besteffort-pod0c859369_6139_40a2_878c_0a06c7b6fa97.slice.
Mar 10 01:46:56.488357 kubelet[2643]: I0310 01:46:56.488174 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c859369-6139-40a2-878c-0a06c7b6fa97-xtables-lock\") pod \"kube-proxy-d2tp6\" (UID: \"0c859369-6139-40a2-878c-0a06c7b6fa97\") " pod="kube-system/kube-proxy-d2tp6"
Mar 10 01:46:56.488357 kubelet[2643]: I0310 01:46:56.488234 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c859369-6139-40a2-878c-0a06c7b6fa97-kube-proxy\") pod \"kube-proxy-d2tp6\" (UID: \"0c859369-6139-40a2-878c-0a06c7b6fa97\") " pod="kube-system/kube-proxy-d2tp6"
Mar 10 01:46:56.488357 kubelet[2643]: I0310 01:46:56.488267 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c859369-6139-40a2-878c-0a06c7b6fa97-lib-modules\") pod \"kube-proxy-d2tp6\" (UID: \"0c859369-6139-40a2-878c-0a06c7b6fa97\") " pod="kube-system/kube-proxy-d2tp6"
Mar 10 01:46:56.488357 kubelet[2643]: I0310 01:46:56.488292 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjhl\" (UniqueName: \"kubernetes.io/projected/0c859369-6139-40a2-878c-0a06c7b6fa97-kube-api-access-svjhl\") pod \"kube-proxy-d2tp6\" (UID: \"0c859369-6139-40a2-878c-0a06c7b6fa97\") " pod="kube-system/kube-proxy-d2tp6"
Mar 10 01:46:56.515244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2706)
Mar 10 01:46:56.606629 kubelet[2643]: E0310 01:46:56.606575 2643 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 10 01:46:56.606629 kubelet[2643]: E0310 01:46:56.606622 2643 projected.go:196] Error preparing data for projected volume kube-api-access-svjhl for pod kube-system/kube-proxy-d2tp6: configmap "kube-root-ca.crt" not found
Mar 10 01:46:56.606871 kubelet[2643]: E0310 01:46:56.606741 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c859369-6139-40a2-878c-0a06c7b6fa97-kube-api-access-svjhl podName:0c859369-6139-40a2-878c-0a06c7b6fa97 nodeName:}" failed. No retries permitted until 2026-03-10 01:46:57.106698713 +0000 UTC m=+6.837497771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-svjhl" (UniqueName: "kubernetes.io/projected/0c859369-6139-40a2-878c-0a06c7b6fa97-kube-api-access-svjhl") pod "kube-proxy-d2tp6" (UID: "0c859369-6139-40a2-878c-0a06c7b6fa97") : configmap "kube-root-ca.crt" not found
Mar 10 01:46:56.922846 systemd[1]: Created slice kubepods-besteffort-poddc8bbe92_2f69_4910_9026_e269bea6ec3f.slice - libcontainer container kubepods-besteffort-poddc8bbe92_2f69_4910_9026_e269bea6ec3f.slice.
Mar 10 01:46:56.992941 kubelet[2643]: I0310 01:46:56.992887 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc8bbe92-2f69-4910-9026-e269bea6ec3f-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-8zntc\" (UID: \"dc8bbe92-2f69-4910-9026-e269bea6ec3f\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8zntc"
Mar 10 01:46:56.992941 kubelet[2643]: I0310 01:46:56.992950 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrnb\" (UniqueName: \"kubernetes.io/projected/dc8bbe92-2f69-4910-9026-e269bea6ec3f-kube-api-access-jlrnb\") pod \"tigera-operator-6cf4cccc57-8zntc\" (UID: \"dc8bbe92-2f69-4910-9026-e269bea6ec3f\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8zntc"
Mar 10 01:46:57.234492 containerd[1511]: time="2026-03-10T01:46:57.234302068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8zntc,Uid:dc8bbe92-2f69-4910-9026-e269bea6ec3f,Namespace:tigera-operator,Attempt:0,}"
Mar 10 01:46:57.269994 containerd[1511]: time="2026-03-10T01:46:57.269781576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:46:57.269994 containerd[1511]: time="2026-03-10T01:46:57.269900236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:46:57.269994 containerd[1511]: time="2026-03-10T01:46:57.269940231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:46:57.271722 containerd[1511]: time="2026-03-10T01:46:57.270083671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:46:57.308731 systemd[1]: Started cri-containerd-2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885.scope - libcontainer container 2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885.
Mar 10 01:46:57.375934 containerd[1511]: time="2026-03-10T01:46:57.375770125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8zntc,Uid:dc8bbe92-2f69-4910-9026-e269bea6ec3f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885\""
Mar 10 01:46:57.377448 containerd[1511]: time="2026-03-10T01:46:57.377053049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2tp6,Uid:0c859369-6139-40a2-878c-0a06c7b6fa97,Namespace:kube-system,Attempt:0,}"
Mar 10 01:46:57.379261 containerd[1511]: time="2026-03-10T01:46:57.379226187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 10 01:46:57.407481 containerd[1511]: time="2026-03-10T01:46:57.407353630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:46:57.407788 containerd[1511]: time="2026-03-10T01:46:57.407508789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:46:57.407788 containerd[1511]: time="2026-03-10T01:46:57.407597554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:46:57.407788 containerd[1511]: time="2026-03-10T01:46:57.407760949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:46:57.435893 systemd[1]: Started cri-containerd-cc215ab9faf9c64a14dcdf4555891810f0c45b1095ef5290db1e3e3759d9293b.scope - libcontainer container cc215ab9faf9c64a14dcdf4555891810f0c45b1095ef5290db1e3e3759d9293b.
Mar 10 01:46:57.477225 containerd[1511]: time="2026-03-10T01:46:57.477142544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2tp6,Uid:0c859369-6139-40a2-878c-0a06c7b6fa97,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc215ab9faf9c64a14dcdf4555891810f0c45b1095ef5290db1e3e3759d9293b\""
Mar 10 01:46:57.485562 containerd[1511]: time="2026-03-10T01:46:57.485170185Z" level=info msg="CreateContainer within sandbox \"cc215ab9faf9c64a14dcdf4555891810f0c45b1095ef5290db1e3e3759d9293b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:46:57.499926 containerd[1511]: time="2026-03-10T01:46:57.499880500Z" level=info msg="CreateContainer within sandbox \"cc215ab9faf9c64a14dcdf4555891810f0c45b1095ef5290db1e3e3759d9293b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84c03ff093dd08bcb8c6d78b0b34f797cdb1186f2273835ea8563d674aed5442\""
Mar 10 01:46:57.501993 containerd[1511]: time="2026-03-10T01:46:57.501964182Z" level=info msg="StartContainer for \"84c03ff093dd08bcb8c6d78b0b34f797cdb1186f2273835ea8563d674aed5442\""
Mar 10 01:46:57.541895 systemd[1]: Started cri-containerd-84c03ff093dd08bcb8c6d78b0b34f797cdb1186f2273835ea8563d674aed5442.scope - libcontainer container 84c03ff093dd08bcb8c6d78b0b34f797cdb1186f2273835ea8563d674aed5442.
Mar 10 01:46:57.592166 containerd[1511]: time="2026-03-10T01:46:57.591757477Z" level=info msg="StartContainer for \"84c03ff093dd08bcb8c6d78b0b34f797cdb1186f2273835ea8563d674aed5442\" returns successfully"
Mar 10 01:46:59.276346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622175723.mount: Deactivated successfully.
Mar 10 01:47:00.880570 containerd[1511]: time="2026-03-10T01:47:00.880369661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:00.882240 containerd[1511]: time="2026-03-10T01:47:00.881815669Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 10 01:47:00.882240 containerd[1511]: time="2026-03-10T01:47:00.882188450Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:00.885112 containerd[1511]: time="2026-03-10T01:47:00.885016433Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:00.886839 containerd[1511]: time="2026-03-10T01:47:00.886206158Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.506938892s"
Mar 10 01:47:00.886839 containerd[1511]: time="2026-03-10T01:47:00.886247986Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 10 01:47:00.894622 containerd[1511]: time="2026-03-10T01:47:00.894348390Z" level=info msg="CreateContainer within sandbox \"2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 10 01:47:00.908358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77581259.mount: Deactivated successfully.
Mar 10 01:47:00.912065 containerd[1511]: time="2026-03-10T01:47:00.910827415Z" level=info msg="CreateContainer within sandbox \"2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303\""
Mar 10 01:47:00.914209 containerd[1511]: time="2026-03-10T01:47:00.912301882Z" level=info msg="StartContainer for \"b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303\""
Mar 10 01:47:00.948841 systemd[1]: Started cri-containerd-b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303.scope - libcontainer container b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303.
Mar 10 01:47:00.986298 containerd[1511]: time="2026-03-10T01:47:00.986223448Z" level=info msg="StartContainer for \"b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303\" returns successfully"
Mar 10 01:47:01.581313 kubelet[2643]: I0310 01:47:01.580714 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-d2tp6" podStartSLOduration=5.5806968139999995 podStartE2EDuration="5.580696814s" podCreationTimestamp="2026-03-10 01:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:46:58.568732474 +0000 UTC m=+8.299531529" watchObservedRunningTime="2026-03-10 01:47:01.580696814 +0000 UTC m=+11.311495867"
Mar 10 01:47:02.795795 kubelet[2643]: I0310 01:47:02.795663 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-8zntc" podStartSLOduration=3.285185335 podStartE2EDuration="6.794498037s" podCreationTimestamp="2026-03-10 01:46:56 +0000 UTC" firstStartedPulling="2026-03-10 01:46:57.378759239 +0000 UTC m=+7.109558283" lastFinishedPulling="2026-03-10 01:47:00.888071941 +0000 UTC m=+10.618870985" observedRunningTime="2026-03-10 01:47:01.580973841 +0000 UTC m=+11.311772889" watchObservedRunningTime="2026-03-10 01:47:02.794498037 +0000 UTC m=+12.525297101"
Mar 10 01:47:04.410116 systemd[1]: cri-containerd-b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303.scope: Deactivated successfully.
Mar 10 01:47:04.460285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303-rootfs.mount: Deactivated successfully.
Mar 10 01:47:04.470610 containerd[1511]: time="2026-03-10T01:47:04.464883973Z" level=info msg="shim disconnected" id=b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303 namespace=k8s.io
Mar 10 01:47:04.471146 containerd[1511]: time="2026-03-10T01:47:04.470633857Z" level=warning msg="cleaning up after shim disconnected" id=b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303 namespace=k8s.io
Mar 10 01:47:04.471146 containerd[1511]: time="2026-03-10T01:47:04.470659949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:47:04.574758 kubelet[2643]: I0310 01:47:04.574720 2643 scope.go:122] "RemoveContainer" containerID="b07508682967729e8926b7ad27987aaeeb945fe31876912e20a08fe2be614303"
Mar 10 01:47:04.580020 containerd[1511]: time="2026-03-10T01:47:04.579975958Z" level=info msg="CreateContainer within sandbox \"2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 10 01:47:04.600491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830378207.mount: Deactivated successfully.
Mar 10 01:47:04.602410 containerd[1511]: time="2026-03-10T01:47:04.602357002Z" level=info msg="CreateContainer within sandbox \"2c0674b2288f7e8b6124f8a2d22e1f83f598895819a40a65d60c64660225d885\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9e5e0c6abcea15bf348cd89deaf28483ec1f20bac5fd842b6c714ac3d1ca6897\""
Mar 10 01:47:04.604162 containerd[1511]: time="2026-03-10T01:47:04.604131306Z" level=info msg="StartContainer for \"9e5e0c6abcea15bf348cd89deaf28483ec1f20bac5fd842b6c714ac3d1ca6897\""
Mar 10 01:47:04.681846 systemd[1]: Started cri-containerd-9e5e0c6abcea15bf348cd89deaf28483ec1f20bac5fd842b6c714ac3d1ca6897.scope - libcontainer container 9e5e0c6abcea15bf348cd89deaf28483ec1f20bac5fd842b6c714ac3d1ca6897.
Mar 10 01:47:04.815909 containerd[1511]: time="2026-03-10T01:47:04.815790949Z" level=info msg="StartContainer for \"9e5e0c6abcea15bf348cd89deaf28483ec1f20bac5fd842b6c714ac3d1ca6897\" returns successfully"
Mar 10 01:47:08.012345 sudo[1742]: pam_unix(sudo:session): session closed for user root
Mar 10 01:47:08.104187 sshd[1727]: pam_unix(sshd:session): session closed for user core
Mar 10 01:47:08.110502 systemd[1]: sshd@6-10.230.66.170:22-68.220.241.50:49124.service: Deactivated successfully.
Mar 10 01:47:08.114769 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:47:08.115983 systemd[1]: session-9.scope: Consumed 5.005s CPU time, 156.3M memory peak, 0B memory swap peak.
Mar 10 01:47:08.119381 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:47:08.122765 systemd-logind[1485]: Removed session 9.
Mar 10 01:47:09.339830 systemd[1]: Started sshd@7-10.230.66.170:22-159.65.30.95:43012.service - OpenSSH per-connection server daemon (159.65.30.95:43012).
Mar 10 01:47:09.504170 sshd[3115]: Connection closed by authenticating user root 159.65.30.95 port 43012 [preauth]
Mar 10 01:47:09.507917 systemd[1]: sshd@7-10.230.66.170:22-159.65.30.95:43012.service: Deactivated successfully.
Mar 10 01:47:11.947007 systemd[1]: Created slice kubepods-besteffort-pod8e84991c_9619_42f0_8751_878061a9515a.slice - libcontainer container kubepods-besteffort-pod8e84991c_9619_42f0_8751_878061a9515a.slice. Mar 10 01:47:12.011324 kubelet[2643]: I0310 01:47:12.011247 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e84991c-9619-42f0-8751-878061a9515a-tigera-ca-bundle\") pod \"calico-typha-57fb89f5c-g5kn7\" (UID: \"8e84991c-9619-42f0-8751-878061a9515a\") " pod="calico-system/calico-typha-57fb89f5c-g5kn7" Mar 10 01:47:12.011978 kubelet[2643]: I0310 01:47:12.011332 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e84991c-9619-42f0-8751-878061a9515a-typha-certs\") pod \"calico-typha-57fb89f5c-g5kn7\" (UID: \"8e84991c-9619-42f0-8751-878061a9515a\") " pod="calico-system/calico-typha-57fb89f5c-g5kn7" Mar 10 01:47:12.011978 kubelet[2643]: I0310 01:47:12.011363 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdjl\" (UniqueName: \"kubernetes.io/projected/8e84991c-9619-42f0-8751-878061a9515a-kube-api-access-zzdjl\") pod \"calico-typha-57fb89f5c-g5kn7\" (UID: \"8e84991c-9619-42f0-8751-878061a9515a\") " pod="calico-system/calico-typha-57fb89f5c-g5kn7" Mar 10 01:47:12.067012 systemd[1]: Created slice kubepods-besteffort-pod75d005ab_247b_4e98_b84a_65f84ac21dc0.slice - libcontainer container kubepods-besteffort-pod75d005ab_247b_4e98_b84a_65f84ac21dc0.slice. 
Mar 10 01:47:12.111792 kubelet[2643]: I0310 01:47:12.111745 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-var-run-calico\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.111968 kubelet[2643]: I0310 01:47:12.111803 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-flexvol-driver-host\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.111968 kubelet[2643]: I0310 01:47:12.111831 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-policysync\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.111968 kubelet[2643]: I0310 01:47:12.111860 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-sys-fs\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.111968 kubelet[2643]: I0310 01:47:12.111885 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-cni-bin-dir\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.111968 kubelet[2643]: I0310 01:47:12.111920 2643 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-xtables-lock\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112180 kubelet[2643]: I0310 01:47:12.111965 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-nodeproc\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112180 kubelet[2643]: I0310 01:47:12.111993 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d005ab-247b-4e98-b84a-65f84ac21dc0-tigera-ca-bundle\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112180 kubelet[2643]: I0310 01:47:12.112017 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6k8\" (UniqueName: \"kubernetes.io/projected/75d005ab-247b-4e98-b84a-65f84ac21dc0-kube-api-access-7j6k8\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112180 kubelet[2643]: I0310 01:47:12.112043 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-cni-log-dir\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112180 kubelet[2643]: I0310 01:47:12.112066 2643 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-cni-net-dir\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112371 kubelet[2643]: I0310 01:47:12.112098 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-bpffs\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112371 kubelet[2643]: I0310 01:47:12.112148 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-var-lib-calico\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112371 kubelet[2643]: I0310 01:47:12.112190 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d005ab-247b-4e98-b84a-65f84ac21dc0-lib-modules\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.112371 kubelet[2643]: I0310 01:47:12.112218 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75d005ab-247b-4e98-b84a-65f84ac21dc0-node-certs\") pod \"calico-node-94n98\" (UID: \"75d005ab-247b-4e98-b84a-65f84ac21dc0\") " pod="calico-system/calico-node-94n98" Mar 10 01:47:12.168989 kubelet[2643]: E0310 01:47:12.168362 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:12.213096 kubelet[2643]: I0310 01:47:12.212651 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dccb67c-d463-4240-ae02-985ee84c0680-registration-dir\") pod \"csi-node-driver-vlwxn\" (UID: \"4dccb67c-d463-4240-ae02-985ee84c0680\") " pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:12.213244 kubelet[2643]: I0310 01:47:12.213195 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp87s\" (UniqueName: \"kubernetes.io/projected/4dccb67c-d463-4240-ae02-985ee84c0680-kube-api-access-dp87s\") pod \"csi-node-driver-vlwxn\" (UID: \"4dccb67c-d463-4240-ae02-985ee84c0680\") " pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:12.215390 kubelet[2643]: I0310 01:47:12.213317 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4dccb67c-d463-4240-ae02-985ee84c0680-varrun\") pod \"csi-node-driver-vlwxn\" (UID: \"4dccb67c-d463-4240-ae02-985ee84c0680\") " pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:12.215390 kubelet[2643]: I0310 01:47:12.214679 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dccb67c-d463-4240-ae02-985ee84c0680-socket-dir\") pod \"csi-node-driver-vlwxn\" (UID: \"4dccb67c-d463-4240-ae02-985ee84c0680\") " pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:12.215390 kubelet[2643]: I0310 01:47:12.214825 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4dccb67c-d463-4240-ae02-985ee84c0680-kubelet-dir\") pod \"csi-node-driver-vlwxn\" (UID: \"4dccb67c-d463-4240-ae02-985ee84c0680\") " pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:12.222332 kubelet[2643]: E0310 01:47:12.222288 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:12.222332 kubelet[2643]: W0310 01:47:12.222331 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:12.222480 kubelet[2643]: E0310 01:47:12.222387 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:12.227766 kubelet[2643]: E0310 01:47:12.227741 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:12.227987 kubelet[2643]: W0310 01:47:12.227958 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:12.228199 kubelet[2643]: E0310 01:47:12.228177 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:12.257115 containerd[1511]: time="2026-03-10T01:47:12.257067386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57fb89f5c-g5kn7,Uid:8e84991c-9619-42f0-8751-878061a9515a,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:12.359304 containerd[1511]: time="2026-03-10T01:47:12.357259525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:12.359304 containerd[1511]: time="2026-03-10T01:47:12.357643341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:12.359304 containerd[1511]: time="2026-03-10T01:47:12.357710934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:12.359304 containerd[1511]: time="2026-03-10T01:47:12.358245475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:12.366660 kubelet[2643]: E0310 01:47:12.366496 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:12.366660 kubelet[2643]: W0310 01:47:12.366545 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:12.366660 kubelet[2643]: E0310 01:47:12.366587 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:12.390462 containerd[1511]: time="2026-03-10T01:47:12.390341449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-94n98,Uid:75d005ab-247b-4e98-b84a-65f84ac21dc0,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:12.434674 systemd[1]: Started cri-containerd-9bb39d8006b7ad69ebd73b5892cbe01a87a0590407704b2275a55bfa03c2dc54.scope - libcontainer container 9bb39d8006b7ad69ebd73b5892cbe01a87a0590407704b2275a55bfa03c2dc54. Mar 10 01:47:12.478695 containerd[1511]: time="2026-03-10T01:47:12.477800494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:12.478695 containerd[1511]: time="2026-03-10T01:47:12.477951328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:12.478695 containerd[1511]: time="2026-03-10T01:47:12.477971891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:12.482687 containerd[1511]: time="2026-03-10T01:47:12.479684927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:12.517767 systemd[1]: Started cri-containerd-9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e.scope - libcontainer container 9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e. Mar 10 01:47:12.569768 containerd[1511]: time="2026-03-10T01:47:12.569633309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57fb89f5c-g5kn7,Uid:8e84991c-9619-42f0-8751-878061a9515a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bb39d8006b7ad69ebd73b5892cbe01a87a0590407704b2275a55bfa03c2dc54\"" Mar 10 01:47:12.573093 containerd[1511]: time="2026-03-10T01:47:12.572751979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 01:47:12.588496 containerd[1511]: time="2026-03-10T01:47:12.588456990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-94n98,Uid:75d005ab-247b-4e98-b84a-65f84ac21dc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\"" Mar 10 01:47:13.493075 kubelet[2643]: E0310 01:47:13.492470 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:14.194057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596876451.mount: Deactivated successfully. 
Mar 10 01:47:15.492981 kubelet[2643]: E0310 01:47:15.492361 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:16.548827 containerd[1511]: time="2026-03-10T01:47:16.548768566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:16.550784 containerd[1511]: time="2026-03-10T01:47:16.550707461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 10 01:47:16.552248 containerd[1511]: time="2026-03-10T01:47:16.552183079Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:16.554968 containerd[1511]: time="2026-03-10T01:47:16.554900102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:16.562167 containerd[1511]: time="2026-03-10T01:47:16.561688095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.988873077s" Mar 10 01:47:16.562167 containerd[1511]: time="2026-03-10T01:47:16.561739929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 10 01:47:16.563617 containerd[1511]: time="2026-03-10T01:47:16.563492806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 01:47:16.591462 containerd[1511]: time="2026-03-10T01:47:16.591379136Z" level=info msg="CreateContainer within sandbox \"9bb39d8006b7ad69ebd73b5892cbe01a87a0590407704b2275a55bfa03c2dc54\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 01:47:16.611285 containerd[1511]: time="2026-03-10T01:47:16.611140168Z" level=info msg="CreateContainer within sandbox \"9bb39d8006b7ad69ebd73b5892cbe01a87a0590407704b2275a55bfa03c2dc54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f2929b1c10ebc146bbc22ac88c6224d51cdb427945995d2d07e81c18d3214446\"" Mar 10 01:47:16.612639 containerd[1511]: time="2026-03-10T01:47:16.612040520Z" level=info msg="StartContainer for \"f2929b1c10ebc146bbc22ac88c6224d51cdb427945995d2d07e81c18d3214446\"" Mar 10 01:47:16.692758 systemd[1]: Started cri-containerd-f2929b1c10ebc146bbc22ac88c6224d51cdb427945995d2d07e81c18d3214446.scope - libcontainer container f2929b1c10ebc146bbc22ac88c6224d51cdb427945995d2d07e81c18d3214446. 
Mar 10 01:47:16.756912 containerd[1511]: time="2026-03-10T01:47:16.756854067Z" level=info msg="StartContainer for \"f2929b1c10ebc146bbc22ac88c6224d51cdb427945995d2d07e81c18d3214446\" returns successfully" Mar 10 01:47:17.492409 kubelet[2643]: E0310 01:47:17.492254 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:17.729390 kubelet[2643]: I0310 01:47:17.728695 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-57fb89f5c-g5kn7" podStartSLOduration=2.737949126 podStartE2EDuration="6.728681671s" podCreationTimestamp="2026-03-10 01:47:11 +0000 UTC" firstStartedPulling="2026-03-10 01:47:12.57224325 +0000 UTC m=+22.303042300" lastFinishedPulling="2026-03-10 01:47:16.5629758 +0000 UTC m=+26.293774845" observedRunningTime="2026-03-10 01:47:17.727231242 +0000 UTC m=+27.458030294" watchObservedRunningTime="2026-03-10 01:47:17.728681671 +0000 UTC m=+27.459480725" Mar 10 01:47:17.730738 kubelet[2643]: E0310 01:47:17.729384 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.730738 kubelet[2643]: W0310 01:47:17.729971 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.730738 kubelet[2643]: E0310 01:47:17.730009 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.730738 kubelet[2643]: E0310 01:47:17.730582 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.730738 kubelet[2643]: W0310 01:47:17.730597 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.730738 kubelet[2643]: E0310 01:47:17.730653 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.731951 kubelet[2643]: E0310 01:47:17.731273 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.731951 kubelet[2643]: W0310 01:47:17.731321 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.731951 kubelet[2643]: E0310 01:47:17.731340 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.732785 kubelet[2643]: E0310 01:47:17.732563 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.732785 kubelet[2643]: W0310 01:47:17.732583 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.732785 kubelet[2643]: E0310 01:47:17.732600 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.733200 kubelet[2643]: E0310 01:47:17.732967 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.733200 kubelet[2643]: W0310 01:47:17.732982 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.733200 kubelet[2643]: E0310 01:47:17.732997 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.733566 kubelet[2643]: E0310 01:47:17.733386 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.733566 kubelet[2643]: W0310 01:47:17.733407 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.733566 kubelet[2643]: E0310 01:47:17.733423 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.734235 kubelet[2643]: E0310 01:47:17.733890 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.734235 kubelet[2643]: W0310 01:47:17.733904 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.734235 kubelet[2643]: E0310 01:47:17.733932 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.734765 kubelet[2643]: E0310 01:47:17.734308 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.734765 kubelet[2643]: W0310 01:47:17.734322 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.734765 kubelet[2643]: E0310 01:47:17.734356 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.735719 kubelet[2643]: E0310 01:47:17.734957 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.735719 kubelet[2643]: W0310 01:47:17.734971 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.735719 kubelet[2643]: E0310 01:47:17.734986 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.735719 kubelet[2643]: E0310 01:47:17.735491 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.735719 kubelet[2643]: W0310 01:47:17.735503 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.735719 kubelet[2643]: E0310 01:47:17.735517 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.736189 kubelet[2643]: E0310 01:47:17.735962 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.736189 kubelet[2643]: W0310 01:47:17.735986 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.736189 kubelet[2643]: E0310 01:47:17.736001 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.736915 kubelet[2643]: E0310 01:47:17.736316 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.736915 kubelet[2643]: W0310 01:47:17.736343 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.736915 kubelet[2643]: E0310 01:47:17.736358 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.737370 kubelet[2643]: E0310 01:47:17.737249 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.737370 kubelet[2643]: W0310 01:47:17.737281 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.737370 kubelet[2643]: E0310 01:47:17.737295 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.737746 kubelet[2643]: E0310 01:47:17.737641 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.737746 kubelet[2643]: W0310 01:47:17.737655 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.737746 kubelet[2643]: E0310 01:47:17.737669 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.738010 kubelet[2643]: E0310 01:47:17.737973 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.738010 kubelet[2643]: W0310 01:47:17.737987 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.738010 kubelet[2643]: E0310 01:47:17.738001 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.768234 kubelet[2643]: E0310 01:47:17.768093 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.770742 kubelet[2643]: W0310 01:47:17.770206 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.770742 kubelet[2643]: E0310 01:47:17.770352 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.772740 kubelet[2643]: E0310 01:47:17.772392 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.772740 kubelet[2643]: W0310 01:47:17.772430 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.772740 kubelet[2643]: E0310 01:47:17.772448 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.773433 kubelet[2643]: E0310 01:47:17.773274 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.773433 kubelet[2643]: W0310 01:47:17.773385 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.773843 kubelet[2643]: E0310 01:47:17.773405 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.774407 kubelet[2643]: E0310 01:47:17.774303 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.774407 kubelet[2643]: W0310 01:47:17.774321 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.774407 kubelet[2643]: E0310 01:47:17.774337 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.775729 kubelet[2643]: E0310 01:47:17.775483 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.775729 kubelet[2643]: W0310 01:47:17.775500 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.775729 kubelet[2643]: E0310 01:47:17.775546 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.777059 kubelet[2643]: E0310 01:47:17.776881 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.777059 kubelet[2643]: W0310 01:47:17.776901 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.777059 kubelet[2643]: E0310 01:47:17.776918 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.779005 kubelet[2643]: E0310 01:47:17.778848 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.779005 kubelet[2643]: W0310 01:47:17.778867 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.779005 kubelet[2643]: E0310 01:47:17.778884 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.779950 kubelet[2643]: E0310 01:47:17.779645 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.779950 kubelet[2643]: W0310 01:47:17.779664 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.779950 kubelet[2643]: E0310 01:47:17.779682 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.780365 kubelet[2643]: E0310 01:47:17.780183 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.780365 kubelet[2643]: W0310 01:47:17.780199 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.780365 kubelet[2643]: E0310 01:47:17.780215 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.780949 kubelet[2643]: E0310 01:47:17.780777 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.780949 kubelet[2643]: W0310 01:47:17.780794 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.780949 kubelet[2643]: E0310 01:47:17.780810 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.781925 kubelet[2643]: E0310 01:47:17.781738 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.781925 kubelet[2643]: W0310 01:47:17.781756 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.781925 kubelet[2643]: E0310 01:47:17.781772 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.782324 kubelet[2643]: E0310 01:47:17.782110 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.782324 kubelet[2643]: W0310 01:47:17.782122 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.782324 kubelet[2643]: E0310 01:47:17.782136 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.783146 kubelet[2643]: E0310 01:47:17.782923 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.783146 kubelet[2643]: W0310 01:47:17.782940 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.783146 kubelet[2643]: E0310 01:47:17.782956 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.783495 kubelet[2643]: E0310 01:47:17.783444 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.783495 kubelet[2643]: W0310 01:47:17.783460 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.783495 kubelet[2643]: E0310 01:47:17.783476 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.784506 kubelet[2643]: E0310 01:47:17.784058 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.784506 kubelet[2643]: W0310 01:47:17.784075 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.784506 kubelet[2643]: E0310 01:47:17.784090 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.785570 kubelet[2643]: E0310 01:47:17.785500 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.785570 kubelet[2643]: W0310 01:47:17.785559 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.785728 kubelet[2643]: E0310 01:47:17.785578 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:17.786165 kubelet[2643]: E0310 01:47:17.786132 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.786165 kubelet[2643]: W0310 01:47:17.786153 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.786270 kubelet[2643]: E0310 01:47:17.786169 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:47:17.786504 kubelet[2643]: E0310 01:47:17.786474 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:47:17.786504 kubelet[2643]: W0310 01:47:17.786495 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:47:17.786656 kubelet[2643]: E0310 01:47:17.786511 2643 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:47:18.234135 containerd[1511]: time="2026-03-10T01:47:18.232148982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:18.234135 containerd[1511]: time="2026-03-10T01:47:18.233483747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 10 01:47:18.236676 containerd[1511]: time="2026-03-10T01:47:18.236610021Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:18.240191 containerd[1511]: time="2026-03-10T01:47:18.240153760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:18.242566 containerd[1511]: time="2026-03-10T01:47:18.241784913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.677706658s" Mar 10 01:47:18.243580 containerd[1511]: time="2026-03-10T01:47:18.242711040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 10 01:47:18.249204 containerd[1511]: time="2026-03-10T01:47:18.249160266Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 01:47:18.274462 containerd[1511]: time="2026-03-10T01:47:18.274410908Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833\"" Mar 10 01:47:18.277601 containerd[1511]: time="2026-03-10T01:47:18.275953541Z" level=info msg="StartContainer for \"802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833\"" Mar 10 01:47:18.349803 systemd[1]: Started cri-containerd-802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833.scope - libcontainer container 802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833. Mar 10 01:47:18.398321 containerd[1511]: time="2026-03-10T01:47:18.398252170Z" level=info msg="StartContainer for \"802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833\" returns successfully" Mar 10 01:47:18.426929 systemd[1]: cri-containerd-802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833.scope: Deactivated successfully. Mar 10 01:47:18.463434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833-rootfs.mount: Deactivated successfully. 
Mar 10 01:47:18.627354 containerd[1511]: time="2026-03-10T01:47:18.627255869Z" level=info msg="shim disconnected" id=802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833 namespace=k8s.io Mar 10 01:47:18.627354 containerd[1511]: time="2026-03-10T01:47:18.627350005Z" level=warning msg="cleaning up after shim disconnected" id=802e7e6e3a819246b4b2ce9ba7a1c84617af1acce2ac18726b86082a8556a833 namespace=k8s.io Mar 10 01:47:18.627354 containerd[1511]: time="2026-03-10T01:47:18.627371360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:47:18.717444 kubelet[2643]: I0310 01:47:18.716515 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:47:18.721418 containerd[1511]: time="2026-03-10T01:47:18.720880341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 01:47:19.491783 kubelet[2643]: E0310 01:47:19.491710 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:21.492609 kubelet[2643]: E0310 01:47:21.492463 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:22.400602 kubelet[2643]: I0310 01:47:22.399976 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:47:23.492589 kubelet[2643]: E0310 01:47:23.492407 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:25.492765 kubelet[2643]: E0310 01:47:25.492694 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:27.492649 kubelet[2643]: E0310 01:47:27.492579 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:29.492719 kubelet[2643]: E0310 01:47:29.492649 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:30.495471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268495822.mount: Deactivated successfully. 
Mar 10 01:47:30.548615 containerd[1511]: time="2026-03-10T01:47:30.545186162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 10 01:47:30.549254 containerd[1511]: time="2026-03-10T01:47:30.542471318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:30.550612 containerd[1511]: time="2026-03-10T01:47:30.550575874Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:30.551900 containerd[1511]: time="2026-03-10T01:47:30.551863885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.830892895s" Mar 10 01:47:30.552067 containerd[1511]: time="2026-03-10T01:47:30.552036781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 10 01:47:30.552657 containerd[1511]: time="2026-03-10T01:47:30.552618620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:30.563717 containerd[1511]: time="2026-03-10T01:47:30.563678612Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 01:47:30.599080 containerd[1511]: time="2026-03-10T01:47:30.598857593Z" level=info 
msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545\"" Mar 10 01:47:30.601227 containerd[1511]: time="2026-03-10T01:47:30.600266773Z" level=info msg="StartContainer for \"a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545\"" Mar 10 01:47:30.695732 systemd[1]: Started cri-containerd-a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545.scope - libcontainer container a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545. Mar 10 01:47:30.745063 containerd[1511]: time="2026-03-10T01:47:30.743946089Z" level=info msg="StartContainer for \"a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545\" returns successfully" Mar 10 01:47:30.850598 systemd[1]: cri-containerd-a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545.scope: Deactivated successfully. 
Mar 10 01:47:30.887608 containerd[1511]: time="2026-03-10T01:47:30.885362148Z" level=info msg="shim disconnected" id=a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545 namespace=k8s.io Mar 10 01:47:30.888768 containerd[1511]: time="2026-03-10T01:47:30.887613160Z" level=warning msg="cleaning up after shim disconnected" id=a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545 namespace=k8s.io Mar 10 01:47:30.888768 containerd[1511]: time="2026-03-10T01:47:30.887640577Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:47:31.494596 kubelet[2643]: E0310 01:47:31.492411 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:31.496263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1031c557c544f0bb95d3b39154426163d67d86471a7dc14da363ddaee538545-rootfs.mount: Deactivated successfully. 
Mar 10 01:47:31.764897 containerd[1511]: time="2026-03-10T01:47:31.764741752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 01:47:33.492067 kubelet[2643]: E0310 01:47:33.492007 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:35.492651 kubelet[2643]: E0310 01:47:35.492140 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:36.513349 containerd[1511]: time="2026-03-10T01:47:36.512151526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:36.513349 containerd[1511]: time="2026-03-10T01:47:36.513278784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 10 01:47:36.514285 containerd[1511]: time="2026-03-10T01:47:36.513913022Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:36.516891 containerd[1511]: time="2026-03-10T01:47:36.516858165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:36.518068 containerd[1511]: time="2026-03-10T01:47:36.518029870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.753231665s" Mar 10 01:47:36.518158 containerd[1511]: time="2026-03-10T01:47:36.518073455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 10 01:47:36.523002 containerd[1511]: time="2026-03-10T01:47:36.522868774Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 01:47:36.549002 containerd[1511]: time="2026-03-10T01:47:36.548959308Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4\"" Mar 10 01:47:36.550597 containerd[1511]: time="2026-03-10T01:47:36.550547480Z" level=info msg="StartContainer for \"2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4\"" Mar 10 01:47:36.598771 systemd[1]: Started cri-containerd-2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4.scope - libcontainer container 2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4. 
Mar 10 01:47:36.648659 containerd[1511]: time="2026-03-10T01:47:36.648599862Z" level=info msg="StartContainer for \"2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4\" returns successfully" Mar 10 01:47:37.492575 kubelet[2643]: E0310 01:47:37.492482 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:37.713375 systemd[1]: cri-containerd-2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4.scope: Deactivated successfully. Mar 10 01:47:37.746340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4-rootfs.mount: Deactivated successfully. Mar 10 01:47:37.798985 kubelet[2643]: I0310 01:47:37.798953 2643 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 10 01:47:37.805742 containerd[1511]: time="2026-03-10T01:47:37.805674450Z" level=info msg="shim disconnected" id=2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4 namespace=k8s.io Mar 10 01:47:37.805742 containerd[1511]: time="2026-03-10T01:47:37.805741604Z" level=warning msg="cleaning up after shim disconnected" id=2b43ae496eca47312be0c26c63c650011ebb629af2a5cd1fdfcf0dfc60bee8c4 namespace=k8s.io Mar 10 01:47:37.806440 containerd[1511]: time="2026-03-10T01:47:37.805757075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:47:37.904486 systemd[1]: Created slice kubepods-burstable-pod6e988b93_f8c5_467f_9c4d_91992323f92f.slice - libcontainer container kubepods-burstable-pod6e988b93_f8c5_467f_9c4d_91992323f92f.slice. 
Mar 10 01:47:37.921490 kubelet[2643]: I0310 01:47:37.919466 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e988b93-f8c5-467f-9c4d-91992323f92f-config-volume\") pod \"coredns-7d764666f9-nsjn4\" (UID: \"6e988b93-f8c5-467f-9c4d-91992323f92f\") " pod="kube-system/coredns-7d764666f9-nsjn4" Mar 10 01:47:37.922067 kubelet[2643]: I0310 01:47:37.921813 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlg9v\" (UniqueName: \"kubernetes.io/projected/6e988b93-f8c5-467f-9c4d-91992323f92f-kube-api-access-nlg9v\") pod \"coredns-7d764666f9-nsjn4\" (UID: \"6e988b93-f8c5-467f-9c4d-91992323f92f\") " pod="kube-system/coredns-7d764666f9-nsjn4" Mar 10 01:47:37.924801 systemd[1]: Created slice kubepods-burstable-podab365807_38ad_4172_ae3a_3060d423ffa4.slice - libcontainer container kubepods-burstable-podab365807_38ad_4172_ae3a_3060d423ffa4.slice. Mar 10 01:47:37.940606 systemd[1]: Created slice kubepods-besteffort-pod2c23487f_710c_4788_9a24_3cedb377bd4f.slice - libcontainer container kubepods-besteffort-pod2c23487f_710c_4788_9a24_3cedb377bd4f.slice. Mar 10 01:47:37.956807 systemd[1]: Created slice kubepods-besteffort-pod210382ca_0415_41bb_ab2d_47423643647d.slice - libcontainer container kubepods-besteffort-pod210382ca_0415_41bb_ab2d_47423643647d.slice. Mar 10 01:47:37.973184 systemd[1]: Created slice kubepods-besteffort-pod9cd8a000_7827_4762_8257_bcda3540a9d8.slice - libcontainer container kubepods-besteffort-pod9cd8a000_7827_4762_8257_bcda3540a9d8.slice. Mar 10 01:47:37.990231 systemd[1]: Created slice kubepods-besteffort-poded5442cf_d7ee_42ff_87df_7d6e6ec79b47.slice - libcontainer container kubepods-besteffort-poded5442cf_d7ee_42ff_87df_7d6e6ec79b47.slice. 
Mar 10 01:47:38.001122 systemd[1]: Created slice kubepods-besteffort-pod940d2b11_362a_4d5f_8ab9_dd1a5a2dba04.slice - libcontainer container kubepods-besteffort-pod940d2b11_362a_4d5f_8ab9_dd1a5a2dba04.slice. Mar 10 01:47:38.022783 kubelet[2643]: I0310 01:47:38.022719 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8pq\" (UniqueName: \"kubernetes.io/projected/940d2b11-362a-4d5f-8ab9-dd1a5a2dba04-kube-api-access-sd8pq\") pod \"goldmane-9f7667bb8-gcmtp\" (UID: \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\") " pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 01:47:38.022783 kubelet[2643]: I0310 01:47:38.022791 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt27\" (UniqueName: \"kubernetes.io/projected/9cd8a000-7827-4762-8257-bcda3540a9d8-kube-api-access-rtt27\") pod \"whisker-c65d7f56b-799kz\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.023076 kubelet[2643]: I0310 01:47:38.022841 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wd8t\" (UniqueName: \"kubernetes.io/projected/ab365807-38ad-4172-ae3a-3060d423ffa4-kube-api-access-6wd8t\") pod \"coredns-7d764666f9-s4jtv\" (UID: \"ab365807-38ad-4172-ae3a-3060d423ffa4\") " pod="kube-system/coredns-7d764666f9-s4jtv" Mar 10 01:47:38.023076 kubelet[2643]: I0310 01:47:38.022869 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940d2b11-362a-4d5f-8ab9-dd1a5a2dba04-config\") pod \"goldmane-9f7667bb8-gcmtp\" (UID: \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\") " pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 01:47:38.023076 kubelet[2643]: I0310 01:47:38.022907 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/940d2b11-362a-4d5f-8ab9-dd1a5a2dba04-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-gcmtp\" (UID: \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\") " pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 01:47:38.023076 kubelet[2643]: I0310 01:47:38.022956 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-backend-key-pair\") pod \"whisker-c65d7f56b-799kz\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.023076 kubelet[2643]: I0310 01:47:38.022997 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c23487f-710c-4788-9a24-3cedb377bd4f-tigera-ca-bundle\") pod \"calico-kube-controllers-5b78bbc6cb-6czx4\" (UID: \"2c23487f-710c-4788-9a24-3cedb377bd4f\") " pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" Mar 10 01:47:38.023279 kubelet[2643]: I0310 01:47:38.023023 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749kt\" (UniqueName: \"kubernetes.io/projected/2c23487f-710c-4788-9a24-3cedb377bd4f-kube-api-access-749kt\") pod \"calico-kube-controllers-5b78bbc6cb-6czx4\" (UID: \"2c23487f-710c-4788-9a24-3cedb377bd4f\") " pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" Mar 10 01:47:38.023279 kubelet[2643]: I0310 01:47:38.023052 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhw8s\" (UniqueName: \"kubernetes.io/projected/210382ca-0415-41bb-ab2d-47423643647d-kube-api-access-fhw8s\") pod \"calico-apiserver-6d978ccb84-c5q29\" (UID: \"210382ca-0415-41bb-ab2d-47423643647d\") " pod="calico-system/calico-apiserver-6d978ccb84-c5q29" Mar 10 
01:47:38.023279 kubelet[2643]: I0310 01:47:38.023076 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab365807-38ad-4172-ae3a-3060d423ffa4-config-volume\") pod \"coredns-7d764666f9-s4jtv\" (UID: \"ab365807-38ad-4172-ae3a-3060d423ffa4\") " pod="kube-system/coredns-7d764666f9-s4jtv" Mar 10 01:47:38.023279 kubelet[2643]: I0310 01:47:38.023106 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/940d2b11-362a-4d5f-8ab9-dd1a5a2dba04-goldmane-key-pair\") pod \"goldmane-9f7667bb8-gcmtp\" (UID: \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\") " pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 01:47:38.023279 kubelet[2643]: I0310 01:47:38.023137 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw4bn\" (UniqueName: \"kubernetes.io/projected/ed5442cf-d7ee-42ff-87df-7d6e6ec79b47-kube-api-access-jw4bn\") pod \"calico-apiserver-6d978ccb84-5rk88\" (UID: \"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47\") " pod="calico-system/calico-apiserver-6d978ccb84-5rk88" Mar 10 01:47:38.023491 kubelet[2643]: I0310 01:47:38.023161 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-nginx-config\") pod \"whisker-c65d7f56b-799kz\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.023491 kubelet[2643]: I0310 01:47:38.023184 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-ca-bundle\") pod \"whisker-c65d7f56b-799kz\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " 
pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.023491 kubelet[2643]: I0310 01:47:38.023217 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/210382ca-0415-41bb-ab2d-47423643647d-calico-apiserver-certs\") pod \"calico-apiserver-6d978ccb84-c5q29\" (UID: \"210382ca-0415-41bb-ab2d-47423643647d\") " pod="calico-system/calico-apiserver-6d978ccb84-c5q29" Mar 10 01:47:38.023491 kubelet[2643]: I0310 01:47:38.023255 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed5442cf-d7ee-42ff-87df-7d6e6ec79b47-calico-apiserver-certs\") pod \"calico-apiserver-6d978ccb84-5rk88\" (UID: \"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47\") " pod="calico-system/calico-apiserver-6d978ccb84-5rk88" Mar 10 01:47:38.215631 containerd[1511]: time="2026-03-10T01:47:38.215043895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nsjn4,Uid:6e988b93-f8c5-467f-9c4d-91992323f92f,Namespace:kube-system,Attempt:0,}" Mar 10 01:47:38.238381 containerd[1511]: time="2026-03-10T01:47:38.238047097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4jtv,Uid:ab365807-38ad-4172-ae3a-3060d423ffa4,Namespace:kube-system,Attempt:0,}" Mar 10 01:47:38.261015 containerd[1511]: time="2026-03-10T01:47:38.260202871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b78bbc6cb-6czx4,Uid:2c23487f-710c-4788-9a24-3cedb377bd4f,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:38.283926 containerd[1511]: time="2026-03-10T01:47:38.283833716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-c5q29,Uid:210382ca-0415-41bb-ab2d-47423643647d,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:38.296340 containerd[1511]: time="2026-03-10T01:47:38.296272683Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c65d7f56b-799kz,Uid:9cd8a000-7827-4762-8257-bcda3540a9d8,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:38.299155 containerd[1511]: time="2026-03-10T01:47:38.299112374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-5rk88,Uid:ed5442cf-d7ee-42ff-87df-7d6e6ec79b47,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:38.329961 containerd[1511]: time="2026-03-10T01:47:38.329868925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gcmtp,Uid:940d2b11-362a-4d5f-8ab9-dd1a5a2dba04,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:38.677386 containerd[1511]: time="2026-03-10T01:47:38.677321816Z" level=error msg="Failed to destroy network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.679593 containerd[1511]: time="2026-03-10T01:47:38.679540711Z" level=error msg="Failed to destroy network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.683398 containerd[1511]: time="2026-03-10T01:47:38.683363860Z" level=error msg="encountered an error cleaning up failed sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.683942 containerd[1511]: time="2026-03-10T01:47:38.683905034Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5b78bbc6cb-6czx4,Uid:2c23487f-710c-4788-9a24-3cedb377bd4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.685573 kubelet[2643]: E0310 01:47:38.684647 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.685573 kubelet[2643]: E0310 01:47:38.684777 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" Mar 10 01:47:38.685573 kubelet[2643]: E0310 01:47:38.684833 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" Mar 10 01:47:38.687102 kubelet[2643]: E0310 01:47:38.684947 2643 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b78bbc6cb-6czx4_calico-system(2c23487f-710c-4788-9a24-3cedb377bd4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b78bbc6cb-6czx4_calico-system(2c23487f-710c-4788-9a24-3cedb377bd4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" podUID="2c23487f-710c-4788-9a24-3cedb377bd4f" Mar 10 01:47:38.690717 containerd[1511]: time="2026-03-10T01:47:38.689700523Z" level=error msg="encountered an error cleaning up failed sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.693082 containerd[1511]: time="2026-03-10T01:47:38.693041849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4jtv,Uid:ab365807-38ad-4172-ae3a-3060d423ffa4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.695080 kubelet[2643]: E0310 01:47:38.694849 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.695080 kubelet[2643]: E0310 01:47:38.695051 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-s4jtv" Mar 10 01:47:38.695860 kubelet[2643]: E0310 01:47:38.695080 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-s4jtv" Mar 10 01:47:38.695860 kubelet[2643]: E0310 01:47:38.695158 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-s4jtv_kube-system(ab365807-38ad-4172-ae3a-3060d423ffa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-s4jtv_kube-system(ab365807-38ad-4172-ae3a-3060d423ffa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-s4jtv" 
podUID="ab365807-38ad-4172-ae3a-3060d423ffa4" Mar 10 01:47:38.712276 containerd[1511]: time="2026-03-10T01:47:38.712111758Z" level=error msg="Failed to destroy network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.712837 containerd[1511]: time="2026-03-10T01:47:38.712782408Z" level=error msg="encountered an error cleaning up failed sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.712919 containerd[1511]: time="2026-03-10T01:47:38.712859300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c65d7f56b-799kz,Uid:9cd8a000-7827-4762-8257-bcda3540a9d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.713363 kubelet[2643]: E0310 01:47:38.713171 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.713363 kubelet[2643]: E0310 01:47:38.713248 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.713363 kubelet[2643]: E0310 01:47:38.713275 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c65d7f56b-799kz" Mar 10 01:47:38.715157 kubelet[2643]: E0310 01:47:38.713358 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c65d7f56b-799kz_calico-system(9cd8a000-7827-4762-8257-bcda3540a9d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c65d7f56b-799kz_calico-system(9cd8a000-7827-4762-8257-bcda3540a9d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c65d7f56b-799kz" podUID="9cd8a000-7827-4762-8257-bcda3540a9d8" Mar 10 01:47:38.726007 containerd[1511]: time="2026-03-10T01:47:38.725848951Z" level=error msg="Failed to destroy network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.726691 containerd[1511]: time="2026-03-10T01:47:38.726602948Z" level=error msg="encountered an error cleaning up failed sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.726968 containerd[1511]: time="2026-03-10T01:47:38.726845114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gcmtp,Uid:940d2b11-362a-4d5f-8ab9-dd1a5a2dba04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.727202 kubelet[2643]: E0310 01:47:38.727088 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.727202 kubelet[2643]: E0310 01:47:38.727149 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 
01:47:38.728080 kubelet[2643]: E0310 01:47:38.727369 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-gcmtp" Mar 10 01:47:38.728080 kubelet[2643]: E0310 01:47:38.727457 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-gcmtp_calico-system(940d2b11-362a-4d5f-8ab9-dd1a5a2dba04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-gcmtp_calico-system(940d2b11-362a-4d5f-8ab9-dd1a5a2dba04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-gcmtp" podUID="940d2b11-362a-4d5f-8ab9-dd1a5a2dba04" Mar 10 01:47:38.737848 containerd[1511]: time="2026-03-10T01:47:38.737671311Z" level=error msg="Failed to destroy network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.738835 containerd[1511]: time="2026-03-10T01:47:38.738639643Z" level=error msg="encountered an error cleaning up failed sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.738835 containerd[1511]: time="2026-03-10T01:47:38.738710926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-5rk88,Uid:ed5442cf-d7ee-42ff-87df-7d6e6ec79b47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.739154 kubelet[2643]: E0310 01:47:38.739067 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.739154 kubelet[2643]: E0310 01:47:38.739133 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d978ccb84-5rk88" Mar 10 01:47:38.739647 kubelet[2643]: E0310 01:47:38.739171 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d978ccb84-5rk88" Mar 10 01:47:38.739647 kubelet[2643]: E0310 01:47:38.739249 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d978ccb84-5rk88_calico-system(ed5442cf-d7ee-42ff-87df-7d6e6ec79b47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d978ccb84-5rk88_calico-system(ed5442cf-d7ee-42ff-87df-7d6e6ec79b47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d978ccb84-5rk88" podUID="ed5442cf-d7ee-42ff-87df-7d6e6ec79b47" Mar 10 01:47:38.754564 containerd[1511]: time="2026-03-10T01:47:38.752840354Z" level=error msg="Failed to destroy network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.755023 containerd[1511]: time="2026-03-10T01:47:38.754972892Z" level=error msg="encountered an error cleaning up failed sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.755293 containerd[1511]: time="2026-03-10T01:47:38.755250282Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-c5q29,Uid:210382ca-0415-41bb-ab2d-47423643647d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.758692 kubelet[2643]: E0310 01:47:38.755858 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.758692 kubelet[2643]: E0310 01:47:38.755931 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d978ccb84-c5q29" Mar 10 01:47:38.758692 kubelet[2643]: E0310 01:47:38.755968 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d978ccb84-c5q29" Mar 10 01:47:38.758880 containerd[1511]: time="2026-03-10T01:47:38.757622892Z" level=error msg="Failed to 
destroy network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.758950 kubelet[2643]: E0310 01:47:38.756068 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d978ccb84-c5q29_calico-system(210382ca-0415-41bb-ab2d-47423643647d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d978ccb84-c5q29_calico-system(210382ca-0415-41bb-ab2d-47423643647d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d978ccb84-c5q29" podUID="210382ca-0415-41bb-ab2d-47423643647d" Mar 10 01:47:38.764930 containerd[1511]: time="2026-03-10T01:47:38.764664736Z" level=error msg="encountered an error cleaning up failed sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.764930 containerd[1511]: time="2026-03-10T01:47:38.764738599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nsjn4,Uid:6e988b93-f8c5-467f-9c4d-91992323f92f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.765125 kubelet[2643]: E0310 01:47:38.764977 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.765125 kubelet[2643]: E0310 01:47:38.765060 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nsjn4" Mar 10 01:47:38.765125 kubelet[2643]: E0310 01:47:38.765100 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nsjn4" Mar 10 01:47:38.765284 kubelet[2643]: E0310 01:47:38.765177 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-nsjn4_kube-system(6e988b93-f8c5-467f-9c4d-91992323f92f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-nsjn4_kube-system(6e988b93-f8c5-467f-9c4d-91992323f92f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-nsjn4" podUID="6e988b93-f8c5-467f-9c4d-91992323f92f" Mar 10 01:47:38.779524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb-shm.mount: Deactivated successfully. Mar 10 01:47:38.780169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422-shm.mount: Deactivated successfully. Mar 10 01:47:38.793819 kubelet[2643]: I0310 01:47:38.793779 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:47:38.797554 kubelet[2643]: I0310 01:47:38.797191 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Mar 10 01:47:38.802862 kubelet[2643]: I0310 01:47:38.802738 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:38.806586 kubelet[2643]: I0310 01:47:38.806560 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:38.825726 containerd[1511]: time="2026-03-10T01:47:38.825517509Z" level=info msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" Mar 10 01:47:38.830061 containerd[1511]: time="2026-03-10T01:47:38.828588581Z" level=info msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\"" Mar 10 01:47:38.832004 containerd[1511]: 
time="2026-03-10T01:47:38.830463364Z" level=info msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\"" Mar 10 01:47:38.832004 containerd[1511]: time="2026-03-10T01:47:38.830852694Z" level=info msg="Ensure that sandbox 08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb in task-service has been cleanup successfully" Mar 10 01:47:38.832558 containerd[1511]: time="2026-03-10T01:47:38.832236089Z" level=info msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" Mar 10 01:47:38.836425 containerd[1511]: time="2026-03-10T01:47:38.836389996Z" level=info msg="Ensure that sandbox 6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422 in task-service has been cleanup successfully" Mar 10 01:47:38.839893 containerd[1511]: time="2026-03-10T01:47:38.839859587Z" level=info msg="Ensure that sandbox 5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0 in task-service has been cleanup successfully" Mar 10 01:47:38.844545 containerd[1511]: time="2026-03-10T01:47:38.844380964Z" level=info msg="Ensure that sandbox b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8 in task-service has been cleanup successfully" Mar 10 01:47:38.857809 kubelet[2643]: I0310 01:47:38.856051 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:38.865637 containerd[1511]: time="2026-03-10T01:47:38.864866271Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 01:47:38.873806 kubelet[2643]: I0310 01:47:38.873754 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:38.874114 containerd[1511]: 
time="2026-03-10T01:47:38.868046251Z" level=info msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" Mar 10 01:47:38.874450 containerd[1511]: time="2026-03-10T01:47:38.874392275Z" level=info msg="Ensure that sandbox ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18 in task-service has been cleanup successfully" Mar 10 01:47:38.886562 containerd[1511]: time="2026-03-10T01:47:38.886212647Z" level=info msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" Mar 10 01:47:38.886562 containerd[1511]: time="2026-03-10T01:47:38.886497222Z" level=info msg="Ensure that sandbox e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a in task-service has been cleanup successfully" Mar 10 01:47:38.906980 kubelet[2643]: I0310 01:47:38.906926 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:38.909244 containerd[1511]: time="2026-03-10T01:47:38.909102233Z" level=info msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" Mar 10 01:47:38.909479 containerd[1511]: time="2026-03-10T01:47:38.909315016Z" level=info msg="Ensure that sandbox c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3 in task-service has been cleanup successfully" Mar 10 01:47:38.956911 containerd[1511]: time="2026-03-10T01:47:38.956676332Z" level=info msg="CreateContainer within sandbox \"9bda1ad541be584212fff31fc4b4c479ede79caa5c73aaa519a49d16d2304e8e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606\"" Mar 10 01:47:38.963562 containerd[1511]: time="2026-03-10T01:47:38.962941533Z" level=info msg="StartContainer for \"21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606\"" Mar 10 01:47:38.984461 containerd[1511]: 
time="2026-03-10T01:47:38.984379818Z" level=error msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" failed" error="failed to destroy network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:38.985120 kubelet[2643]: E0310 01:47:38.985047 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:38.986421 kubelet[2643]: E0310 01:47:38.985140 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422"} Mar 10 01:47:38.988616 kubelet[2643]: E0310 01:47:38.986444 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e988b93-f8c5-467f-9c4d-91992323f92f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:38.988616 kubelet[2643]: E0310 01:47:38.986487 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e988b93-f8c5-467f-9c4d-91992323f92f\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-nsjn4" podUID="6e988b93-f8c5-467f-9c4d-91992323f92f" Mar 10 01:47:39.046083 containerd[1511]: time="2026-03-10T01:47:39.046008069Z" level=error msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" failed" error="failed to destroy network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.046645 kubelet[2643]: E0310 01:47:39.046600 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:39.046759 kubelet[2643]: E0310 01:47:39.046660 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18"} Mar 10 01:47:39.046759 kubelet[2643]: E0310 01:47:39.046707 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.046759 kubelet[2643]: E0310 01:47:39.046742 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d978ccb84-5rk88" podUID="ed5442cf-d7ee-42ff-87df-7d6e6ec79b47" Mar 10 01:47:39.049110 containerd[1511]: time="2026-03-10T01:47:39.048979301Z" level=error msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" failed" error="failed to destroy network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.049587 kubelet[2643]: E0310 01:47:39.049457 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:39.049811 kubelet[2643]: E0310 01:47:39.049757 2643 
kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a"} Mar 10 01:47:39.049908 kubelet[2643]: E0310 01:47:39.049821 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9cd8a000-7827-4762-8257-bcda3540a9d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.049908 kubelet[2643]: E0310 01:47:39.049869 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9cd8a000-7827-4762-8257-bcda3540a9d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c65d7f56b-799kz" podUID="9cd8a000-7827-4762-8257-bcda3540a9d8" Mar 10 01:47:39.059713 containerd[1511]: time="2026-03-10T01:47:39.059520129Z" level=error msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" failed" error="failed to destroy network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.060111 kubelet[2643]: E0310 01:47:39.059886 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:39.060111 kubelet[2643]: E0310 01:47:39.059956 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"} Mar 10 01:47:39.060668 kubelet[2643]: E0310 01:47:39.059998 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab365807-38ad-4172-ae3a-3060d423ffa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.060668 kubelet[2643]: E0310 01:47:39.060313 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab365807-38ad-4172-ae3a-3060d423ffa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-s4jtv" podUID="ab365807-38ad-4172-ae3a-3060d423ffa4" Mar 10 01:47:39.073844 containerd[1511]: time="2026-03-10T01:47:39.073783157Z" level=error msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" 
failed" error="failed to destroy network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.074664 containerd[1511]: time="2026-03-10T01:47:39.074494298Z" level=error msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" failed" error="failed to destroy network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.075789 kubelet[2643]: E0310 01:47:39.075717 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:47:39.075907 kubelet[2643]: E0310 01:47:39.075810 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8"} Mar 10 01:47:39.075907 kubelet[2643]: E0310 01:47:39.075853 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.075907 kubelet[2643]: E0310 01:47:39.075889 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-gcmtp" podUID="940d2b11-362a-4d5f-8ab9-dd1a5a2dba04" Mar 10 01:47:39.076412 kubelet[2643]: E0310 01:47:39.075952 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Mar 10 01:47:39.076412 kubelet[2643]: E0310 01:47:39.075984 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"} Mar 10 01:47:39.076412 kubelet[2643]: E0310 01:47:39.076018 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"210382ca-0415-41bb-ab2d-47423643647d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.076412 kubelet[2643]: E0310 01:47:39.076054 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"210382ca-0415-41bb-ab2d-47423643647d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d978ccb84-c5q29" podUID="210382ca-0415-41bb-ab2d-47423643647d" Mar 10 01:47:39.093426 containerd[1511]: time="2026-03-10T01:47:39.093371096Z" level=error msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" failed" error="failed to destroy network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.093989 kubelet[2643]: E0310 01:47:39.093598 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:39.093989 kubelet[2643]: E0310 01:47:39.093658 2643 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3"} Mar 10 01:47:39.093989 
kubelet[2643]: E0310 01:47:39.093697 2643 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c23487f-710c-4788-9a24-3cedb377bd4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:47:39.093989 kubelet[2643]: E0310 01:47:39.093731 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c23487f-710c-4788-9a24-3cedb377bd4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" podUID="2c23487f-710c-4788-9a24-3cedb377bd4f" Mar 10 01:47:39.094778 systemd[1]: Started cri-containerd-21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606.scope - libcontainer container 21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606. Mar 10 01:47:39.157262 containerd[1511]: time="2026-03-10T01:47:39.157208285Z" level=info msg="StartContainer for \"21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606\" returns successfully" Mar 10 01:47:39.505394 systemd[1]: Created slice kubepods-besteffort-pod4dccb67c_d463_4240_ae02_985ee84c0680.slice - libcontainer container kubepods-besteffort-pod4dccb67c_d463_4240_ae02_985ee84c0680.slice. 
Mar 10 01:47:39.523273 containerd[1511]: time="2026-03-10T01:47:39.523216062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlwxn,Uid:4dccb67c-d463-4240-ae02-985ee84c0680,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:39.640416 containerd[1511]: time="2026-03-10T01:47:39.640333178Z" level=error msg="Failed to destroy network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.642169 containerd[1511]: time="2026-03-10T01:47:39.641859243Z" level=error msg="encountered an error cleaning up failed sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.642446 containerd[1511]: time="2026-03-10T01:47:39.642047988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlwxn,Uid:4dccb67c-d463-4240-ae02-985ee84c0680,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.643027 kubelet[2643]: E0310 01:47:39.642956 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:47:39.643230 kubelet[2643]: E0310 01:47:39.643043 2643 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:39.643230 kubelet[2643]: E0310 01:47:39.643073 2643 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlwxn" Mar 10 01:47:39.643669 kubelet[2643]: E0310 01:47:39.643212 2643 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlwxn_calico-system(4dccb67c-d463-4240-ae02-985ee84c0680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlwxn_calico-system(4dccb67c-d463-4240-ae02-985ee84c0680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlwxn" podUID="4dccb67c-d463-4240-ae02-985ee84c0680" Mar 10 01:47:39.751840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986551243.mount: Deactivated successfully. 
Mar 10 01:47:39.923594 kubelet[2643]: I0310 01:47:39.921993 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:39.924417 containerd[1511]: time="2026-03-10T01:47:39.924331721Z" level=info msg="StopPodSandbox for \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\"" Mar 10 01:47:39.927956 containerd[1511]: time="2026-03-10T01:47:39.927922749Z" level=info msg="Ensure that sandbox e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10 in task-service has been cleanup successfully" Mar 10 01:47:39.929895 containerd[1511]: time="2026-03-10T01:47:39.929843587Z" level=info msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" Mar 10 01:47:39.954780 kubelet[2643]: I0310 01:47:39.952680 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-94n98" podStartSLOduration=1.727395281 podStartE2EDuration="27.952646624s" podCreationTimestamp="2026-03-10 01:47:12 +0000 UTC" firstStartedPulling="2026-03-10 01:47:12.590651839 +0000 UTC m=+22.321450889" lastFinishedPulling="2026-03-10 01:47:38.815903172 +0000 UTC m=+48.546702232" observedRunningTime="2026-03-10 01:47:39.945369251 +0000 UTC m=+49.676168303" watchObservedRunningTime="2026-03-10 01:47:39.952646624 +0000 UTC m=+49.683445683" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.080 [INFO][3919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.082 [INFO][3919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" iface="eth0" netns="/var/run/netns/cni-0260f95b-9036-b84f-1f01-90ce42f398e4" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.083 [INFO][3919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" iface="eth0" netns="/var/run/netns/cni-0260f95b-9036-b84f-1f01-90ce42f398e4" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.089 [INFO][3919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" iface="eth0" netns="/var/run/netns/cni-0260f95b-9036-b84f-1f01-90ce42f398e4" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.089 [INFO][3919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.089 [INFO][3919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.185 [INFO][3946] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.185 [INFO][3946] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.185 [INFO][3946] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.203 [WARNING][3946] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.203 [INFO][3946] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.208 [INFO][3946] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:40.217859 containerd[1511]: 2026-03-10 01:47:40.214 [INFO][3919] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:40.218497 containerd[1511]: time="2026-03-10T01:47:40.217968713Z" level=info msg="TearDown network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" successfully" Mar 10 01:47:40.218497 containerd[1511]: time="2026-03-10T01:47:40.218004124Z" level=info msg="StopPodSandbox for \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" returns successfully" Mar 10 01:47:40.223764 systemd[1]: run-netns-cni\x2d0260f95b\x2d9036\x2db84f\x2d1f01\x2d90ce42f398e4.mount: Deactivated successfully. 
Mar 10 01:47:40.249912 containerd[1511]: time="2026-03-10T01:47:40.249860415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlwxn,Uid:4dccb67c-d463-4240-ae02-985ee84c0680,Namespace:calico-system,Attempt:1,}" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.125 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.129 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" iface="eth0" netns="/var/run/netns/cni-73993d54-ffc8-a737-ac9c-1be3af25840f" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.129 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" iface="eth0" netns="/var/run/netns/cni-73993d54-ffc8-a737-ac9c-1be3af25840f" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.130 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" iface="eth0" netns="/var/run/netns/cni-73993d54-ffc8-a737-ac9c-1be3af25840f" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.130 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.130 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.236 [INFO][3966] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.236 [INFO][3966] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.237 [INFO][3966] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.255 [WARNING][3966] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.255 [INFO][3966] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.258 [INFO][3966] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:40.265334 containerd[1511]: 2026-03-10 01:47:40.262 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:40.268837 containerd[1511]: time="2026-03-10T01:47:40.267254861Z" level=info msg="TearDown network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" successfully" Mar 10 01:47:40.268837 containerd[1511]: time="2026-03-10T01:47:40.267305885Z" level=info msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" returns successfully" Mar 10 01:47:40.271213 systemd[1]: run-netns-cni\x2d73993d54\x2dffc8\x2da737\x2dac9c\x2d1be3af25840f.mount: Deactivated successfully. 
Mar 10 01:47:40.449744 kubelet[2643]: I0310 01:47:40.449568 2643 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-backend-key-pair\") pod \"9cd8a000-7827-4762-8257-bcda3540a9d8\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " Mar 10 01:47:40.458520 kubelet[2643]: I0310 01:47:40.458482 2643 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-nginx-config\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-nginx-config\") pod \"9cd8a000-7827-4762-8257-bcda3540a9d8\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " Mar 10 01:47:40.458774 kubelet[2643]: I0310 01:47:40.458749 2643 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9cd8a000-7827-4762-8257-bcda3540a9d8-kube-api-access-rtt27\" (UniqueName: \"kubernetes.io/projected/9cd8a000-7827-4762-8257-bcda3540a9d8-kube-api-access-rtt27\") pod \"9cd8a000-7827-4762-8257-bcda3540a9d8\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " Mar 10 01:47:40.458913 kubelet[2643]: I0310 01:47:40.458891 2643 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-ca-bundle\") pod \"9cd8a000-7827-4762-8257-bcda3540a9d8\" (UID: \"9cd8a000-7827-4762-8257-bcda3540a9d8\") " Mar 10 01:47:40.459542 kubelet[2643]: I0310 01:47:40.459494 2643 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-backend-key-pair" pod "9cd8a000-7827-4762-8257-bcda3540a9d8" (UID: "9cd8a000-7827-4762-8257-bcda3540a9d8"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:47:40.461892 kubelet[2643]: I0310 01:47:40.461864 2643 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-ca-bundle" pod "9cd8a000-7827-4762-8257-bcda3540a9d8" (UID: "9cd8a000-7827-4762-8257-bcda3540a9d8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:47:40.462063 kubelet[2643]: I0310 01:47:40.462030 2643 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-nginx-config" pod "9cd8a000-7827-4762-8257-bcda3540a9d8" (UID: "9cd8a000-7827-4762-8257-bcda3540a9d8"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:47:40.465881 kubelet[2643]: I0310 01:47:40.465818 2643 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd8a000-7827-4762-8257-bcda3540a9d8-kube-api-access-rtt27" pod "9cd8a000-7827-4762-8257-bcda3540a9d8" (UID: "9cd8a000-7827-4762-8257-bcda3540a9d8"). InnerVolumeSpecName "kube-api-access-rtt27". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:47:40.473614 systemd-networkd[1427]: calie2f3959bc67: Link UP Mar 10 01:47:40.476311 systemd-networkd[1427]: calie2f3959bc67: Gained carrier Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.325 [ERROR][3991] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.347 [INFO][3991] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0 csi-node-driver- calico-system 4dccb67c-d463-4240-ae02-985ee84c0680 914 0 2026-03-10 01:47:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com csi-node-driver-vlwxn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2f3959bc67 [] [] }} ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.347 [INFO][3991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.388 [INFO][4003] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" HandleID="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.405 [INFO][4003] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" HandleID="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"csi-node-driver-vlwxn", "timestamp":"2026-03-10 01:47:40.388464476 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002de420)} Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.405 [INFO][4003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.405 [INFO][4003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.405 [INFO][4003] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.409 [INFO][4003] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.417 [INFO][4003] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.422 [INFO][4003] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.425 [INFO][4003] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.427 [INFO][4003] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.428 [INFO][4003] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.430 [INFO][4003] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.435 [INFO][4003] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.441 [INFO][4003] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.129/26] block=192.168.13.128/26 handle="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.441 [INFO][4003] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.129/26] handle="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.441 [INFO][4003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:40.500861 containerd[1511]: 2026-03-10 01:47:40.441 [INFO][4003] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.129/26] IPv6=[] ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" HandleID="k8s-pod-network.46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.443 [INFO][3991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4dccb67c-d463-4240-ae02-985ee84c0680", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-vlwxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2f3959bc67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.444 [INFO][3991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.129/32] ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.444 [INFO][3991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2f3959bc67 ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.475 [INFO][3991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" 
Mar 10 01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.478 [INFO][3991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4dccb67c-d463-4240-ae02-985ee84c0680", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f", Pod:"csi-node-driver-vlwxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2f3959bc67", MAC:"8e:c9:cd:26:ed:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 
01:47:40.510006 containerd[1511]: 2026-03-10 01:47:40.495 [INFO][3991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f" Namespace="calico-system" Pod="csi-node-driver-vlwxn" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:40.524757 systemd[1]: Removed slice kubepods-besteffort-pod9cd8a000_7827_4762_8257_bcda3540a9d8.slice - libcontainer container kubepods-besteffort-pod9cd8a000_7827_4762_8257_bcda3540a9d8.slice. Mar 10 01:47:40.560190 kubelet[2643]: I0310 01:47:40.559817 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-ca-bundle\") on node \"srv-p0r5l.gb1.brightbox.com\" DevicePath \"\"" Mar 10 01:47:40.560190 kubelet[2643]: I0310 01:47:40.559880 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9cd8a000-7827-4762-8257-bcda3540a9d8-whisker-backend-key-pair\") on node \"srv-p0r5l.gb1.brightbox.com\" DevicePath \"\"" Mar 10 01:47:40.560190 kubelet[2643]: I0310 01:47:40.559907 2643 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9cd8a000-7827-4762-8257-bcda3540a9d8-nginx-config\") on node \"srv-p0r5l.gb1.brightbox.com\" DevicePath \"\"" Mar 10 01:47:40.560190 kubelet[2643]: I0310 01:47:40.559922 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtt27\" (UniqueName: \"kubernetes.io/projected/9cd8a000-7827-4762-8257-bcda3540a9d8-kube-api-access-rtt27\") on node \"srv-p0r5l.gb1.brightbox.com\" DevicePath \"\"" Mar 10 01:47:40.587556 containerd[1511]: time="2026-03-10T01:47:40.587352103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:40.587556 containerd[1511]: time="2026-03-10T01:47:40.587473560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:40.587556 containerd[1511]: time="2026-03-10T01:47:40.587496477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:40.587927 containerd[1511]: time="2026-03-10T01:47:40.587659395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:40.619771 systemd[1]: Started cri-containerd-46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f.scope - libcontainer container 46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f. Mar 10 01:47:40.661750 containerd[1511]: time="2026-03-10T01:47:40.661305486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlwxn,Uid:4dccb67c-d463-4240-ae02-985ee84c0680,Namespace:calico-system,Attempt:1,} returns sandbox id \"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f\"" Mar 10 01:47:40.690393 containerd[1511]: time="2026-03-10T01:47:40.690058031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 01:47:40.749882 systemd[1]: var-lib-kubelet-pods-9cd8a000\x2d7827\x2d4762\x2d8257\x2dbcda3540a9d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtt27.mount: Deactivated successfully. Mar 10 01:47:40.750022 systemd[1]: var-lib-kubelet-pods-9cd8a000\x2d7827\x2d4762\x2d8257\x2dbcda3540a9d8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 10 01:47:41.052571 systemd[1]: Created slice kubepods-besteffort-pod8ac2d4d8_d3b9_4ee1_a763_10e00eeb926c.slice - libcontainer container kubepods-besteffort-pod8ac2d4d8_d3b9_4ee1_a763_10e00eeb926c.slice. 
Mar 10 01:47:41.163425 kubelet[2643]: I0310 01:47:41.163369 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c-nginx-config\") pod \"whisker-c4fffccf9-rdxx6\" (UID: \"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c\") " pod="calico-system/whisker-c4fffccf9-rdxx6" Mar 10 01:47:41.164840 kubelet[2643]: I0310 01:47:41.164604 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptc6k\" (UniqueName: \"kubernetes.io/projected/8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c-kube-api-access-ptc6k\") pod \"whisker-c4fffccf9-rdxx6\" (UID: \"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c\") " pod="calico-system/whisker-c4fffccf9-rdxx6" Mar 10 01:47:41.164840 kubelet[2643]: I0310 01:47:41.164761 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c-whisker-backend-key-pair\") pod \"whisker-c4fffccf9-rdxx6\" (UID: \"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c\") " pod="calico-system/whisker-c4fffccf9-rdxx6" Mar 10 01:47:41.165460 kubelet[2643]: I0310 01:47:41.164838 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c-whisker-ca-bundle\") pod \"whisker-c4fffccf9-rdxx6\" (UID: \"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c\") " pod="calico-system/whisker-c4fffccf9-rdxx6" Mar 10 01:47:41.369820 containerd[1511]: time="2026-03-10T01:47:41.369420900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4fffccf9-rdxx6,Uid:8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c,Namespace:calico-system,Attempt:0,}" Mar 10 01:47:41.602846 systemd-networkd[1427]: calic19cadc583f: Link UP Mar 10 01:47:41.611816 systemd-networkd[1427]: 
calic19cadc583f: Gained carrier Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.451 [ERROR][4149] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.473 [INFO][4149] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0 whisker-c4fffccf9- calico-system 8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c 937 0 2026-03-10 01:47:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c4fffccf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com whisker-c4fffccf9-rdxx6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic19cadc583f [] [] }} ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.473 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.534 [INFO][4188] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" HandleID="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.671809 
containerd[1511]: 2026-03-10 01:47:41.550 [INFO][4188] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" HandleID="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000412080), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"whisker-c4fffccf9-rdxx6", "timestamp":"2026-03-10 01:47:41.53464731 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000d4f20)} Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.551 [INFO][4188] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.551 [INFO][4188] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.551 [INFO][4188] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.555 [INFO][4188] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.561 [INFO][4188] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.572 [INFO][4188] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.575 [INFO][4188] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.578 [INFO][4188] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.578 [INFO][4188] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.580 [INFO][4188] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44 Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.588 [INFO][4188] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.595 [INFO][4188] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.130/26] block=192.168.13.128/26 handle="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.595 [INFO][4188] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.130/26] handle="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.595 [INFO][4188] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:41.671809 containerd[1511]: 2026-03-10 01:47:41.595 [INFO][4188] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.130/26] IPv6=[] ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" HandleID="k8s-pod-network.7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.598 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0", GenerateName:"whisker-c4fffccf9-", Namespace:"calico-system", SelfLink:"", UID:"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c4fffccf9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"whisker-c4fffccf9-rdxx6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic19cadc583f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.598 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.130/32] ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.598 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic19cadc583f ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.610 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.617 [INFO][4149] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0", GenerateName:"whisker-c4fffccf9-", Namespace:"calico-system", SelfLink:"", UID:"8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c4fffccf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44", Pod:"whisker-c4fffccf9-rdxx6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic19cadc583f", MAC:"9e:6a:44:d2:e4:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:41.672834 containerd[1511]: 2026-03-10 01:47:41.644 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44" 
Namespace="calico-system" Pod="whisker-c4fffccf9-rdxx6" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c4fffccf9--rdxx6-eth0" Mar 10 01:47:41.840634 containerd[1511]: time="2026-03-10T01:47:41.839453786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:41.840634 containerd[1511]: time="2026-03-10T01:47:41.839722450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:41.840634 containerd[1511]: time="2026-03-10T01:47:41.839806710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:41.843141 containerd[1511]: time="2026-03-10T01:47:41.841333899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:41.895773 systemd[1]: Started cri-containerd-7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44.scope - libcontainer container 7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44. 
Mar 10 01:47:41.970784 systemd-networkd[1427]: calie2f3959bc67: Gained IPv6LL Mar 10 01:47:42.147543 containerd[1511]: time="2026-03-10T01:47:42.147254230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4fffccf9-rdxx6,Uid:8ac2d4d8-d3b9-4ee1-a763-10e00eeb926c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44\"" Mar 10 01:47:42.360597 kernel: calico-node[4124]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 10 01:47:42.739349 systemd-networkd[1427]: calic19cadc583f: Gained IPv6LL Mar 10 01:47:42.766550 kubelet[2643]: I0310 01:47:42.758861 2643 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9cd8a000-7827-4762-8257-bcda3540a9d8" path="/var/lib/kubelet/pods/9cd8a000-7827-4762-8257-bcda3540a9d8/volumes" Mar 10 01:47:42.819949 containerd[1511]: time="2026-03-10T01:47:42.819871002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 10 01:47:42.895576 containerd[1511]: time="2026-03-10T01:47:42.892975488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.20285317s" Mar 10 01:47:42.895576 containerd[1511]: time="2026-03-10T01:47:42.893075525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 10 01:47:42.916899 containerd[1511]: time="2026-03-10T01:47:42.916626704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 01:47:42.957282 containerd[1511]: time="2026-03-10T01:47:42.956149247Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:42.957886 containerd[1511]: time="2026-03-10T01:47:42.957353798Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:42.958552 containerd[1511]: time="2026-03-10T01:47:42.958445306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:42.991763 containerd[1511]: time="2026-03-10T01:47:42.981024164Z" level=info msg="CreateContainer within sandbox \"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 01:47:43.185825 containerd[1511]: time="2026-03-10T01:47:43.185762308Z" level=info msg="CreateContainer within sandbox \"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6cb44a1122a8968ff88ab7ee79e887e30b7dbd6b6d753dc66d18378589712f9e\"" Mar 10 01:47:43.188829 containerd[1511]: time="2026-03-10T01:47:43.188787396Z" level=info msg="StartContainer for \"6cb44a1122a8968ff88ab7ee79e887e30b7dbd6b6d753dc66d18378589712f9e\"" Mar 10 01:47:43.545303 systemd[1]: Started cri-containerd-6cb44a1122a8968ff88ab7ee79e887e30b7dbd6b6d753dc66d18378589712f9e.scope - libcontainer container 6cb44a1122a8968ff88ab7ee79e887e30b7dbd6b6d753dc66d18378589712f9e. 
Mar 10 01:47:43.688315 containerd[1511]: time="2026-03-10T01:47:43.687846332Z" level=info msg="StartContainer for \"6cb44a1122a8968ff88ab7ee79e887e30b7dbd6b6d753dc66d18378589712f9e\" returns successfully" Mar 10 01:47:43.968446 systemd-networkd[1427]: vxlan.calico: Link UP Mar 10 01:47:43.968459 systemd-networkd[1427]: vxlan.calico: Gained carrier Mar 10 01:47:44.879153 containerd[1511]: time="2026-03-10T01:47:44.878170539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:44.882956 containerd[1511]: time="2026-03-10T01:47:44.882921741Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:44.884580 containerd[1511]: time="2026-03-10T01:47:44.884512053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 10 01:47:44.890888 containerd[1511]: time="2026-03-10T01:47:44.890852387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:44.893792 containerd[1511]: time="2026-03-10T01:47:44.893756625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.977064948s" Mar 10 01:47:44.893939 containerd[1511]: time="2026-03-10T01:47:44.893909710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 10 01:47:44.923026 containerd[1511]: time="2026-03-10T01:47:44.922977306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 10 01:47:44.941927 containerd[1511]: time="2026-03-10T01:47:44.941365374Z" level=info msg="CreateContainer within sandbox \"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 01:47:44.992554 containerd[1511]: time="2026-03-10T01:47:44.992474937Z" level=info msg="CreateContainer within sandbox \"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"47c3c1de533371997cc6f1b678ca51d5b7ca4771e21aa0a3aa90275b69a1f7e9\"" Mar 10 01:47:44.994340 containerd[1511]: time="2026-03-10T01:47:44.994306060Z" level=info msg="StartContainer for \"47c3c1de533371997cc6f1b678ca51d5b7ca4771e21aa0a3aa90275b69a1f7e9\"" Mar 10 01:47:45.097720 systemd[1]: Started cri-containerd-47c3c1de533371997cc6f1b678ca51d5b7ca4771e21aa0a3aa90275b69a1f7e9.scope - libcontainer container 47c3c1de533371997cc6f1b678ca51d5b7ca4771e21aa0a3aa90275b69a1f7e9. 
Mar 10 01:47:45.157843 containerd[1511]: time="2026-03-10T01:47:45.157595546Z" level=info msg="StartContainer for \"47c3c1de533371997cc6f1b678ca51d5b7ca4771e21aa0a3aa90275b69a1f7e9\" returns successfully" Mar 10 01:47:45.554966 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Mar 10 01:47:46.983547 containerd[1511]: time="2026-03-10T01:47:46.982254022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:46.983547 containerd[1511]: time="2026-03-10T01:47:46.983422823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 10 01:47:46.984429 containerd[1511]: time="2026-03-10T01:47:46.984029523Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:46.986811 containerd[1511]: time="2026-03-10T01:47:46.986752202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:46.988497 containerd[1511]: time="2026-03-10T01:47:46.987933711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.06470742s" Mar 10 01:47:46.988497 containerd[1511]: time="2026-03-10T01:47:46.987990172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 10 01:47:47.000192 containerd[1511]: time="2026-03-10T01:47:47.000141058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 01:47:47.004415 containerd[1511]: time="2026-03-10T01:47:47.003913728Z" level=info msg="CreateContainer within sandbox \"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 01:47:47.025574 containerd[1511]: time="2026-03-10T01:47:47.023789009Z" level=info msg="CreateContainer within sandbox \"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a\"" Mar 10 01:47:47.027484 containerd[1511]: time="2026-03-10T01:47:47.027093623Z" level=info msg="StartContainer for \"a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a\"" Mar 10 01:47:47.088781 systemd[1]: run-containerd-runc-k8s.io-a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a-runc.4TLfjO.mount: Deactivated successfully. Mar 10 01:47:47.094762 systemd[1]: Started cri-containerd-a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a.scope - libcontainer container a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a. 
Mar 10 01:47:47.141310 containerd[1511]: time="2026-03-10T01:47:47.141253848Z" level=info msg="StartContainer for \"a945eb8fef767f755d9eb5e4089bab381b568ffd84711484f4f1f32833c2777a\" returns successfully" Mar 10 01:47:47.844989 kubelet[2643]: I0310 01:47:47.844776 2643 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 01:47:47.844989 kubelet[2643]: I0310 01:47:47.844894 2643 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 01:47:48.086776 kubelet[2643]: I0310 01:47:48.086654 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-vlwxn" podStartSLOduration=29.784625801 podStartE2EDuration="36.085245664s" podCreationTimestamp="2026-03-10 01:47:12 +0000 UTC" firstStartedPulling="2026-03-10 01:47:40.689460341 +0000 UTC m=+50.420259387" lastFinishedPulling="2026-03-10 01:47:46.990080172 +0000 UTC m=+56.720879250" observedRunningTime="2026-03-10 01:47:48.079778385 +0000 UTC m=+57.810577451" watchObservedRunningTime="2026-03-10 01:47:48.085245664 +0000 UTC m=+57.816044716" Mar 10 01:47:49.210675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3976703906.mount: Deactivated successfully. 
Mar 10 01:47:49.235490 containerd[1511]: time="2026-03-10T01:47:49.235429612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:49.236748 containerd[1511]: time="2026-03-10T01:47:49.236688561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 10 01:47:49.237582 containerd[1511]: time="2026-03-10T01:47:49.237286840Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:49.240644 containerd[1511]: time="2026-03-10T01:47:49.240581141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:47:49.243708 containerd[1511]: time="2026-03-10T01:47:49.243297627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.243101528s" Mar 10 01:47:49.243708 containerd[1511]: time="2026-03-10T01:47:49.243355372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 10 01:47:49.251435 containerd[1511]: time="2026-03-10T01:47:49.251038942Z" level=info msg="CreateContainer within sandbox \"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 01:47:49.267585 
containerd[1511]: time="2026-03-10T01:47:49.267508156Z" level=info msg="CreateContainer within sandbox \"7742f90723601475aac2464a79d8965584a5fdf5a92d86a3040d3e9ebdf0fa44\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0befee2795a440d0288a382786b387b2250e6451acb223c2230f3815cd03f71c\"" Mar 10 01:47:49.269007 containerd[1511]: time="2026-03-10T01:47:49.268957564Z" level=info msg="StartContainer for \"0befee2795a440d0288a382786b387b2250e6451acb223c2230f3815cd03f71c\"" Mar 10 01:47:49.320770 systemd[1]: Started cri-containerd-0befee2795a440d0288a382786b387b2250e6451acb223c2230f3815cd03f71c.scope - libcontainer container 0befee2795a440d0288a382786b387b2250e6451acb223c2230f3815cd03f71c. Mar 10 01:47:49.385994 containerd[1511]: time="2026-03-10T01:47:49.385917903Z" level=info msg="StartContainer for \"0befee2795a440d0288a382786b387b2250e6451acb223c2230f3815cd03f71c\" returns successfully" Mar 10 01:47:49.495421 containerd[1511]: time="2026-03-10T01:47:49.495044118Z" level=info msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" Mar 10 01:47:49.497564 containerd[1511]: time="2026-03-10T01:47:49.496924991Z" level=info msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.745 [INFO][4563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.748 [INFO][4563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" iface="eth0" netns="/var/run/netns/cni-20e0f8ed-4e5a-837c-caf7-3f468122437c" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.749 [INFO][4563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" iface="eth0" netns="/var/run/netns/cni-20e0f8ed-4e5a-837c-caf7-3f468122437c" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.753 [INFO][4563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" iface="eth0" netns="/var/run/netns/cni-20e0f8ed-4e5a-837c-caf7-3f468122437c" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.753 [INFO][4563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.753 [INFO][4563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.918 [INFO][4588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.919 [INFO][4588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.920 [INFO][4588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.932 [WARNING][4588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.932 [INFO][4588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.934 [INFO][4588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:49.939681 containerd[1511]: 2026-03-10 01:47:49.936 [INFO][4563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:47:49.944864 containerd[1511]: time="2026-03-10T01:47:49.943746361Z" level=info msg="TearDown network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" successfully" Mar 10 01:47:49.944864 containerd[1511]: time="2026-03-10T01:47:49.943787484Z" level=info msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" returns successfully" Mar 10 01:47:49.946434 systemd[1]: run-netns-cni\x2d20e0f8ed\x2d4e5a\x2d837c\x2dcaf7\x2d3f468122437c.mount: Deactivated successfully. 
Mar 10 01:47:49.953652 containerd[1511]: time="2026-03-10T01:47:49.953599230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b78bbc6cb-6czx4,Uid:2c23487f-710c-4788-9a24-3cedb377bd4f,Namespace:calico-system,Attempt:1,}" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.752 [INFO][4564] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.754 [INFO][4564] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" iface="eth0" netns="/var/run/netns/cni-c13a9d5c-b99c-482b-1b35-421db37759a5" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.755 [INFO][4564] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" iface="eth0" netns="/var/run/netns/cni-c13a9d5c-b99c-482b-1b35-421db37759a5" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.756 [INFO][4564] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" iface="eth0" netns="/var/run/netns/cni-c13a9d5c-b99c-482b-1b35-421db37759a5" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.756 [INFO][4564] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.756 [INFO][4564] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.918 [INFO][4590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.920 [INFO][4590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.934 [INFO][4590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.950 [WARNING][4590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.950 [INFO][4590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.954 [INFO][4590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:49.962846 containerd[1511]: 2026-03-10 01:47:49.958 [INFO][4564] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:47:49.962846 containerd[1511]: time="2026-03-10T01:47:49.961659387Z" level=info msg="TearDown network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" successfully" Mar 10 01:47:49.962846 containerd[1511]: time="2026-03-10T01:47:49.961686217Z" level=info msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" returns successfully" Mar 10 01:47:49.967114 containerd[1511]: time="2026-03-10T01:47:49.967073669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-5rk88,Uid:ed5442cf-d7ee-42ff-87df-7d6e6ec79b47,Namespace:calico-system,Attempt:1,}" Mar 10 01:47:49.970000 systemd[1]: run-netns-cni\x2dc13a9d5c\x2db99c\x2d482b\x2d1b35\x2d421db37759a5.mount: Deactivated successfully. 
Mar 10 01:47:50.260370 systemd-networkd[1427]: cali6fda48ee21b: Link UP Mar 10 01:47:50.263124 systemd-networkd[1427]: cali6fda48ee21b: Gained carrier Mar 10 01:47:50.286393 kubelet[2643]: I0310 01:47:50.285828 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-c4fffccf9-rdxx6" podStartSLOduration=2.192706451 podStartE2EDuration="9.28580685s" podCreationTimestamp="2026-03-10 01:47:41 +0000 UTC" firstStartedPulling="2026-03-10 01:47:42.152350562 +0000 UTC m=+51.883149609" lastFinishedPulling="2026-03-10 01:47:49.245450961 +0000 UTC m=+58.976250008" observedRunningTime="2026-03-10 01:47:50.113562255 +0000 UTC m=+59.844361319" watchObservedRunningTime="2026-03-10 01:47:50.28580685 +0000 UTC m=+60.016605910" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.062 [INFO][4610] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0 calico-kube-controllers-5b78bbc6cb- calico-system 2c23487f-710c-4788-9a24-3cedb377bd4f 982 0 2026-03-10 01:47:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b78bbc6cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com calico-kube-controllers-5b78bbc6cb-6czx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6fda48ee21b [] [] }} ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.064 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.178 [INFO][4626] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" HandleID="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.198 [INFO][4626] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" HandleID="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"calico-kube-controllers-5b78bbc6cb-6czx4", "timestamp":"2026-03-10 01:47:50.178820963 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c0c60)} Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.198 [INFO][4626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.198 [INFO][4626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.198 [INFO][4626] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.207 [INFO][4626] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.217 [INFO][4626] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.223 [INFO][4626] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.226 [INFO][4626] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.229 [INFO][4626] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.229 [INFO][4626] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.231 [INFO][4626] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.237 [INFO][4626] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.246 [INFO][4626] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.131/26] block=192.168.13.128/26 handle="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.247 [INFO][4626] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.131/26] handle="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.247 [INFO][4626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:50.298887 containerd[1511]: 2026-03-10 01:47:50.248 [INFO][4626] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.131/26] IPv6=[] ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" HandleID="k8s-pod-network.509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.252 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0", GenerateName:"calico-kube-controllers-5b78bbc6cb-", Namespace:"calico-system", SelfLink:"", UID:"2c23487f-710c-4788-9a24-3cedb377bd4f", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b78bbc6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5b78bbc6cb-6czx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fda48ee21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.252 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.131/32] ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.252 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fda48ee21b ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.262 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.267 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0", GenerateName:"calico-kube-controllers-5b78bbc6cb-", Namespace:"calico-system", SelfLink:"", UID:"2c23487f-710c-4788-9a24-3cedb377bd4f", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b78bbc6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae", Pod:"calico-kube-controllers-5b78bbc6cb-6czx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fda48ee21b", MAC:"7e:3a:d6:b6:6f:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:50.301968 containerd[1511]: 2026-03-10 01:47:50.293 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae" Namespace="calico-system" Pod="calico-kube-controllers-5b78bbc6cb-6czx4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:47:50.351551 containerd[1511]: time="2026-03-10T01:47:50.350518799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:50.353559 containerd[1511]: time="2026-03-10T01:47:50.351199805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:50.353559 containerd[1511]: time="2026-03-10T01:47:50.351232153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:50.353559 containerd[1511]: time="2026-03-10T01:47:50.351904130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:50.404212 systemd-networkd[1427]: cali058c66febef: Link UP Mar 10 01:47:50.411017 systemd-networkd[1427]: cali058c66febef: Gained carrier Mar 10 01:47:50.417810 systemd[1]: Started cri-containerd-509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae.scope - libcontainer container 509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae. 
Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.073 [INFO][4601] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0 calico-apiserver-6d978ccb84- calico-system ed5442cf-d7ee-42ff-87df-7d6e6ec79b47 983 0 2026-03-10 01:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d978ccb84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com calico-apiserver-6d978ccb84-5rk88 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali058c66febef [] [] }} ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.075 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.188 [INFO][4631] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" HandleID="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.205 [INFO][4631] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" HandleID="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e650), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"calico-apiserver-6d978ccb84-5rk88", "timestamp":"2026-03-10 01:47:50.188859562 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040a000)} Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.206 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.247 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.249 [INFO][4631] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.305 [INFO][4631] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.321 [INFO][4631] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.329 [INFO][4631] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.335 [INFO][4631] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.341 [INFO][4631] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.341 [INFO][4631] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.351 [INFO][4631] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.358 [INFO][4631] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.375 [INFO][4631] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.132/26] block=192.168.13.128/26 handle="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.375 [INFO][4631] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.132/26] handle="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.375 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:50.455918 containerd[1511]: 2026-03-10 01:47:50.376 [INFO][4631] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.132/26] IPv6=[] ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" HandleID="k8s-pod-network.49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.389 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6d978ccb84-5rk88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali058c66febef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.389 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.132/32] ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.389 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali058c66febef ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.416 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" 
Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.419 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c", Pod:"calico-apiserver-6d978ccb84-5rk88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali058c66febef", 
MAC:"16:98:3b:0a:9a:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:50.458953 containerd[1511]: 2026-03-10 01:47:50.442 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-5rk88" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:47:50.519951 containerd[1511]: time="2026-03-10T01:47:50.519788943Z" level=info msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" Mar 10 01:47:50.580084 containerd[1511]: time="2026-03-10T01:47:50.578874410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:50.580084 containerd[1511]: time="2026-03-10T01:47:50.578968507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:50.580084 containerd[1511]: time="2026-03-10T01:47:50.578988509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:50.580084 containerd[1511]: time="2026-03-10T01:47:50.579104758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:50.635835 containerd[1511]: time="2026-03-10T01:47:50.634775924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b78bbc6cb-6czx4,Uid:2c23487f-710c-4788-9a24-3cedb377bd4f,Namespace:calico-system,Attempt:1,} returns sandbox id \"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae\"" Mar 10 01:47:50.641172 containerd[1511]: time="2026-03-10T01:47:50.641143083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 01:47:50.655739 systemd[1]: Started cri-containerd-49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c.scope - libcontainer container 49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c. Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.636 [WARNING][4716] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.638 [INFO][4716] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.638 [INFO][4716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" iface="eth0" netns="" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.638 [INFO][4716] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.638 [INFO][4716] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.706 [INFO][4749] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.707 [INFO][4749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.707 [INFO][4749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.736 [WARNING][4749] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.736 [INFO][4749] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.739 [INFO][4749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:50.742475 containerd[1511]: 2026-03-10 01:47:50.740 [INFO][4716] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.743244 containerd[1511]: time="2026-03-10T01:47:50.742637107Z" level=info msg="TearDown network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" successfully" Mar 10 01:47:50.743244 containerd[1511]: time="2026-03-10T01:47:50.742706777Z" level=info msg="StopPodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" returns successfully" Mar 10 01:47:50.750050 containerd[1511]: time="2026-03-10T01:47:50.749990392Z" level=info msg="RemovePodSandbox for \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" Mar 10 01:47:50.755950 containerd[1511]: time="2026-03-10T01:47:50.755434851Z" level=info msg="Forcibly stopping sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\"" Mar 10 01:47:50.793418 containerd[1511]: time="2026-03-10T01:47:50.791834960Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-5rk88,Uid:ed5442cf-d7ee-42ff-87df-7d6e6ec79b47,Namespace:calico-system,Attempt:1,} returns sandbox id \"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c\"" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.836 [WARNING][4779] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.837 [INFO][4779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.837 [INFO][4779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" iface="eth0" netns="" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.837 [INFO][4779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.837 [INFO][4779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.881 [INFO][4791] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.882 [INFO][4791] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.882 [INFO][4791] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.896 [WARNING][4791] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.897 [INFO][4791] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" HandleID="k8s-pod-network.e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Workload="srv--p0r5l.gb1.brightbox.com-k8s-whisker--c65d7f56b--799kz-eth0" Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.900 [INFO][4791] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:50.906734 containerd[1511]: 2026-03-10 01:47:50.904 [INFO][4779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a" Mar 10 01:47:50.907369 containerd[1511]: time="2026-03-10T01:47:50.906772366Z" level=info msg="TearDown network for sandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" successfully" Mar 10 01:47:50.915859 containerd[1511]: time="2026-03-10T01:47:50.915705863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:47:50.915859 containerd[1511]: time="2026-03-10T01:47:50.916084598Z" level=info msg="RemovePodSandbox \"e3f6a1b2812ab7d1185b9a1b6e8e7bc35bcbc51832e0015729769361b5545c1a\" returns successfully" Mar 10 01:47:50.917283 containerd[1511]: time="2026-03-10T01:47:50.917173283Z" level=info msg="StopPodSandbox for \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\"" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:50.977 [WARNING][4809] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4dccb67c-d463-4240-ae02-985ee84c0680", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f", Pod:"csi-node-driver-vlwxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2f3959bc67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:50.977 [INFO][4809] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:50.977 [INFO][4809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" iface="eth0" netns="" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:50.977 [INFO][4809] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:50.977 [INFO][4809] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.016 [INFO][4816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.016 [INFO][4816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.016 [INFO][4816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.026 [WARNING][4816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.027 [INFO][4816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.029 [INFO][4816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:51.033506 containerd[1511]: 2026-03-10 01:47:51.031 [INFO][4809] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.036121 containerd[1511]: time="2026-03-10T01:47:51.034644527Z" level=info msg="TearDown network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" successfully" Mar 10 01:47:51.036121 containerd[1511]: time="2026-03-10T01:47:51.034699120Z" level=info msg="StopPodSandbox for \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" returns successfully" Mar 10 01:47:51.036121 containerd[1511]: time="2026-03-10T01:47:51.035675073Z" level=info msg="RemovePodSandbox for \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\"" Mar 10 01:47:51.036121 containerd[1511]: time="2026-03-10T01:47:51.035733553Z" level=info msg="Forcibly stopping sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\"" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.099 [WARNING][4831] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4dccb67c-d463-4240-ae02-985ee84c0680", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"46bfa98640bdcbf016d3b879b851811417ea5910edc80cff7f92472c50e6533f", Pod:"csi-node-driver-vlwxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2f3959bc67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.102 [INFO][4831] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.102 [INFO][4831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" iface="eth0" netns="" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.102 [INFO][4831] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.102 [INFO][4831] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.143 [INFO][4838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.143 [INFO][4838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.143 [INFO][4838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.151 [WARNING][4838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.151 [INFO][4838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" HandleID="k8s-pod-network.e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Workload="srv--p0r5l.gb1.brightbox.com-k8s-csi--node--driver--vlwxn-eth0" Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.153 [INFO][4838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:51.157991 containerd[1511]: 2026-03-10 01:47:51.155 [INFO][4831] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10" Mar 10 01:47:51.157991 containerd[1511]: time="2026-03-10T01:47:51.157870933Z" level=info msg="TearDown network for sandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" successfully" Mar 10 01:47:51.161975 containerd[1511]: time="2026-03-10T01:47:51.161936011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:47:51.162086 containerd[1511]: time="2026-03-10T01:47:51.162004340Z" level=info msg="RemovePodSandbox \"e08f79a0a9f6955e211fac390866afde3a716a415cd173f55c0e64fd2ca41e10\" returns successfully" Mar 10 01:47:51.442798 systemd-networkd[1427]: cali058c66febef: Gained IPv6LL Mar 10 01:47:51.493203 containerd[1511]: time="2026-03-10T01:47:51.493143516Z" level=info msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\"" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.559 [INFO][4858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.559 [INFO][4858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" iface="eth0" netns="/var/run/netns/cni-ee657ea7-fb5f-a549-b893-16759224821a" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.560 [INFO][4858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" iface="eth0" netns="/var/run/netns/cni-ee657ea7-fb5f-a549-b893-16759224821a" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.560 [INFO][4858] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" iface="eth0" netns="/var/run/netns/cni-ee657ea7-fb5f-a549-b893-16759224821a" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.560 [INFO][4858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.560 [INFO][4858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.595 [INFO][4865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.595 [INFO][4865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.595 [INFO][4865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.606 [WARNING][4865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.606 [INFO][4865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.608 [INFO][4865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:51.611920 containerd[1511]: 2026-03-10 01:47:51.610 [INFO][4858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Mar 10 01:47:51.612998 containerd[1511]: time="2026-03-10T01:47:51.612173761Z" level=info msg="TearDown network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" successfully" Mar 10 01:47:51.612998 containerd[1511]: time="2026-03-10T01:47:51.612232261Z" level=info msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" returns successfully" Mar 10 01:47:51.617102 containerd[1511]: time="2026-03-10T01:47:51.617039799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4jtv,Uid:ab365807-38ad-4172-ae3a-3060d423ffa4,Namespace:kube-system,Attempt:1,}" Mar 10 01:47:51.617929 systemd[1]: run-netns-cni\x2dee657ea7\x2dfb5f\x2da549\x2db893\x2d16759224821a.mount: Deactivated successfully. 
Mar 10 01:47:51.814353 systemd-networkd[1427]: calibbe5febc3da: Link UP Mar 10 01:47:51.814982 systemd-networkd[1427]: calibbe5febc3da: Gained carrier Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.710 [INFO][4872] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0 coredns-7d764666f9- kube-system ab365807-38ad-4172-ae3a-3060d423ffa4 1001 0 2026-03-10 01:46:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com coredns-7d764666f9-s4jtv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbe5febc3da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.710 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.758 [INFO][4885] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" HandleID="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.768 [INFO][4885] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" HandleID="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276560), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"coredns-7d764666f9-s4jtv", "timestamp":"2026-03-10 01:47:51.75880873 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115080)} Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.768 [INFO][4885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.768 [INFO][4885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.768 [INFO][4885] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.771 [INFO][4885] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.778 [INFO][4885] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.784 [INFO][4885] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.787 [INFO][4885] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.791 [INFO][4885] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.791 [INFO][4885] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.794 [INFO][4885] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80 Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.799 [INFO][4885] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.807 [INFO][4885] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.133/26] block=192.168.13.128/26 handle="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.807 [INFO][4885] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.133/26] handle="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.807 [INFO][4885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:51.839559 containerd[1511]: 2026-03-10 01:47:51.807 [INFO][4885] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.133/26] IPv6=[] ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" HandleID="k8s-pod-network.251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.841574 containerd[1511]: 2026-03-10 01:47:51.810 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"ab365807-38ad-4172-ae3a-3060d423ffa4", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-s4jtv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5febc3da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:51.841574 containerd[1511]: 2026-03-10 01:47:51.810 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.133/32] ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.841574 containerd[1511]: 2026-03-10 01:47:51.810 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calibbe5febc3da ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.841574 containerd[1511]: 2026-03-10 01:47:51.816 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.841574 containerd[1511]: 2026-03-10 01:47:51.818 [INFO][4872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"ab365807-38ad-4172-ae3a-3060d423ffa4", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", 
ContainerID:"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80", Pod:"coredns-7d764666f9-s4jtv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5febc3da", MAC:"32:2d:58:2f:9f:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:51.843038 containerd[1511]: 2026-03-10 01:47:51.834 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80" Namespace="kube-system" Pod="coredns-7d764666f9-s4jtv" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0" Mar 10 01:47:51.881046 containerd[1511]: time="2026-03-10T01:47:51.880615304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:51.881046 containerd[1511]: time="2026-03-10T01:47:51.880746800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:51.881046 containerd[1511]: time="2026-03-10T01:47:51.880795255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:51.881046 containerd[1511]: time="2026-03-10T01:47:51.880968496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:51.924131 systemd[1]: Started cri-containerd-251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80.scope - libcontainer container 251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80. Mar 10 01:47:52.005121 containerd[1511]: time="2026-03-10T01:47:52.005051875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4jtv,Uid:ab365807-38ad-4172-ae3a-3060d423ffa4,Namespace:kube-system,Attempt:1,} returns sandbox id \"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80\"" Mar 10 01:47:52.015582 containerd[1511]: time="2026-03-10T01:47:52.015485712Z" level=info msg="CreateContainer within sandbox \"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:47:52.053913 containerd[1511]: time="2026-03-10T01:47:52.052738136Z" level=info msg="CreateContainer within sandbox \"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5021a48cc1138a0693e014e641a27b051868706747a6bdf7ed5a9a8c874a019b\"" Mar 10 01:47:52.057609 containerd[1511]: time="2026-03-10T01:47:52.055921144Z" level=info msg="StartContainer for \"5021a48cc1138a0693e014e641a27b051868706747a6bdf7ed5a9a8c874a019b\"" Mar 10 01:47:52.064910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628321894.mount: Deactivated successfully. 
Mar 10 01:47:52.100735 systemd[1]: Started cri-containerd-5021a48cc1138a0693e014e641a27b051868706747a6bdf7ed5a9a8c874a019b.scope - libcontainer container 5021a48cc1138a0693e014e641a27b051868706747a6bdf7ed5a9a8c874a019b. Mar 10 01:47:52.161989 containerd[1511]: time="2026-03-10T01:47:52.161863771Z" level=info msg="StartContainer for \"5021a48cc1138a0693e014e641a27b051868706747a6bdf7ed5a9a8c874a019b\" returns successfully" Mar 10 01:47:52.274948 systemd-networkd[1427]: cali6fda48ee21b: Gained IPv6LL Mar 10 01:47:52.513227 containerd[1511]: time="2026-03-10T01:47:52.512667536Z" level=info msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.616 [INFO][5005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.616 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" iface="eth0" netns="/var/run/netns/cni-aacd16ff-8e5b-706b-c2b1-204a0d5e0ef0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.617 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" iface="eth0" netns="/var/run/netns/cni-aacd16ff-8e5b-706b-c2b1-204a0d5e0ef0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.617 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" iface="eth0" netns="/var/run/netns/cni-aacd16ff-8e5b-706b-c2b1-204a0d5e0ef0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.617 [INFO][5005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.617 [INFO][5005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.654 [INFO][5014] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.654 [INFO][5014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.654 [INFO][5014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.664 [WARNING][5014] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.665 [INFO][5014] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.666 [INFO][5014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:47:52.670871 containerd[1511]: 2026-03-10 01:47:52.668 [INFO][5005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:47:52.672764 containerd[1511]: time="2026-03-10T01:47:52.671103411Z" level=info msg="TearDown network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" successfully" Mar 10 01:47:52.672764 containerd[1511]: time="2026-03-10T01:47:52.671150901Z" level=info msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" returns successfully" Mar 10 01:47:52.675098 containerd[1511]: time="2026-03-10T01:47:52.675067797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nsjn4,Uid:6e988b93-f8c5-467f-9c4d-91992323f92f,Namespace:kube-system,Attempt:1,}" Mar 10 01:47:52.855942 systemd-networkd[1427]: cali1ca89dde9ec: Link UP Mar 10 01:47:52.858832 systemd-networkd[1427]: cali1ca89dde9ec: Gained carrier Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.734 [INFO][5021] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0 coredns-7d764666f9- kube-system 6e988b93-f8c5-467f-9c4d-91992323f92f 1012 0 2026-03-10 01:46:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com coredns-7d764666f9-nsjn4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ca89dde9ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.735 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.786 [INFO][5032] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" HandleID="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.797 [INFO][5032] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" HandleID="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000101dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"coredns-7d764666f9-nsjn4", "timestamp":"2026-03-10 01:47:52.786854475 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000220580)} Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.797 [INFO][5032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.797 [INFO][5032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.797 [INFO][5032] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com' Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.800 [INFO][5032] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.812 [INFO][5032] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.819 [INFO][5032] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.822 [INFO][5032] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.825 [INFO][5032] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.826 [INFO][5032] 
ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.829 [INFO][5032] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179 Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.836 [INFO][5032] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.846 [INFO][5032] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.134/26] block=192.168.13.128/26 handle="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.846 [INFO][5032] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.134/26] handle="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" host="srv-p0r5l.gb1.brightbox.com" Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.847 [INFO][5032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:47:52.885481 containerd[1511]: 2026-03-10 01:47:52.847 [INFO][5032] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.134/26] IPv6=[] ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" HandleID="k8s-pod-network.ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.889919 containerd[1511]: 2026-03-10 01:47:52.850 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6e988b93-f8c5-467f-9c4d-91992323f92f", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-nsjn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali1ca89dde9ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:52.889919 containerd[1511]: 2026-03-10 01:47:52.850 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.134/32] ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.889919 containerd[1511]: 2026-03-10 01:47:52.850 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ca89dde9ec ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.889919 containerd[1511]: 2026-03-10 01:47:52.859 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 
01:47:52.889919 containerd[1511]: 2026-03-10 01:47:52.861 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6e988b93-f8c5-467f-9c4d-91992323f92f", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179", Pod:"coredns-7d764666f9-nsjn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ca89dde9ec", MAC:"56:c4:65:98:f4:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:47:52.890631 containerd[1511]: 2026-03-10 01:47:52.874 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179" Namespace="kube-system" Pod="coredns-7d764666f9-nsjn4" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:47:52.950816 systemd[1]: run-netns-cni\x2daacd16ff\x2d8e5b\x2d706b\x2dc2b1\x2d204a0d5e0ef0.mount: Deactivated successfully. Mar 10 01:47:52.975102 containerd[1511]: time="2026-03-10T01:47:52.974861046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:47:52.975660 containerd[1511]: time="2026-03-10T01:47:52.975361284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:47:52.975660 containerd[1511]: time="2026-03-10T01:47:52.975461974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:52.977570 containerd[1511]: time="2026-03-10T01:47:52.976139788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:47:53.088752 systemd[1]: Started cri-containerd-ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179.scope - libcontainer container ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179. Mar 10 01:47:53.213000 kubelet[2643]: I0310 01:47:53.210568 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-s4jtv" podStartSLOduration=57.210533873 podStartE2EDuration="57.210533873s" podCreationTimestamp="2026-03-10 01:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:47:53.165715901 +0000 UTC m=+62.896514981" watchObservedRunningTime="2026-03-10 01:47:53.210533873 +0000 UTC m=+62.941332924" Mar 10 01:47:53.240982 containerd[1511]: time="2026-03-10T01:47:53.240757189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nsjn4,Uid:6e988b93-f8c5-467f-9c4d-91992323f92f,Namespace:kube-system,Attempt:1,} returns sandbox id \"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179\"" Mar 10 01:47:53.266338 containerd[1511]: time="2026-03-10T01:47:53.266277807Z" level=info msg="CreateContainer within sandbox \"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:47:53.291672 containerd[1511]: time="2026-03-10T01:47:53.289995148Z" level=info msg="CreateContainer within sandbox \"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fe61adbce26d2c12a822803253f853b78e985e8b0fa16aabf43960a9a1828c5\"" Mar 10 01:47:53.297734 containerd[1511]: time="2026-03-10T01:47:53.293198893Z" level=info msg="StartContainer for \"4fe61adbce26d2c12a822803253f853b78e985e8b0fa16aabf43960a9a1828c5\"" Mar 10 01:47:53.347749 systemd[1]: Started 
cri-containerd-4fe61adbce26d2c12a822803253f853b78e985e8b0fa16aabf43960a9a1828c5.scope - libcontainer container 4fe61adbce26d2c12a822803253f853b78e985e8b0fa16aabf43960a9a1828c5. Mar 10 01:47:53.397840 containerd[1511]: time="2026-03-10T01:47:53.397791680Z" level=info msg="StartContainer for \"4fe61adbce26d2c12a822803253f853b78e985e8b0fa16aabf43960a9a1828c5\" returns successfully" Mar 10 01:47:53.495279 containerd[1511]: time="2026-03-10T01:47:53.495135917Z" level=info msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.595 [INFO][5153] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.595 [INFO][5153] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" iface="eth0" netns="/var/run/netns/cni-6e96536a-1ffe-26ab-9aff-56d23448bf8d" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.596 [INFO][5153] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" iface="eth0" netns="/var/run/netns/cni-6e96536a-1ffe-26ab-9aff-56d23448bf8d" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.598 [INFO][5153] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" iface="eth0" netns="/var/run/netns/cni-6e96536a-1ffe-26ab-9aff-56d23448bf8d" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.598 [INFO][5153] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.598 [INFO][5153] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.677 [INFO][5161] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.677 [INFO][5161] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.677 [INFO][5161] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.689 [WARNING][5161] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.689 [INFO][5161] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.691 [INFO][5161] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:47:53.696246 containerd[1511]: 2026-03-10 01:47:53.693 [INFO][5153] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8"
Mar 10 01:47:53.698089 containerd[1511]: time="2026-03-10T01:47:53.697242596Z" level=info msg="TearDown network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" successfully"
Mar 10 01:47:53.698089 containerd[1511]: time="2026-03-10T01:47:53.697288414Z" level=info msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" returns successfully"
Mar 10 01:47:53.721873 containerd[1511]: time="2026-03-10T01:47:53.721829220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gcmtp,Uid:940d2b11-362a-4d5f-8ab9-dd1a5a2dba04,Namespace:calico-system,Attempt:1,}"
Mar 10 01:47:53.813161 systemd-networkd[1427]: calibbe5febc3da: Gained IPv6LL
Mar 10 01:47:53.919591 systemd-networkd[1427]: cali0c427e6c520: Link UP
Mar 10 01:47:53.921667 systemd-networkd[1427]: cali0c427e6c520: Gained carrier
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.789 [INFO][5167] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0 goldmane-9f7667bb8- calico-system 940d2b11-362a-4d5f-8ab9-dd1a5a2dba04 1033 0 2026-03-10 01:47:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com goldmane-9f7667bb8-gcmtp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0c427e6c520 [] [] }} ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.789 [INFO][5167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.850 [INFO][5180] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" HandleID="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.865 [INFO][5180] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" HandleID="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309db0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"goldmane-9f7667bb8-gcmtp", "timestamp":"2026-03-10 01:47:53.850460096 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114dc0)}
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.865 [INFO][5180] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.865 [INFO][5180] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.865 [INFO][5180] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com'
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.869 [INFO][5180] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.875 [INFO][5180] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.882 [INFO][5180] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.885 [INFO][5180] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.889 [INFO][5180] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.889 [INFO][5180] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.892 [INFO][5180] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.900 [INFO][5180] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.909 [INFO][5180] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.135/26] block=192.168.13.128/26 handle="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.909 [INFO][5180] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.135/26] handle="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.909 [INFO][5180] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:47:53.951816 containerd[1511]: 2026-03-10 01:47:53.909 [INFO][5180] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.135/26] IPv6=[] ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" HandleID="k8s-pod-network.ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.912 [INFO][5167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-9f7667bb8-gcmtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c427e6c520", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.913 [INFO][5167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.135/32] ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.913 [INFO][5167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c427e6c520 ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.922 [INFO][5167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.922 [INFO][5167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5", Pod:"goldmane-9f7667bb8-gcmtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c427e6c520", MAC:"76:e4:b9:fe:c3:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:47:53.956481 containerd[1511]: 2026-03-10 01:47:53.937 [INFO][5167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5" Namespace="calico-system" Pod="goldmane-9f7667bb8-gcmtp" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0"
Mar 10 01:47:53.959678 systemd[1]: run-netns-cni\x2d6e96536a\x2d1ffe\x2d26ab\x2d9aff\x2d56d23448bf8d.mount: Deactivated successfully.
Mar 10 01:47:53.998444 containerd[1511]: time="2026-03-10T01:47:53.998254426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:47:53.998444 containerd[1511]: time="2026-03-10T01:47:53.998368664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:47:53.998444 containerd[1511]: time="2026-03-10T01:47:53.998398691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:47:53.999228 containerd[1511]: time="2026-03-10T01:47:53.998578282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:47:54.052828 systemd[1]: Started cri-containerd-ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5.scope - libcontainer container ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5.
Mar 10 01:47:54.180284 kubelet[2643]: I0310 01:47:54.180207 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nsjn4" podStartSLOduration=58.180190903 podStartE2EDuration="58.180190903s" podCreationTimestamp="2026-03-10 01:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:47:54.174938219 +0000 UTC m=+63.905737283" watchObservedRunningTime="2026-03-10 01:47:54.180190903 +0000 UTC m=+63.910989960"
Mar 10 01:47:54.205329 containerd[1511]: time="2026-03-10T01:47:54.204833708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gcmtp,Uid:940d2b11-362a-4d5f-8ab9-dd1a5a2dba04,Namespace:calico-system,Attempt:1,} returns sandbox id \"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5\""
Mar 10 01:47:54.496620 containerd[1511]: time="2026-03-10T01:47:54.495721126Z" level=info msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\""
Mar 10 01:47:54.515648 systemd-networkd[1427]: cali1ca89dde9ec: Gained IPv6LL
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.601 [INFO][5264] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.601 [INFO][5264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" iface="eth0" netns="/var/run/netns/cni-ed499d4e-9136-8155-a372-45324cabfd3f"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.602 [INFO][5264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" iface="eth0" netns="/var/run/netns/cni-ed499d4e-9136-8155-a372-45324cabfd3f"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.603 [INFO][5264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" iface="eth0" netns="/var/run/netns/cni-ed499d4e-9136-8155-a372-45324cabfd3f"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.603 [INFO][5264] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.603 [INFO][5264] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.669 [INFO][5275] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.670 [INFO][5275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.670 [INFO][5275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.688 [WARNING][5275] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.688 [INFO][5275] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.692 [INFO][5275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:47:54.699646 containerd[1511]: 2026-03-10 01:47:54.696 [INFO][5264] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:47:54.704067 containerd[1511]: time="2026-03-10T01:47:54.702356932Z" level=info msg="TearDown network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" successfully"
Mar 10 01:47:54.704067 containerd[1511]: time="2026-03-10T01:47:54.702400516Z" level=info msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" returns successfully"
Mar 10 01:47:54.705688 systemd[1]: run-netns-cni\x2ded499d4e\x2d9136\x2d8155\x2da372\x2d45324cabfd3f.mount: Deactivated successfully.
Mar 10 01:47:54.728811 containerd[1511]: time="2026-03-10T01:47:54.728227882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-c5q29,Uid:210382ca-0415-41bb-ab2d-47423643647d,Namespace:calico-system,Attempt:1,}"
Mar 10 01:47:54.929314 systemd-networkd[1427]: calib2692df727e: Link UP
Mar 10 01:47:54.932302 systemd-networkd[1427]: calib2692df727e: Gained carrier
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.798 [INFO][5291] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0 calico-apiserver-6d978ccb84- calico-system 210382ca-0415-41bb-ab2d-47423643647d 1043 0 2026-03-10 01:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d978ccb84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p0r5l.gb1.brightbox.com calico-apiserver-6d978ccb84-c5q29 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib2692df727e [] [] }} ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.798 [INFO][5291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.852 [INFO][5303] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" HandleID="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.866 [INFO][5303] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" HandleID="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277350), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0r5l.gb1.brightbox.com", "pod":"calico-apiserver-6d978ccb84-c5q29", "timestamp":"2026-03-10 01:47:54.852077491 +0000 UTC"}, Hostname:"srv-p0r5l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00026f1e0)}
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.866 [INFO][5303] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.866 [INFO][5303] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.867 [INFO][5303] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0r5l.gb1.brightbox.com'
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.870 [INFO][5303] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.881 [INFO][5303] ipam/ipam.go 409: Looking up existing affinities for host host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.888 [INFO][5303] ipam/ipam.go 526: Trying affinity for 192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.894 [INFO][5303] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.898 [INFO][5303] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.128/26 host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.898 [INFO][5303] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.128/26 handle="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.901 [INFO][5303] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.907 [INFO][5303] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.128/26 handle="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.918 [INFO][5303] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.136/26] block=192.168.13.128/26 handle="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.918 [INFO][5303] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.136/26] handle="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" host="srv-p0r5l.gb1.brightbox.com"
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.919 [INFO][5303] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:47:54.969783 containerd[1511]: 2026-03-10 01:47:54.919 [INFO][5303] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.136/26] IPv6=[] ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" HandleID="k8s-pod-network.0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.923 [INFO][5291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"210382ca-0415-41bb-ab2d-47423643647d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6d978ccb84-c5q29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2692df727e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.923 [INFO][5291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.136/32] ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.923 [INFO][5291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2692df727e ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.934 [INFO][5291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.940 [INFO][5291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"210382ca-0415-41bb-ab2d-47423643647d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e", Pod:"calico-apiserver-6d978ccb84-c5q29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2692df727e", MAC:"72:6e:63:7b:2d:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:47:54.974000 containerd[1511]: 2026-03-10 01:47:54.964 [INFO][5291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e" Namespace="calico-system" Pod="calico-apiserver-6d978ccb84-c5q29" WorkloadEndpoint="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:47:55.026054 containerd[1511]: time="2026-03-10T01:47:55.020028939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:47:55.026054 containerd[1511]: time="2026-03-10T01:47:55.020094669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:47:55.026054 containerd[1511]: time="2026-03-10T01:47:55.020142895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:47:55.026054 containerd[1511]: time="2026-03-10T01:47:55.020305140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:47:55.080723 systemd[1]: Started cri-containerd-0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e.scope - libcontainer container 0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e.
Mar 10 01:47:55.243656 containerd[1511]: time="2026-03-10T01:47:55.243450728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d978ccb84-c5q29,Uid:210382ca-0415-41bb-ab2d-47423643647d,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e\""
Mar 10 01:47:55.474857 systemd-networkd[1427]: cali0c427e6c520: Gained IPv6LL
Mar 10 01:47:56.370767 systemd-networkd[1427]: calib2692df727e: Gained IPv6LL
Mar 10 01:47:59.307022 containerd[1511]: time="2026-03-10T01:47:59.306816279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:59.309287 containerd[1511]: time="2026-03-10T01:47:59.309245668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 10 01:47:59.309932 containerd[1511]: time="2026-03-10T01:47:59.309885810Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:59.323175 containerd[1511]: time="2026-03-10T01:47:59.323087771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:47:59.324676 containerd[1511]: time="2026-03-10T01:47:59.324454751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 8.683137605s"
Mar 10 01:47:59.324676 containerd[1511]: time="2026-03-10T01:47:59.324510264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 10 01:47:59.326871 containerd[1511]: time="2026-03-10T01:47:59.326666026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 10 01:47:59.423751 containerd[1511]: time="2026-03-10T01:47:59.423706308Z" level=info msg="CreateContainer within sandbox \"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 10 01:47:59.442947 containerd[1511]: time="2026-03-10T01:47:59.442894553Z" level=info msg="CreateContainer within sandbox \"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a22470cb994ee82b238689d09a3394ab9200d4d25981f47f15124835853493c2\""
Mar 10 01:47:59.444617 containerd[1511]: time="2026-03-10T01:47:59.443812195Z" level=info msg="StartContainer for \"a22470cb994ee82b238689d09a3394ab9200d4d25981f47f15124835853493c2\""
Mar 10 01:47:59.491750 systemd[1]: Started cri-containerd-a22470cb994ee82b238689d09a3394ab9200d4d25981f47f15124835853493c2.scope - libcontainer container a22470cb994ee82b238689d09a3394ab9200d4d25981f47f15124835853493c2.
Mar 10 01:47:59.566990 containerd[1511]: time="2026-03-10T01:47:59.566863370Z" level=info msg="StartContainer for \"a22470cb994ee82b238689d09a3394ab9200d4d25981f47f15124835853493c2\" returns successfully"
Mar 10 01:48:00.275414 kubelet[2643]: I0310 01:48:00.271300 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b78bbc6cb-6czx4" podStartSLOduration=39.575756827 podStartE2EDuration="48.262347541s" podCreationTimestamp="2026-03-10 01:47:12 +0000 UTC" firstStartedPulling="2026-03-10 01:47:50.639759246 +0000 UTC m=+60.370558290" lastFinishedPulling="2026-03-10 01:47:59.32634996 +0000 UTC m=+69.057149004" observedRunningTime="2026-03-10 01:48:00.260062252 +0000 UTC m=+69.990861314" watchObservedRunningTime="2026-03-10 01:48:00.262347541 +0000 UTC m=+69.993146599"
Mar 10 01:48:03.517358 containerd[1511]: time="2026-03-10T01:48:03.517300117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:48:03.519293 containerd[1511]: time="2026-03-10T01:48:03.518735056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 10 01:48:03.520080 containerd[1511]: time="2026-03-10T01:48:03.520013265Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:48:03.523553 containerd[1511]: time="2026-03-10T01:48:03.523184419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:48:03.526146 containerd[1511]: time="2026-03-10T01:48:03.526092418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.199385954s"
Mar 10 01:48:03.526251 containerd[1511]: time="2026-03-10T01:48:03.526163559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 10 01:48:03.533788 containerd[1511]: time="2026-03-10T01:48:03.533685055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 10 01:48:03.568575 containerd[1511]: time="2026-03-10T01:48:03.568431578Z" level=info msg="CreateContainer within sandbox \"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 10 01:48:03.621626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023845716.mount: Deactivated successfully.
Mar 10 01:48:03.624845 containerd[1511]: time="2026-03-10T01:48:03.624640171Z" level=info msg="CreateContainer within sandbox \"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4af0ac22cd85bdf814ea989fd51a2890415d4221a2f68b3a69b35239193c7e2d\""
Mar 10 01:48:03.625802 containerd[1511]: time="2026-03-10T01:48:03.625765390Z" level=info msg="StartContainer for \"4af0ac22cd85bdf814ea989fd51a2890415d4221a2f68b3a69b35239193c7e2d\""
Mar 10 01:48:03.730797 systemd[1]: Started cri-containerd-4af0ac22cd85bdf814ea989fd51a2890415d4221a2f68b3a69b35239193c7e2d.scope - libcontainer container 4af0ac22cd85bdf814ea989fd51a2890415d4221a2f68b3a69b35239193c7e2d.
Mar 10 01:48:03.798388 containerd[1511]: time="2026-03-10T01:48:03.798191624Z" level=info msg="StartContainer for \"4af0ac22cd85bdf814ea989fd51a2890415d4221a2f68b3a69b35239193c7e2d\" returns successfully"
Mar 10 01:48:05.307740 kubelet[2643]: I0310 01:48:05.303721 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 10 01:48:06.564158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901791677.mount: Deactivated successfully.
Mar 10 01:48:07.244163 containerd[1511]: time="2026-03-10T01:48:07.244097052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 10 01:48:07.252395 containerd[1511]: time="2026-03-10T01:48:07.251626661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:48:07.253563 containerd[1511]: time="2026-03-10T01:48:07.253384040Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:48:07.255920 containerd[1511]: time="2026-03-10T01:48:07.254622450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.720882354s"
Mar 10 01:48:07.255920 containerd[1511]: time="2026-03-10T01:48:07.254682444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 10 01:48:07.255920 containerd[1511]: time="2026-03-10T01:48:07.255423503Z" level=info msg="ImageCreate event
name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:48:07.270677 containerd[1511]: time="2026-03-10T01:48:07.270506611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 01:48:07.318877 containerd[1511]: time="2026-03-10T01:48:07.318762843Z" level=info msg="CreateContainer within sandbox \"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 10 01:48:07.355891 containerd[1511]: time="2026-03-10T01:48:07.355824739Z" level=info msg="CreateContainer within sandbox \"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca\"" Mar 10 01:48:07.357555 containerd[1511]: time="2026-03-10T01:48:07.356975683Z" level=info msg="StartContainer for \"b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca\"" Mar 10 01:48:07.444743 systemd[1]: Started cri-containerd-b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca.scope - libcontainer container b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca. 
Mar 10 01:48:07.520930 containerd[1511]: time="2026-03-10T01:48:07.519760748Z" level=info msg="StartContainer for \"b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca\" returns successfully" Mar 10 01:48:07.628851 containerd[1511]: time="2026-03-10T01:48:07.628743410Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:48:07.629945 containerd[1511]: time="2026-03-10T01:48:07.629866838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 10 01:48:07.633840 containerd[1511]: time="2026-03-10T01:48:07.633686659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 363.094843ms" Mar 10 01:48:07.633840 containerd[1511]: time="2026-03-10T01:48:07.633753240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 01:48:07.642683 containerd[1511]: time="2026-03-10T01:48:07.642297038Z" level=info msg="CreateContainer within sandbox \"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 01:48:07.666966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779205298.mount: Deactivated successfully. 
Mar 10 01:48:07.674439 containerd[1511]: time="2026-03-10T01:48:07.674387557Z" level=info msg="CreateContainer within sandbox \"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28329fa699e9391e36b98d40c3360261d07e0d3285fa076c074ba7ded28b99cf\"" Mar 10 01:48:07.675448 containerd[1511]: time="2026-03-10T01:48:07.675419362Z" level=info msg="StartContainer for \"28329fa699e9391e36b98d40c3360261d07e0d3285fa076c074ba7ded28b99cf\"" Mar 10 01:48:07.721833 systemd[1]: Started cri-containerd-28329fa699e9391e36b98d40c3360261d07e0d3285fa076c074ba7ded28b99cf.scope - libcontainer container 28329fa699e9391e36b98d40c3360261d07e0d3285fa076c074ba7ded28b99cf. Mar 10 01:48:07.786081 containerd[1511]: time="2026-03-10T01:48:07.785848331Z" level=info msg="StartContainer for \"28329fa699e9391e36b98d40c3360261d07e0d3285fa076c074ba7ded28b99cf\" returns successfully" Mar 10 01:48:08.592334 kubelet[2643]: I0310 01:48:08.564491 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d978ccb84-c5q29" podStartSLOduration=46.18369745 podStartE2EDuration="58.564461387s" podCreationTimestamp="2026-03-10 01:47:10 +0000 UTC" firstStartedPulling="2026-03-10 01:47:55.254256159 +0000 UTC m=+64.985055210" lastFinishedPulling="2026-03-10 01:48:07.635020093 +0000 UTC m=+77.365819147" observedRunningTime="2026-03-10 01:48:08.488768725 +0000 UTC m=+78.219567789" watchObservedRunningTime="2026-03-10 01:48:08.564461387 +0000 UTC m=+78.295260446" Mar 10 01:48:08.592334 kubelet[2643]: I0310 01:48:08.590386 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d978ccb84-5rk88" podStartSLOduration=45.856471469 podStartE2EDuration="58.590367264s" podCreationTimestamp="2026-03-10 01:47:10 +0000 UTC" firstStartedPulling="2026-03-10 01:47:50.797073982 +0000 UTC m=+60.527873033" lastFinishedPulling="2026-03-10 
01:48:03.530969784 +0000 UTC m=+73.261768828" observedRunningTime="2026-03-10 01:48:04.2965803 +0000 UTC m=+74.027379363" watchObservedRunningTime="2026-03-10 01:48:08.590367264 +0000 UTC m=+78.321166317" Mar 10 01:48:08.592334 kubelet[2643]: I0310 01:48:08.591121 2643 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-gcmtp" podStartSLOduration=44.532967488 podStartE2EDuration="57.59111378s" podCreationTimestamp="2026-03-10 01:47:11 +0000 UTC" firstStartedPulling="2026-03-10 01:47:54.21201195 +0000 UTC m=+63.942811002" lastFinishedPulling="2026-03-10 01:48:07.270158249 +0000 UTC m=+77.000957294" observedRunningTime="2026-03-10 01:48:08.562791524 +0000 UTC m=+78.293590597" watchObservedRunningTime="2026-03-10 01:48:08.59111378 +0000 UTC m=+78.321912841" Mar 10 01:48:09.081503 systemd[1]: Started sshd@8-10.230.66.170:22-159.65.30.95:44784.service - OpenSSH per-connection server daemon (159.65.30.95:44784). Mar 10 01:48:09.320821 sshd[5641]: Connection closed by authenticating user root 159.65.30.95 port 44784 [preauth] Mar 10 01:48:09.323684 systemd[1]: sshd@8-10.230.66.170:22-159.65.30.95:44784.service: Deactivated successfully. Mar 10 01:48:09.360920 kubelet[2643]: I0310 01:48:09.360482 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:48:15.181914 kubelet[2643]: I0310 01:48:15.181865 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:48:18.939349 systemd[1]: Started sshd@9-10.230.66.170:22-68.220.241.50:48398.service - OpenSSH per-connection server daemon (68.220.241.50:48398). Mar 10 01:48:19.600837 sshd[5720]: Accepted publickey for core from 68.220.241.50 port 48398 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:19.605087 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:19.622615 systemd-logind[1485]: New session 10 of user core. 
Mar 10 01:48:19.627763 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:48:20.572732 sshd[5720]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:20.583445 systemd[1]: sshd@9-10.230.66.170:22-68.220.241.50:48398.service: Deactivated successfully. Mar 10 01:48:20.588504 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:48:20.594269 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:48:20.609424 systemd-logind[1485]: Removed session 10. Mar 10 01:48:22.334996 kubelet[2643]: I0310 01:48:22.334334 2643 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:48:25.676917 systemd[1]: Started sshd@10-10.230.66.170:22-68.220.241.50:60132.service - OpenSSH per-connection server daemon (68.220.241.50:60132). Mar 10 01:48:26.322115 sshd[5747]: Accepted publickey for core from 68.220.241.50 port 60132 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:26.325343 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:26.333664 systemd-logind[1485]: New session 11 of user core. Mar 10 01:48:26.339842 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:48:27.076931 sshd[5747]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:27.093143 systemd[1]: sshd@10-10.230.66.170:22-68.220.241.50:60132.service: Deactivated successfully. Mar 10 01:48:27.094204 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:48:27.097109 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:48:27.100984 systemd-logind[1485]: Removed session 11. Mar 10 01:48:32.173608 systemd[1]: Started sshd@11-10.230.66.170:22-68.220.241.50:47150.service - OpenSSH per-connection server daemon (68.220.241.50:47150). 
Mar 10 01:48:32.794498 sshd[5782]: Accepted publickey for core from 68.220.241.50 port 47150 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:32.797493 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:32.805738 systemd-logind[1485]: New session 12 of user core. Mar 10 01:48:32.811767 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 01:48:33.325115 sshd[5782]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:33.329928 systemd[1]: sshd@11-10.230.66.170:22-68.220.241.50:47150.service: Deactivated successfully. Mar 10 01:48:33.333186 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:48:33.336420 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Mar 10 01:48:33.338763 systemd-logind[1485]: Removed session 12. Mar 10 01:48:38.438964 systemd[1]: Started sshd@12-10.230.66.170:22-68.220.241.50:47160.service - OpenSSH per-connection server daemon (68.220.241.50:47160). Mar 10 01:48:39.033570 sshd[5802]: Accepted publickey for core from 68.220.241.50 port 47160 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:39.036029 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:39.047648 systemd-logind[1485]: New session 13 of user core. Mar 10 01:48:39.056742 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:48:39.421865 systemd[1]: run-containerd-runc-k8s.io-b547e287443ae40dbc3c4d4af9a4d35f20fa4b4f02597fa7e26e1e3f391ffaca-runc.IWSbr0.mount: Deactivated successfully. Mar 10 01:48:39.608109 sshd[5802]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:39.614518 systemd[1]: sshd@12-10.230.66.170:22-68.220.241.50:47160.service: Deactivated successfully. Mar 10 01:48:39.617496 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:48:39.620321 systemd-logind[1485]: Session 13 logged out. 
Waiting for processes to exit. Mar 10 01:48:39.623496 systemd-logind[1485]: Removed session 13. Mar 10 01:48:40.978501 systemd[1]: run-containerd-runc-k8s.io-21dccd618c353b991c04dea0579b1d5af6864f3e9adfe4e1524cd1e8ca478606-runc.1ob2it.mount: Deactivated successfully. Mar 10 01:48:44.711867 systemd[1]: Started sshd@13-10.230.66.170:22-68.220.241.50:41560.service - OpenSSH per-connection server daemon (68.220.241.50:41560). Mar 10 01:48:45.396818 sshd[5875]: Accepted publickey for core from 68.220.241.50 port 41560 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:45.400345 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:45.408547 systemd-logind[1485]: New session 14 of user core. Mar 10 01:48:45.416781 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 01:48:46.014357 sshd[5875]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:46.018941 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:48:46.019282 systemd[1]: sshd@13-10.230.66.170:22-68.220.241.50:41560.service: Deactivated successfully. Mar 10 01:48:46.024043 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:48:46.026082 systemd-logind[1485]: Removed session 14. Mar 10 01:48:46.121405 systemd[1]: Started sshd@14-10.230.66.170:22-68.220.241.50:41566.service - OpenSSH per-connection server daemon (68.220.241.50:41566). Mar 10 01:48:46.696393 sshd[5889]: Accepted publickey for core from 68.220.241.50 port 41566 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:46.698755 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:46.705342 systemd-logind[1485]: New session 15 of user core. Mar 10 01:48:46.710712 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 10 01:48:47.286896 sshd[5889]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:47.292210 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Mar 10 01:48:47.292752 systemd[1]: sshd@14-10.230.66.170:22-68.220.241.50:41566.service: Deactivated successfully. Mar 10 01:48:47.298020 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 01:48:47.300422 systemd-logind[1485]: Removed session 15. Mar 10 01:48:47.386856 systemd[1]: Started sshd@15-10.230.66.170:22-68.220.241.50:41568.service - OpenSSH per-connection server daemon (68.220.241.50:41568). Mar 10 01:48:47.937661 sshd[5918]: Accepted publickey for core from 68.220.241.50 port 41568 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s Mar 10 01:48:47.940027 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:48:47.947825 systemd-logind[1485]: New session 16 of user core. Mar 10 01:48:47.954794 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 01:48:48.442676 sshd[5918]: pam_unix(sshd:session): session closed for user core Mar 10 01:48:48.449415 systemd[1]: sshd@15-10.230.66.170:22-68.220.241.50:41568.service: Deactivated successfully. Mar 10 01:48:48.452414 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 01:48:48.453624 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Mar 10 01:48:48.455321 systemd-logind[1485]: Removed session 16. Mar 10 01:48:51.256582 containerd[1511]: time="2026-03-10T01:48:51.236795234Z" level=info msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.649 [WARNING][5940] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0", GenerateName:"calico-kube-controllers-5b78bbc6cb-", Namespace:"calico-system", SelfLink:"", UID:"2c23487f-710c-4788-9a24-3cedb377bd4f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b78bbc6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae", Pod:"calico-kube-controllers-5b78bbc6cb-6czx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fda48ee21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.652 [INFO][5940] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.652 [INFO][5940] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" iface="eth0" netns="" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.652 [INFO][5940] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.652 [INFO][5940] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.858 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.861 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.862 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.880 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.881 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.885 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:51.890564 containerd[1511]: 2026-03-10 01:48:51.887 [INFO][5940] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:51.896785 containerd[1511]: time="2026-03-10T01:48:51.894749945Z" level=info msg="TearDown network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" successfully" Mar 10 01:48:51.896785 containerd[1511]: time="2026-03-10T01:48:51.894808347Z" level=info msg="StopPodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" returns successfully" Mar 10 01:48:51.909430 containerd[1511]: time="2026-03-10T01:48:51.909175259Z" level=info msg="RemovePodSandbox for \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" Mar 10 01:48:51.916589 containerd[1511]: time="2026-03-10T01:48:51.916546652Z" level=info msg="Forcibly stopping sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\"" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:51.971 [WARNING][5962] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0", GenerateName:"calico-kube-controllers-5b78bbc6cb-", Namespace:"calico-system", SelfLink:"", UID:"2c23487f-710c-4788-9a24-3cedb377bd4f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b78bbc6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"509d9324eab9d17630d84c05f62f69f84b899fa1bc80a303208e99b1fc7fc0ae", Pod:"calico-kube-controllers-5b78bbc6cb-6czx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fda48ee21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:51.971 [INFO][5962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:51.971 [INFO][5962] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" iface="eth0" netns="" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:51.971 [INFO][5962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:51.971 [INFO][5962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.022 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.022 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.022 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.030 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.030 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" HandleID="k8s-pod-network.c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--kube--controllers--5b78bbc6cb--6czx4-eth0" Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.032 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.042007 containerd[1511]: 2026-03-10 01:48:52.037 [INFO][5962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3" Mar 10 01:48:52.045138 containerd[1511]: time="2026-03-10T01:48:52.042057237Z" level=info msg="TearDown network for sandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" successfully" Mar 10 01:48:52.130914 containerd[1511]: time="2026-03-10T01:48:52.130823960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:48:52.131124 containerd[1511]: time="2026-03-10T01:48:52.130978847Z" level=info msg="RemovePodSandbox \"c01432b13c6876f2692250be36c226544a56fb10c552caef817036c37dbd8ee3\" returns successfully" Mar 10 01:48:52.133146 containerd[1511]: time="2026-03-10T01:48:52.133014334Z" level=info msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.194 [WARNING][5984] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c", Pod:"calico-apiserver-6d978ccb84-5rk88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali058c66febef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.194 [INFO][5984] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.194 [INFO][5984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" iface="eth0" netns="" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.194 [INFO][5984] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.194 [INFO][5984] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.242 [INFO][5992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.242 [INFO][5992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.242 [INFO][5992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.251 [WARNING][5992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.251 [INFO][5992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.253 [INFO][5992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.260081 containerd[1511]: 2026-03-10 01:48:52.256 [INFO][5984] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.260081 containerd[1511]: time="2026-03-10T01:48:52.259406381Z" level=info msg="TearDown network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" successfully" Mar 10 01:48:52.260081 containerd[1511]: time="2026-03-10T01:48:52.259440113Z" level=info msg="StopPodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" returns successfully" Mar 10 01:48:52.263677 containerd[1511]: time="2026-03-10T01:48:52.260263084Z" level=info msg="RemovePodSandbox for \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" Mar 10 01:48:52.263677 containerd[1511]: time="2026-03-10T01:48:52.260296032Z" level=info msg="Forcibly stopping sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\"" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.312 [WARNING][6006] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"ed5442cf-d7ee-42ff-87df-7d6e6ec79b47", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"49dc0e9c4dc328baf66529a16b129e7b40631a84b06408c6fc04248dc4128b6c", Pod:"calico-apiserver-6d978ccb84-5rk88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali058c66febef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.313 [INFO][6006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.313 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" iface="eth0" netns="" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.313 [INFO][6006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.313 [INFO][6006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.345 [INFO][6013] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.346 [INFO][6013] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.346 [INFO][6013] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.355 [WARNING][6013] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.355 [INFO][6013] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" HandleID="k8s-pod-network.ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--5rk88-eth0" Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.357 [INFO][6013] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.363108 containerd[1511]: 2026-03-10 01:48:52.359 [INFO][6006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18" Mar 10 01:48:52.363108 containerd[1511]: time="2026-03-10T01:48:52.362092041Z" level=info msg="TearDown network for sandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" successfully" Mar 10 01:48:52.366597 containerd[1511]: time="2026-03-10T01:48:52.366510986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:48:52.366722 containerd[1511]: time="2026-03-10T01:48:52.366619757Z" level=info msg="RemovePodSandbox \"ba75f5ccb2e0cfb4537f6e3e36764eeb58b76567c6f8670e77bba06ceeb0ef18\" returns successfully" Mar 10 01:48:52.367494 containerd[1511]: time="2026-03-10T01:48:52.367463173Z" level=info msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.422 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5", Pod:"goldmane-9f7667bb8-gcmtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali0c427e6c520", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.422 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.422 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" iface="eth0" netns="" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.422 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.423 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.458 [INFO][6034] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.459 [INFO][6034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.459 [INFO][6034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.469 [WARNING][6034] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.469 [INFO][6034] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.471 [INFO][6034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.475255 containerd[1511]: 2026-03-10 01:48:52.473 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.475255 containerd[1511]: time="2026-03-10T01:48:52.475077700Z" level=info msg="TearDown network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" successfully" Mar 10 01:48:52.475255 containerd[1511]: time="2026-03-10T01:48:52.475111662Z" level=info msg="StopPodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" returns successfully" Mar 10 01:48:52.476960 containerd[1511]: time="2026-03-10T01:48:52.476503154Z" level=info msg="RemovePodSandbox for \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" Mar 10 01:48:52.476960 containerd[1511]: time="2026-03-10T01:48:52.476581272Z" level=info msg="Forcibly stopping sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\"" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.607 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"940d2b11-362a-4d5f-8ab9-dd1a5a2dba04", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ff3989ab664549ecff173d89e2080c9e6b2a0ffd06ee5df319f4a9abce6c52e5", Pod:"goldmane-9f7667bb8-gcmtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c427e6c520", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.607 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.607 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" iface="eth0" netns="" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.607 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.607 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.675 [INFO][6056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.675 [INFO][6056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.675 [INFO][6056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.689 [WARNING][6056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.689 [INFO][6056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" HandleID="k8s-pod-network.b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Workload="srv--p0r5l.gb1.brightbox.com-k8s-goldmane--9f7667bb8--gcmtp-eth0" Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.691 [INFO][6056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.698791 containerd[1511]: 2026-03-10 01:48:52.696 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8" Mar 10 01:48:52.698791 containerd[1511]: time="2026-03-10T01:48:52.698753000Z" level=info msg="TearDown network for sandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" successfully" Mar 10 01:48:52.703706 containerd[1511]: time="2026-03-10T01:48:52.703655325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:48:52.703802 containerd[1511]: time="2026-03-10T01:48:52.703738663Z" level=info msg="RemovePodSandbox \"b17625cded1835c6f8adaf53b91647ae4e51d1d3b8cfa7d2ee0fa819c7e027f8\" returns successfully" Mar 10 01:48:52.704714 containerd[1511]: time="2026-03-10T01:48:52.704243686Z" level=info msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.766 [WARNING][6070] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6e988b93-f8c5-467f-9c4d-91992323f92f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179", Pod:"coredns-7d764666f9-nsjn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ca89dde9ec", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.767 [INFO][6070] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.767 [INFO][6070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" iface="eth0" netns="" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.767 [INFO][6070] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.767 [INFO][6070] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.798 [INFO][6077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.800 [INFO][6077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.800 [INFO][6077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.811 [WARNING][6077] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.811 [INFO][6077] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.814 [INFO][6077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.820671 containerd[1511]: 2026-03-10 01:48:52.817 [INFO][6070] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.825127 containerd[1511]: time="2026-03-10T01:48:52.820816772Z" level=info msg="TearDown network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" successfully" Mar 10 01:48:52.825127 containerd[1511]: time="2026-03-10T01:48:52.820959078Z" level=info msg="StopPodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" returns successfully" Mar 10 01:48:52.825127 containerd[1511]: time="2026-03-10T01:48:52.821971390Z" level=info msg="RemovePodSandbox for \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" Mar 10 01:48:52.825127 containerd[1511]: time="2026-03-10T01:48:52.822009498Z" level=info msg="Forcibly stopping sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\"" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.875 [WARNING][6091] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6e988b93-f8c5-467f-9c4d-91992323f92f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"ce55907021f61dbab9d1b0dd9a347df20f30b7512c0b10530ab400ab44e65179", Pod:"coredns-7d764666f9-nsjn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ca89dde9ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.876 [INFO][6091] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.876 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" iface="eth0" netns="" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.876 [INFO][6091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.876 [INFO][6091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.908 [INFO][6098] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.909 [INFO][6098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.909 [INFO][6098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.918 [WARNING][6098] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.918 [INFO][6098] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" HandleID="k8s-pod-network.6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--nsjn4-eth0" Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.920 [INFO][6098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:48:52.924516 containerd[1511]: 2026-03-10 01:48:52.922 [INFO][6091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422" Mar 10 01:48:52.925391 containerd[1511]: time="2026-03-10T01:48:52.924601606Z" level=info msg="TearDown network for sandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" successfully" Mar 10 01:48:52.928965 containerd[1511]: time="2026-03-10T01:48:52.928911505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:48:52.936539 containerd[1511]: time="2026-03-10T01:48:52.936291620Z" level=info msg="RemovePodSandbox \"6bc22bd206d477736a1afcd72490866ff06efad760da925d4edb56ef5152d422\" returns successfully"
Mar 10 01:48:52.937709 containerd[1511]: time="2026-03-10T01:48:52.937457841Z" level=info msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\""
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:52.995 [WARNING][6112] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"ab365807-38ad-4172-ae3a-3060d423ffa4", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80", Pod:"coredns-7d764666f9-s4jtv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5febc3da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:52.996 [INFO][6112] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:52.996 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" iface="eth0" netns=""
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:52.996 [INFO][6112] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:52.996 [INFO][6112] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.039 [INFO][6119] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.039 [INFO][6119] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.039 [INFO][6119] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.051 [WARNING][6119] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.051 [INFO][6119] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.054 [INFO][6119] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:48:53.058383 containerd[1511]: 2026-03-10 01:48:53.056 [INFO][6112] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.058383 containerd[1511]: time="2026-03-10T01:48:53.058108014Z" level=info msg="TearDown network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" successfully"
Mar 10 01:48:53.058383 containerd[1511]: time="2026-03-10T01:48:53.058152461Z" level=info msg="StopPodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" returns successfully"
Mar 10 01:48:53.062658 containerd[1511]: time="2026-03-10T01:48:53.060863869Z" level=info msg="RemovePodSandbox for \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\""
Mar 10 01:48:53.062658 containerd[1511]: time="2026-03-10T01:48:53.061267306Z" level=info msg="Forcibly stopping sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\""
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.112 [WARNING][6134] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"ab365807-38ad-4172-ae3a-3060d423ffa4", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"251ead1e4a3f7b552fb17f22216404498fe10e68e999006d288cea615fd07e80", Pod:"coredns-7d764666f9-s4jtv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5febc3da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.113 [INFO][6134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.113 [INFO][6134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" iface="eth0" netns=""
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.113 [INFO][6134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.113 [INFO][6134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.145 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.145 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.145 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.154 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.154 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" HandleID="k8s-pod-network.5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0" Workload="srv--p0r5l.gb1.brightbox.com-k8s-coredns--7d764666f9--s4jtv-eth0"
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.156 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:48:53.161260 containerd[1511]: 2026-03-10 01:48:53.159 [INFO][6134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0"
Mar 10 01:48:53.161260 containerd[1511]: time="2026-03-10T01:48:53.161067870Z" level=info msg="TearDown network for sandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" successfully"
Mar 10 01:48:53.165168 containerd[1511]: time="2026-03-10T01:48:53.165121236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 10 01:48:53.165272 containerd[1511]: time="2026-03-10T01:48:53.165201132Z" level=info msg="RemovePodSandbox \"5418e80e27a04afa25a322c99f110acd3681b2d82525aee938f10d212dbbb3f0\" returns successfully"
Mar 10 01:48:53.166381 containerd[1511]: time="2026-03-10T01:48:53.165954036Z" level=info msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\""
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.220 [WARNING][6155] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"210382ca-0415-41bb-ab2d-47423643647d", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e", Pod:"calico-apiserver-6d978ccb84-c5q29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2692df727e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.220 [INFO][6155] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.220 [INFO][6155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" iface="eth0" netns=""
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.220 [INFO][6155] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.220 [INFO][6155] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.254 [INFO][6162] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.254 [INFO][6162] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.254 [INFO][6162] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.263 [WARNING][6162] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.263 [INFO][6162] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.265 [INFO][6162] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:48:53.270371 containerd[1511]: 2026-03-10 01:48:53.268 [INFO][6155] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.273914 containerd[1511]: time="2026-03-10T01:48:53.270635915Z" level=info msg="TearDown network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" successfully"
Mar 10 01:48:53.273914 containerd[1511]: time="2026-03-10T01:48:53.270670740Z" level=info msg="StopPodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" returns successfully"
Mar 10 01:48:53.273914 containerd[1511]: time="2026-03-10T01:48:53.271991164Z" level=info msg="RemovePodSandbox for \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\""
Mar 10 01:48:53.273914 containerd[1511]: time="2026-03-10T01:48:53.272026164Z" level=info msg="Forcibly stopping sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\""
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.333 [WARNING][6176] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0", GenerateName:"calico-apiserver-6d978ccb84-", Namespace:"calico-system", SelfLink:"", UID:"210382ca-0415-41bb-ab2d-47423643647d", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d978ccb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0r5l.gb1.brightbox.com", ContainerID:"0c5df314b63481d86691aec505fd6ef79567be85b8a3f2703de8098cf5e7447e", Pod:"calico-apiserver-6d978ccb84-c5q29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2692df727e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.333 [INFO][6176] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.333 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" iface="eth0" netns=""
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.333 [INFO][6176] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.333 [INFO][6176] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.404 [INFO][6183] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.404 [INFO][6183] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.404 [INFO][6183] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.416 [WARNING][6183] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.416 [INFO][6183] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" HandleID="k8s-pod-network.08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb" Workload="srv--p0r5l.gb1.brightbox.com-k8s-calico--apiserver--6d978ccb84--c5q29-eth0"
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.419 [INFO][6183] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 10 01:48:53.428038 containerd[1511]: 2026-03-10 01:48:53.423 [INFO][6176] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb"
Mar 10 01:48:53.429194 containerd[1511]: time="2026-03-10T01:48:53.428094115Z" level=info msg="TearDown network for sandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" successfully"
Mar 10 01:48:53.434803 containerd[1511]: time="2026-03-10T01:48:53.434588336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 10 01:48:53.434803 containerd[1511]: time="2026-03-10T01:48:53.434707097Z" level=info msg="RemovePodSandbox \"08f2ceb9c92be21236add68e4e68ef1e511a91933948ed47119f75dc5a86c3fb\" returns successfully"
Mar 10 01:48:53.550392 systemd[1]: Started sshd@16-10.230.66.170:22-68.220.241.50:44446.service - OpenSSH per-connection server daemon (68.220.241.50:44446).
Mar 10 01:48:54.227081 sshd[6192]: Accepted publickey for core from 68.220.241.50 port 44446 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:48:54.230414 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:48:54.237798 systemd-logind[1485]: New session 17 of user core.
Mar 10 01:48:54.242934 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 01:48:55.232204 sshd[6192]: pam_unix(sshd:session): session closed for user core
Mar 10 01:48:55.241129 systemd[1]: sshd@16-10.230.66.170:22-68.220.241.50:44446.service: Deactivated successfully.
Mar 10 01:48:55.243943 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 01:48:55.245053 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Mar 10 01:48:55.246605 systemd-logind[1485]: Removed session 17.
Mar 10 01:48:55.339987 systemd[1]: Started sshd@17-10.230.66.170:22-68.220.241.50:44454.service - OpenSSH per-connection server daemon (68.220.241.50:44454).
Mar 10 01:48:55.926543 sshd[6205]: Accepted publickey for core from 68.220.241.50 port 44454 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:48:55.928716 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:48:55.936072 systemd-logind[1485]: New session 18 of user core.
Mar 10 01:48:55.942960 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 01:48:56.758323 sshd[6205]: pam_unix(sshd:session): session closed for user core
Mar 10 01:48:56.773010 systemd[1]: sshd@17-10.230.66.170:22-68.220.241.50:44454.service: Deactivated successfully.
Mar 10 01:48:56.776629 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 01:48:56.779200 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Mar 10 01:48:56.780881 systemd-logind[1485]: Removed session 18.
Mar 10 01:48:56.861862 systemd[1]: Started sshd@18-10.230.66.170:22-68.220.241.50:44468.service - OpenSSH per-connection server daemon (68.220.241.50:44468).
Mar 10 01:48:57.463674 sshd[6236]: Accepted publickey for core from 68.220.241.50 port 44468 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:48:57.467083 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:48:57.475290 systemd-logind[1485]: New session 19 of user core.
Mar 10 01:48:57.480737 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 01:48:58.780404 sshd[6236]: pam_unix(sshd:session): session closed for user core
Mar 10 01:48:58.795855 systemd[1]: sshd@18-10.230.66.170:22-68.220.241.50:44468.service: Deactivated successfully.
Mar 10 01:48:58.804206 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 01:48:58.809198 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Mar 10 01:48:58.814077 systemd-logind[1485]: Removed session 19.
Mar 10 01:48:58.888986 systemd[1]: Started sshd@19-10.230.66.170:22-68.220.241.50:44470.service - OpenSSH per-connection server daemon (68.220.241.50:44470).
Mar 10 01:48:59.519351 sshd[6262]: Accepted publickey for core from 68.220.241.50 port 44470 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:48:59.522261 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:48:59.530184 systemd-logind[1485]: New session 20 of user core.
Mar 10 01:48:59.538782 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 01:49:00.781873 sshd[6262]: pam_unix(sshd:session): session closed for user core
Mar 10 01:49:00.790312 systemd[1]: sshd@19-10.230.66.170:22-68.220.241.50:44470.service: Deactivated successfully.
Mar 10 01:49:00.793603 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 01:49:00.794861 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Mar 10 01:49:00.796662 systemd-logind[1485]: Removed session 20.
Mar 10 01:49:00.880915 systemd[1]: Started sshd@20-10.230.66.170:22-68.220.241.50:44478.service - OpenSSH per-connection server daemon (68.220.241.50:44478).
Mar 10 01:49:01.507363 sshd[6295]: Accepted publickey for core from 68.220.241.50 port 44478 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:49:01.509278 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:49:01.517155 systemd-logind[1485]: New session 21 of user core.
Mar 10 01:49:01.521783 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 01:49:02.034164 sshd[6295]: pam_unix(sshd:session): session closed for user core
Mar 10 01:49:02.040421 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Mar 10 01:49:02.041233 systemd[1]: sshd@20-10.230.66.170:22-68.220.241.50:44478.service: Deactivated successfully.
Mar 10 01:49:02.048201 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 01:49:02.051005 systemd-logind[1485]: Removed session 21.
Mar 10 01:49:07.137034 systemd[1]: Started sshd@21-10.230.66.170:22-68.220.241.50:54052.service - OpenSSH per-connection server daemon (68.220.241.50:54052).
Mar 10 01:49:07.208705 systemd[1]: Started sshd@22-10.230.66.170:22-159.65.30.95:46054.service - OpenSSH per-connection server daemon (159.65.30.95:46054).
Mar 10 01:49:07.673166 sshd[6320]: Connection closed by authenticating user root 159.65.30.95 port 46054 [preauth]
Mar 10 01:49:07.675861 systemd[1]: sshd@22-10.230.66.170:22-159.65.30.95:46054.service: Deactivated successfully.
Mar 10 01:49:07.750425 sshd[6318]: Accepted publickey for core from 68.220.241.50 port 54052 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:49:07.752854 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:49:07.761501 systemd-logind[1485]: New session 22 of user core.
Mar 10 01:49:07.766815 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 01:49:08.249901 sshd[6318]: pam_unix(sshd:session): session closed for user core
Mar 10 01:49:08.255911 systemd[1]: sshd@21-10.230.66.170:22-68.220.241.50:54052.service: Deactivated successfully.
Mar 10 01:49:08.258698 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 01:49:08.260942 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Mar 10 01:49:08.262470 systemd-logind[1485]: Removed session 22.
Mar 10 01:49:13.356452 systemd[1]: Started sshd@23-10.230.66.170:22-68.220.241.50:44068.service - OpenSSH per-connection server daemon (68.220.241.50:44068).
Mar 10 01:49:14.026396 sshd[6379]: Accepted publickey for core from 68.220.241.50 port 44068 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:49:14.030031 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:49:14.042005 systemd-logind[1485]: New session 23 of user core.
Mar 10 01:49:14.046951 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 01:49:14.687384 sshd[6379]: pam_unix(sshd:session): session closed for user core
Mar 10 01:49:14.696610 systemd[1]: sshd@23-10.230.66.170:22-68.220.241.50:44068.service: Deactivated successfully.
Mar 10 01:49:14.699947 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 01:49:14.701417 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Mar 10 01:49:14.702926 systemd-logind[1485]: Removed session 23.
Mar 10 01:49:19.801466 systemd[1]: Started sshd@24-10.230.66.170:22-68.220.241.50:44074.service - OpenSSH per-connection server daemon (68.220.241.50:44074).
Mar 10 01:49:20.411758 sshd[6410]: Accepted publickey for core from 68.220.241.50 port 44074 ssh2: RSA SHA256:aijcv0CQPgs+ijPZDfhfY8yeUVP+ozwJgxdKg5gyU8s
Mar 10 01:49:20.414894 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:49:20.423366 systemd-logind[1485]: New session 24 of user core.
Mar 10 01:49:20.430767 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 01:49:21.025047 sshd[6410]: pam_unix(sshd:session): session closed for user core
Mar 10 01:49:21.035105 systemd[1]: sshd@24-10.230.66.170:22-68.220.241.50:44074.service: Deactivated successfully.
Mar 10 01:49:21.035914 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Mar 10 01:49:21.039155 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 01:49:21.043868 systemd-logind[1485]: Removed session 24.