Apr 17 23:50:10.045334 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:50:10.045370 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:50:10.045384 kernel: BIOS-provided physical RAM map:
Apr 17 23:50:10.045401 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 23:50:10.045411 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 23:50:10.045421 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:50:10.045433 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Apr 17 23:50:10.045443 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Apr 17 23:50:10.045454 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:50:10.045464 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:50:10.045474 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:50:10.045485 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:50:10.045501 kernel: NX (Execute Disable) protection: active
Apr 17 23:50:10.045511 kernel: APIC: Static calls initialized
Apr 17 23:50:10.045524 kernel: SMBIOS 2.8 present.
Apr 17 23:50:10.045536 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Apr 17 23:50:10.045547 kernel: Hypervisor detected: KVM
Apr 17 23:50:10.045564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:50:10.045575 kernel: kvm-clock: using sched offset of 4467905182 cycles
Apr 17 23:50:10.045587 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:50:10.045600 kernel: tsc: Detected 2499.998 MHz processor
Apr 17 23:50:10.045611 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:50:10.045623 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:50:10.045634 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Apr 17 23:50:10.045646 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:50:10.045657 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:50:10.045674 kernel: Using GB pages for direct mapping
Apr 17 23:50:10.045685 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:50:10.045697 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Apr 17 23:50:10.045708 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045827 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045841 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045853 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Apr 17 23:50:10.045864 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045876 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045895 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045907 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:10.045919 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Apr 17 23:50:10.045930 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Apr 17 23:50:10.045942 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Apr 17 23:50:10.045961 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Apr 17 23:50:10.045973 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Apr 17 23:50:10.045990 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Apr 17 23:50:10.046002 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Apr 17 23:50:10.046014 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:50:10.046026 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:50:10.046053 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Apr 17 23:50:10.046065 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Apr 17 23:50:10.046077 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Apr 17 23:50:10.046095 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Apr 17 23:50:10.046112 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Apr 17 23:50:10.046125 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Apr 17 23:50:10.046137 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Apr 17 23:50:10.046149 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Apr 17 23:50:10.046160 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Apr 17 23:50:10.046172 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Apr 17 23:50:10.046184 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Apr 17 23:50:10.046196 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Apr 17 23:50:10.046208 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Apr 17 23:50:10.046236 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Apr 17 23:50:10.046250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 17 23:50:10.046262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 17 23:50:10.046274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Apr 17 23:50:10.046286 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Apr 17 23:50:10.046299 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Apr 17 23:50:10.046311 kernel: Zone ranges:
Apr 17 23:50:10.046323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:50:10.046335 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Apr 17 23:50:10.046352 kernel: Normal empty
Apr 17 23:50:10.046365 kernel: Movable zone start for each node
Apr 17 23:50:10.046377 kernel: Early memory node ranges
Apr 17 23:50:10.046389 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:50:10.046400 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Apr 17 23:50:10.046412 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Apr 17 23:50:10.046424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:50:10.046436 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:50:10.046448 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Apr 17 23:50:10.046460 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:50:10.046477 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:50:10.046490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:50:10.046502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:50:10.046514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:50:10.046526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:50:10.046538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:50:10.046550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:50:10.046562 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:50:10.046574 kernel: TSC deadline timer available
Apr 17 23:50:10.046591 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Apr 17 23:50:10.046603 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:50:10.046615 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:50:10.046627 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:50:10.046639 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:50:10.046652 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 17 23:50:10.046664 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Apr 17 23:50:10.046675 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Apr 17 23:50:10.046687 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 17 23:50:10.046705 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:50:10.046733 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:50:10.046748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:50:10.046761 kernel: random: crng init done
Apr 17 23:50:10.046773 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:50:10.046785 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:50:10.046797 kernel: Fallback order for Node 0: 0
Apr 17 23:50:10.046809 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Apr 17 23:50:10.046828 kernel: Policy zone: DMA32
Apr 17 23:50:10.046841 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:50:10.046853 kernel: software IO TLB: area num 16.
Apr 17 23:50:10.046865 kernel: Memory: 1901584K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194772K reserved, 0K cma-reserved)
Apr 17 23:50:10.046878 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 17 23:50:10.046890 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:50:10.046902 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:50:10.046914 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:50:10.046926 kernel: Dynamic Preempt: voluntary
Apr 17 23:50:10.046944 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:50:10.046957 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:50:10.046970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 17 23:50:10.046982 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:50:10.046994 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:50:10.047021 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:50:10.047034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:50:10.047047 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 17 23:50:10.047060 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Apr 17 23:50:10.047072 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:50:10.047085 kernel: Console: colour VGA+ 80x25
Apr 17 23:50:10.047097 kernel: printk: console [tty0] enabled
Apr 17 23:50:10.047116 kernel: printk: console [ttyS0] enabled
Apr 17 23:50:10.047128 kernel: ACPI: Core revision 20230628
Apr 17 23:50:10.047141 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:50:10.047154 kernel: x2apic enabled
Apr 17 23:50:10.047166 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:50:10.047185 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 17 23:50:10.047198 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 17 23:50:10.047210 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:50:10.047232 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:50:10.047247 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:50:10.047260 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:50:10.047272 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:50:10.047284 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:50:10.047297 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 17 23:50:10.047310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 17 23:50:10.047328 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 17 23:50:10.047341 kernel: MDS: Mitigation: Clear CPU buffers
Apr 17 23:50:10.047353 kernel: MMIO Stale Data: Unknown: No mitigations
Apr 17 23:50:10.047366 kernel: SRBDS: Unknown: Dependent on hypervisor status
Apr 17 23:50:10.047378 kernel: active return thunk: its_return_thunk
Apr 17 23:50:10.047391 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:50:10.047403 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:50:10.047416 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:50:10.047429 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:50:10.047441 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:50:10.047454 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 17 23:50:10.047472 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:50:10.047484 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:50:10.047497 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:50:10.047509 kernel: landlock: Up and running.
Apr 17 23:50:10.047522 kernel: SELinux: Initializing.
Apr 17 23:50:10.047534 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:50:10.047547 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:50:10.047559 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Apr 17 23:50:10.047572 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 17 23:50:10.047585 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 17 23:50:10.047603 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 17 23:50:10.047616 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Apr 17 23:50:10.047629 kernel: signal: max sigframe size: 1776
Apr 17 23:50:10.047642 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:50:10.047654 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:50:10.047667 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:50:10.047680 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:50:10.047692 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:50:10.047705 kernel: .... node #0, CPUs: #1
Apr 17 23:50:10.050458 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Apr 17 23:50:10.050475 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:50:10.050489 kernel: smpboot: Max logical packages: 16
Apr 17 23:50:10.050502 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 17 23:50:10.050514 kernel: devtmpfs: initialized
Apr 17 23:50:10.050527 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:50:10.050540 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:50:10.050553 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 17 23:50:10.050565 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:50:10.050587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:50:10.050600 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:50:10.050613 kernel: audit: type=2000 audit(1776469809.043:1): state=initialized audit_enabled=0 res=1
Apr 17 23:50:10.050626 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:50:10.050638 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:50:10.050651 kernel: cpuidle: using governor menu
Apr 17 23:50:10.050664 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:50:10.050677 kernel: dca service started, version 1.12.1
Apr 17 23:50:10.050690 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:50:10.050708 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:50:10.050763 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:50:10.050794 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:50:10.050807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:50:10.050820 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:50:10.050833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:50:10.050845 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:50:10.050858 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:50:10.050871 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:50:10.050891 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:50:10.050905 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:50:10.050918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:50:10.050930 kernel: ACPI: Interpreter enabled
Apr 17 23:50:10.050943 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:50:10.050956 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:50:10.050969 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:50:10.050981 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:50:10.050994 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:50:10.051012 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:50:10.051297 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:50:10.051492 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:50:10.051670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:50:10.051690 kernel: PCI host bridge to bus 0000:00
Apr 17 23:50:10.052547 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:50:10.053581 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:50:10.053808 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:50:10.053980 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 17 23:50:10.054142 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:50:10.054318 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Apr 17 23:50:10.054481 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:50:10.054700 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:50:10.054936 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Apr 17 23:50:10.055118 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Apr 17 23:50:10.055309 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Apr 17 23:50:10.055488 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Apr 17 23:50:10.055665 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:50:10.057912 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.058102 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Apr 17 23:50:10.058334 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.058519 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Apr 17 23:50:10.059810 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.060028 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Apr 17 23:50:10.060266 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.060446 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Apr 17 23:50:10.060640 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.061942 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Apr 17 23:50:10.062158 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.062360 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Apr 17 23:50:10.062547 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.065820 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Apr 17 23:50:10.066035 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 17 23:50:10.066214 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Apr 17 23:50:10.066415 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:50:10.066593 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 17 23:50:10.066823 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Apr 17 23:50:10.067001 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 17 23:50:10.067175 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Apr 17 23:50:10.067385 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:50:10.067561 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 17 23:50:10.067755 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Apr 17 23:50:10.067932 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Apr 17 23:50:10.068121 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:50:10.068314 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:50:10.068508 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:50:10.068693 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Apr 17 23:50:10.070885 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Apr 17 23:50:10.071105 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:50:10.071305 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:50:10.071511 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Apr 17 23:50:10.071700 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Apr 17 23:50:10.071910 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 17 23:50:10.072089 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 17 23:50:10.072291 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 17 23:50:10.072494 kernel: pci_bus 0000:02: extended config space not accessible
Apr 17 23:50:10.072697 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Apr 17 23:50:10.073848 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Apr 17 23:50:10.074046 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 17 23:50:10.074238 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 17 23:50:10.074438 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 17 23:50:10.074620 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Apr 17 23:50:10.074830 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 17 23:50:10.075010 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 17 23:50:10.075188 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 17 23:50:10.075396 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 17 23:50:10.075593 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 17 23:50:10.075850 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 17 23:50:10.076026 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 17 23:50:10.076200 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 17 23:50:10.076398 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 17 23:50:10.076571 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 17 23:50:10.076759 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 17 23:50:10.076944 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 17 23:50:10.077120 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 17 23:50:10.077308 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 17 23:50:10.077487 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 17 23:50:10.077660 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 17 23:50:10.081897 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 17 23:50:10.082094 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 17 23:50:10.082289 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 17 23:50:10.082477 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 17 23:50:10.082656 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 17 23:50:10.082851 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 17 23:50:10.083026 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 17 23:50:10.083047 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:50:10.083060 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:50:10.083074 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:50:10.083086 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:50:10.083099 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:50:10.083121 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:50:10.083134 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:50:10.083147 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:50:10.083160 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:50:10.083172 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:50:10.083185 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:50:10.083198 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:50:10.083211 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:50:10.083235 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:50:10.083256 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:50:10.083269 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:50:10.083282 kernel: iommu: Default domain type: Translated
Apr 17 23:50:10.083296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:50:10.083309 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:50:10.083321 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:50:10.083334 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 23:50:10.083347 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Apr 17 23:50:10.083522 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:50:10.083707 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:50:10.085973 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:50:10.085996 kernel: vgaarb: loaded
Apr 17 23:50:10.086011 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:50:10.086031 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:50:10.086060 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:50:10.086082 kernel: pnp: PnP ACPI init
Apr 17 23:50:10.086296 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:50:10.086327 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:50:10.086341 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:50:10.086354 kernel: NET: Registered PF_INET protocol family
Apr 17 23:50:10.086367 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:50:10.086380 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:50:10.086393 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:50:10.086406 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:50:10.086419 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:50:10.086448 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:50:10.086462 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:50:10.086475 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:50:10.086488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:50:10.086501 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:50:10.086678 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Apr 17 23:50:10.086875 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 17 23:50:10.087053 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 17 23:50:10.087252 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 17 23:50:10.087430 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 17 23:50:10.087605 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 17 23:50:10.088923 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 17 23:50:10.089107 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 17 23:50:10.089302 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 17 23:50:10.089490 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 17 23:50:10.089665 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 17 23:50:10.089894 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 17 23:50:10.090069 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 17 23:50:10.090255 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 17 23:50:10.090430 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 17 23:50:10.090601 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 17 23:50:10.090827 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 17 23:50:10.091037 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 17 23:50:10.091211 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 17 23:50:10.091400 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 17 23:50:10.091587 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 17 23:50:10.091811 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 17 23:50:10.092000 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 17 23:50:10.092179 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 17 23:50:10.092368 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 17 23:50:10.092544 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 17 23:50:10.092785 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 17 23:50:10.092963 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 17 23:50:10.093135 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 17 23:50:10.093332 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 17 23:50:10.093506 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 17 23:50:10.093685 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 17 23:50:10.093883 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 17 23:50:10.094057 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 17 23:50:10.094242 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 17 23:50:10.094421 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 17 23:50:10.094596 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 17 23:50:10.094800 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 17 23:50:10.094981 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 17 23:50:10.095161 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 17 23:50:10.095366 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 17 23:50:10.095542 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 17 23:50:10.095755 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 17 23:50:10.095945 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 17 23:50:10.096129 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 17 23:50:10.096327 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 17 23:50:10.096500 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 17 23:50:10.096673 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 17 23:50:10.096876 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 17 23:50:10.097050 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 17 23:50:10.097219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:50:10.097393 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:50:10.097601 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:50:10.097872 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 17 23:50:10.098060 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:50:10.098217 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Apr 17 23:50:10.098405 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 17 23:50:10.098571 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Apr 17 23:50:10.098764 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 17 23:50:10.098944 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 17 23:50:10.099119 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Apr 17 23:50:10.099308 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 17 23:50:10.099473 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 17 23:50:10.099657 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Apr 17 23:50:10.099936 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 17 23:50:10.100169 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 17 23:50:10.100480 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Apr 17 23:50:10.100672 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 17 23:50:10.100889 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 17 23:50:10.101087 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Apr 17 23:50:10.101269 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 17 23:50:10.101446 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 17 23:50:10.101634 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Apr 17 23:50:10.101869 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 17 23:50:10.102046 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 17 23:50:10.102222 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Apr 17 23:50:10.102402 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 17 23:50:10.102568 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 17 23:50:10.102775 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Apr 17 23:50:10.102944 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 17 23:50:10.103121 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 17 23:50:10.103162 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:50:10.103177 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:50:10.103191 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:50:10.103205 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Apr 17 23:50:10.103218 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 17 23:50:10.103245 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Apr 17 23:50:10.103259 kernel: Initialise system trusted keyrings Apr 17 23:50:10.103273 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 17 23:50:10.103293 kernel: Key type asymmetric registered Apr 17 23:50:10.103306 kernel: Asymmetric key parser 'x509' registered Apr 17 23:50:10.103320 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:50:10.103333 kernel: io scheduler mq-deadline registered Apr 17 23:50:10.103346 kernel: io scheduler kyber registered Apr 17 23:50:10.103360 kernel: io scheduler bfq registered Apr 17 23:50:10.103539 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 17 23:50:10.103776 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 17 23:50:10.103964 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.104168 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 17 23:50:10.104368 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 17 23:50:10.104544 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.104742 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 17 23:50:10.104923 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 17 23:50:10.105145 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.105364 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 17 23:50:10.105542 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Apr 17 23:50:10.105756 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.105940 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 17 23:50:10.106124 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 17 23:50:10.106324 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.106509 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 17 23:50:10.106687 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 17 23:50:10.106899 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.107078 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 17 23:50:10.107267 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 17 23:50:10.107445 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.107630 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 17 23:50:10.107851 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 17 23:50:10.108028 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:50:10.108050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:50:10.108065 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 17 23:50:10.108079 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 17 23:50:10.108101 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:50:10.108115 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:50:10.108129 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Apr 17 23:50:10.108142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 23:50:10.108156 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 23:50:10.108358 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 17 23:50:10.108380 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:50:10.108540 kernel: rtc_cmos 00:03: registered as rtc0 Apr 17 23:50:10.108713 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T23:50:09 UTC (1776469809) Apr 17 23:50:10.108897 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 17 23:50:10.108918 kernel: intel_pstate: CPU model not supported Apr 17 23:50:10.108932 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:50:10.108946 kernel: Segment Routing with IPv6 Apr 17 23:50:10.108960 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 23:50:10.108974 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:50:10.108988 kernel: Key type dns_resolver registered Apr 17 23:50:10.109001 kernel: IPI shorthand broadcast: enabled Apr 17 23:50:10.109026 kernel: sched_clock: Marking stable (1322003782, 232429443)->(1679601325, -125168100) Apr 17 23:50:10.109040 kernel: registered taskstats version 1 Apr 17 23:50:10.109053 kernel: Loading compiled-in X.509 certificates Apr 17 23:50:10.109067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:50:10.109086 kernel: Key type .fscrypt registered Apr 17 23:50:10.109099 kernel: Key type fscrypt-provisioning registered Apr 17 23:50:10.109112 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 17 23:50:10.109125 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:50:10.109138 kernel: ima: No architecture policies found
Apr 17 23:50:10.109157 kernel: clk: Disabling unused clocks
Apr 17 23:50:10.109173 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:50:10.109186 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:50:10.109200 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:50:10.109213 kernel: Run /init as init process
Apr 17 23:50:10.109239 kernel: with arguments:
Apr 17 23:50:10.109253 kernel: /init
Apr 17 23:50:10.109266 kernel: with environment:
Apr 17 23:50:10.109279 kernel: HOME=/
Apr 17 23:50:10.109292 kernel: TERM=linux
Apr 17 23:50:10.109316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:50:10.109332 systemd[1]: Detected virtualization kvm.
Apr 17 23:50:10.109347 systemd[1]: Detected architecture x86-64.
Apr 17 23:50:10.109361 systemd[1]: Running in initrd.
Apr 17 23:50:10.109375 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:50:10.109389 systemd[1]: Hostname set to .
Apr 17 23:50:10.109404 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:50:10.109424 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:50:10.109439 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:10.109453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:10.109468 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:50:10.109483 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:50:10.109497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:50:10.109517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:50:10.109539 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:50:10.109554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:50:10.109568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:10.109583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:10.109597 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:50:10.109612 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:50:10.109626 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:50:10.109641 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:50:10.109661 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:50:10.109684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:50:10.109699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:50:10.109713 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:50:10.109786 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:50:10.109802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:50:10.109816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:50:10.109831 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:50:10.109845 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:50:10.109868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:50:10.109883 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:50:10.109898 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:50:10.109912 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:50:10.109927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:50:10.109941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:10.109956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:50:10.110020 systemd-journald[203]: Collecting audit messages is disabled.
Apr 17 23:50:10.110059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:10.110074 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:50:10.110095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:50:10.110111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:50:10.110126 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:50:10.110140 kernel: Bridge firewalling registered
Apr 17 23:50:10.110154 systemd-journald[203]: Journal started
Apr 17 23:50:10.110186 systemd-journald[203]: Runtime Journal (/run/log/journal/f35a8cc9a4c64762b677def7369f7fbb) is 4.7M, max 38.0M, 33.2M free.
Apr 17 23:50:10.039705 systemd-modules-load[204]: Inserted module 'overlay'
Apr 17 23:50:10.106762 systemd-modules-load[204]: Inserted module 'br_netfilter'
Apr 17 23:50:10.164748 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:50:10.166105 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:10.167143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:10.175952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:50:10.187936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:50:10.190213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:50:10.201831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:50:10.208072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:50:10.219020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:50:10.224928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:50:10.232985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:50:10.234108 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:50:10.241907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:50:10.259389 dracut-cmdline[239]: dracut-dracut-053
Apr 17 23:50:10.264318 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:50:10.278685 systemd-resolved[237]: Positive Trust Anchors:
Apr 17 23:50:10.278702 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:50:10.280808 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:50:10.285648 systemd-resolved[237]: Defaulting to hostname 'linux'.
Apr 17 23:50:10.288272 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:50:10.290189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:50:10.375774 kernel: SCSI subsystem initialized
Apr 17 23:50:10.387758 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:50:10.400759 kernel: iscsi: registered transport (tcp)
Apr 17 23:50:10.426790 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:50:10.426896 kernel: QLogic iSCSI HBA Driver
Apr 17 23:50:10.485049 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:50:10.492977 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:50:10.538337 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:50:10.538408 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:50:10.540922 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:50:10.588761 kernel: raid6: sse2x4 gen() 7653 MB/s
Apr 17 23:50:10.606761 kernel: raid6: sse2x2 gen() 5472 MB/s
Apr 17 23:50:10.625409 kernel: raid6: sse2x1 gen() 5459 MB/s
Apr 17 23:50:10.625474 kernel: raid6: using algorithm sse2x4 gen() 7653 MB/s
Apr 17 23:50:10.644440 kernel: raid6: .... xor() 5065 MB/s, rmw enabled
Apr 17 23:50:10.644515 kernel: raid6: using ssse3x2 recovery algorithm
Apr 17 23:50:10.670755 kernel: xor: automatically using best checksumming function avx
Apr 17 23:50:10.865116 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:50:10.879398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:50:10.887981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:50:10.914509 systemd-udevd[422]: Using default interface naming scheme 'v255'.
Apr 17 23:50:10.923048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:50:10.933349 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:50:10.952671 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 17 23:50:10.994706 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:50:11.000909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:50:11.131255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:50:11.141394 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:50:11.170826 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:50:11.173793 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:50:11.174565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:50:11.176141 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:50:11.184517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:50:11.213117 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:50:11.265806 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Apr 17 23:50:11.289775 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 17 23:50:11.291736 kernel: ACPI: bus type USB registered
Apr 17 23:50:11.291773 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:50:11.301766 kernel: usbcore: registered new interface driver usbfs
Apr 17 23:50:11.309320 kernel: AVX version of gcm_enc/dec engaged.
Apr 17 23:50:11.309391 kernel: usbcore: registered new interface driver hub
Apr 17 23:50:11.311736 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:50:11.315744 kernel: usbcore: registered new device driver usb
Apr 17 23:50:11.322294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:50:11.322331 kernel: GPT:17805311 != 125829119
Apr 17 23:50:11.322362 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:50:11.322380 kernel: GPT:17805311 != 125829119
Apr 17 23:50:11.322396 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:50:11.325105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:50:11.335242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:50:11.336939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:50:11.343020 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:50:11.346845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:50:11.347049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:11.350473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:11.360041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:11.376853 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Apr 17 23:50:11.377243 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Apr 17 23:50:11.382162 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 17 23:50:11.392737 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Apr 17 23:50:11.393024 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Apr 17 23:50:11.393261 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Apr 17 23:50:11.393473 kernel: hub 1-0:1.0: USB hub found
Apr 17 23:50:11.393705 kernel: hub 1-0:1.0: 4 ports detected
Apr 17 23:50:11.397759 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 17 23:50:11.405743 kernel: hub 2-0:1.0: USB hub found
Apr 17 23:50:11.406023 kernel: hub 2-0:1.0: 4 ports detected
Apr 17 23:50:11.412636 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (472)
Apr 17 23:50:11.439770 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Apr 17 23:50:11.442703 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 23:50:11.538497 kernel: libata version 3.00 loaded.
Apr 17 23:50:11.538534 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 23:50:11.538938 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 23:50:11.538960 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 17 23:50:11.539185 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 23:50:11.539418 kernel: scsi host0: ahci
Apr 17 23:50:11.539650 kernel: scsi host1: ahci
Apr 17 23:50:11.540841 kernel: scsi host2: ahci
Apr 17 23:50:11.541079 kernel: scsi host3: ahci
Apr 17 23:50:11.541300 kernel: scsi host4: ahci
Apr 17 23:50:11.541504 kernel: scsi host5: ahci
Apr 17 23:50:11.541751 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Apr 17 23:50:11.541775 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Apr 17 23:50:11.541793 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Apr 17 23:50:11.541819 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Apr 17 23:50:11.541838 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Apr 17 23:50:11.541856 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Apr 17 23:50:11.539317 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 23:50:11.542263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:11.556756 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 23:50:11.564333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 23:50:11.571536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:50:11.586043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:50:11.590954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:50:11.596109 disk-uuid[564]: Primary Header is updated. Apr 17 23:50:11.596109 disk-uuid[564]: Secondary Entries is updated. Apr 17 23:50:11.596109 disk-uuid[564]: Secondary Header is updated. Apr 17 23:50:11.604193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:11.612744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:11.631409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:50:11.640741 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 17 23:50:11.784767 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 17 23:50:11.805207 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.805318 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.805746 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.808732 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.811166 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.811219 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:11.822667 kernel: usbcore: registered new interface driver usbhid Apr 17 23:50:11.822712 kernel: usbhid: USB HID core driver Apr 17 23:50:11.830885 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 17 23:50:11.830944 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Apr 17 23:50:12.620775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:12.620961 disk-uuid[566]: The operation has completed successfully. Apr 17 23:50:12.696779 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:50:12.696983 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 17 23:50:12.710161 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:50:12.729295 sh[585]: Success Apr 17 23:50:12.747907 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Apr 17 23:50:12.816026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:50:12.820242 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:50:12.823380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:50:12.847845 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:50:12.847922 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:12.847944 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:50:12.850277 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:50:12.853518 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:50:12.863335 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:50:12.864969 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:50:12.875985 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:50:12.880931 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:50:12.902869 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:12.902934 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:12.902956 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:50:12.909750 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:50:12.924255 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 17 23:50:12.925118 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:12.934804 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:50:12.941877 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:50:13.033197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:50:13.049961 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:50:13.084388 systemd-networkd[767]: lo: Link UP Apr 17 23:50:13.084755 systemd-networkd[767]: lo: Gained carrier Apr 17 23:50:13.089230 systemd-networkd[767]: Enumeration completed Apr 17 23:50:13.089386 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:50:13.090293 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:50:13.090298 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:50:13.092199 systemd[1]: Reached target network.target - Network. Apr 17 23:50:13.095957 systemd-networkd[767]: eth0: Link UP Apr 17 23:50:13.095964 systemd-networkd[767]: eth0: Gained carrier Apr 17 23:50:13.095982 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:50:13.103943 ignition[677]: Ignition 2.19.0 Apr 17 23:50:13.103966 ignition[677]: Stage: fetch-offline Apr 17 23:50:13.104050 ignition[677]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:13.106063 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:50:13.104076 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:13.104270 ignition[677]: parsed url from cmdline: ""
Apr 17 23:50:13.104277 ignition[677]: no config URL provided
Apr 17 23:50:13.104287 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:50:13.104303 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:50:13.104312 ignition[677]: failed to fetch config: resource requires networking
Apr 17 23:50:13.104688 ignition[677]: Ignition finished successfully
Apr 17 23:50:13.115542 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:50:13.118786 systemd-networkd[767]: eth0: DHCPv4 address 10.244.23.222/30, gateway 10.244.23.221 acquired from 10.244.23.221
Apr 17 23:50:13.137931 ignition[775]: Ignition 2.19.0
Apr 17 23:50:13.137953 ignition[775]: Stage: fetch
Apr 17 23:50:13.138222 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:13.138243 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:13.138424 ignition[775]: parsed url from cmdline: ""
Apr 17 23:50:13.138431 ignition[775]: no config URL provided
Apr 17 23:50:13.138441 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:50:13.138458 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:50:13.138615 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 17 23:50:13.138664 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 17 23:50:13.138827 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Apr 17 23:50:13.153950 ignition[775]: GET result: OK
Apr 17 23:50:13.154111 ignition[775]: parsing config with SHA512: d53959dc15aeaed3dae3a6797a025de5c2b259743074d305c514bb8b9af0916f94f71986cca23f085923ef542a255d18a563e4789e3d219e4256ce470072bfb6
Apr 17 23:50:13.160399 unknown[775]: fetched base config from "system"
Apr 17 23:50:13.160421 unknown[775]: fetched base config from "system"
Apr 17 23:50:13.161000 ignition[775]: fetch: fetch complete
Apr 17 23:50:13.160430 unknown[775]: fetched user config from "openstack"
Apr 17 23:50:13.161009 ignition[775]: fetch: fetch passed
Apr 17 23:50:13.163027 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:50:13.161074 ignition[775]: Ignition finished successfully
Apr 17 23:50:13.176133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:50:13.202076 ignition[782]: Ignition 2.19.0
Apr 17 23:50:13.202095 ignition[782]: Stage: kargs
Apr 17 23:50:13.202353 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:13.202373 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:13.203525 ignition[782]: kargs: kargs passed
Apr 17 23:50:13.208252 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:50:13.203599 ignition[782]: Ignition finished successfully
Apr 17 23:50:13.214932 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:50:13.234393 ignition[790]: Ignition 2.19.0
Apr 17 23:50:13.234418 ignition[790]: Stage: disks
Apr 17 23:50:13.234702 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:13.234747 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:13.237164 ignition[790]: disks: disks passed
Apr 17 23:50:13.239277 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:50:13.237276 ignition[790]: Ignition finished successfully
Apr 17 23:50:13.240363 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:50:13.241093 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:50:13.241849 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:50:13.242598 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:50:13.244044 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:50:13.250973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:50:13.273466 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:50:13.279676 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:50:13.286253 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:50:13.404765 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:50:13.406382 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:50:13.408499 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:50:13.421875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:50:13.424711 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:50:13.426629 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:50:13.429926 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Apr 17 23:50:13.431951 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:50:13.434787 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Apr 17 23:50:13.434830 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:50:13.434851 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:50:13.434870 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:50:13.433367 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:50:13.446772 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:50:13.449449 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:50:13.451458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:50:13.464156 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:50:13.558207 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:50:13.566004 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:50:13.574384 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:50:13.581286 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:50:13.691155 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:50:13.705897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:50:13.711999 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:50:13.722751 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:50:13.756617 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:50:13.759050 ignition[922]: INFO : Ignition 2.19.0
Apr 17 23:50:13.759050 ignition[922]: INFO : Stage: mount
Apr 17 23:50:13.761006 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:13.761006 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:13.764808 ignition[922]: INFO : mount: mount passed
Apr 17 23:50:13.764808 ignition[922]: INFO : Ignition finished successfully
Apr 17 23:50:13.763779 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:50:13.845373 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:50:15.018561 systemd-networkd[767]: eth0: Gained IPv6LL
Apr 17 23:50:16.529527 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:5f7:24:19ff:fef4:17de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:5f7:24:19ff:fef4:17de/64 assigned by NDisc.
Apr 17 23:50:16.529545 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Apr 17 23:50:20.619864 coreos-metadata[808]: Apr 17 23:50:20.619 WARN failed to locate config-drive, using the metadata service API instead
Apr 17 23:50:20.645302 coreos-metadata[808]: Apr 17 23:50:20.645 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 17 23:50:20.658039 coreos-metadata[808]: Apr 17 23:50:20.657 INFO Fetch successful
Apr 17 23:50:20.658964 coreos-metadata[808]: Apr 17 23:50:20.658 INFO wrote hostname srv-mc367.gb1.brightbox.com to /sysroot/etc/hostname
Apr 17 23:50:20.661324 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Apr 17 23:50:20.661507 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Apr 17 23:50:20.669859 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:50:20.692950 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:50:20.706789 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Apr 17 23:50:20.708783 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:50:20.710107 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:50:20.711967 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:50:20.718823 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:50:20.720138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:50:20.760677 ignition[957]: INFO : Ignition 2.19.0
Apr 17 23:50:20.762802 ignition[957]: INFO : Stage: files
Apr 17 23:50:20.762802 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:20.762802 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:20.765504 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:50:20.765504 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:50:20.765504 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:50:20.770223 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:50:20.771255 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:50:20.772278 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:50:20.771595 unknown[957]: wrote ssh authorized keys file for user: core
Apr 17 23:50:20.774589 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:50:20.774589 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:50:21.020606 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:50:21.321215 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:50:21.321215 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:50:21.324150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 23:50:21.738487 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:50:24.753872 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:50:24.753872 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:50:24.758348 ignition[957]: INFO : files: files passed
Apr 17 23:50:24.758348 ignition[957]: INFO : Ignition finished successfully
Apr 17 23:50:24.759048 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:50:24.771098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:50:24.777167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:50:24.778749 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:50:24.778930 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:50:24.803305 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:24.803305 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:24.806841 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:24.809392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:50:24.811002 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:50:24.818020 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:50:24.862750 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:50:24.862960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:50:24.864862 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:50:24.866187 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:50:24.867954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:50:24.873193 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:50:24.899692 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:50:24.904938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:50:24.930377 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:50:24.931372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:50:24.933090 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:50:24.934680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:50:24.934913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:50:24.936841 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:50:24.937845 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:50:24.939353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:50:24.940746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:50:24.942110 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:50:24.943875 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:50:24.945339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:50:24.947055 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:50:24.948569 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:50:24.950175 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:50:24.951709 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:50:24.952088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:50:24.953665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:24.954697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:24.956283 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:50:24.956486 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:24.958003 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:50:24.958254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:50:24.960138 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:50:24.960311 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:50:24.961258 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:50:24.961413 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:50:24.970044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:50:24.971870 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:50:24.972592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:50:24.972805 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:50:24.978948 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:50:24.979143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:50:24.991931 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:50:24.992109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:50:25.005181 ignition[1009]: INFO : Ignition 2.19.0
Apr 17 23:50:25.005181 ignition[1009]: INFO : Stage: umount
Apr 17 23:50:25.015886 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:25.015886 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 17 23:50:25.015886 ignition[1009]: INFO : umount: umount passed
Apr 17 23:50:25.015886 ignition[1009]: INFO : Ignition finished successfully
Apr 17 23:50:25.018147 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:50:25.019134 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:50:25.019765 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:50:25.024492 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:50:25.024684 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:50:25.027001 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:50:25.027104 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:50:25.028573 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:50:25.028649 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:50:25.030017 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:50:25.030099 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:50:25.031456 systemd[1]: Stopped target network.target - Network.
Apr 17 23:50:25.032777 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:50:25.032855 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:50:25.034273 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:50:25.035610 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:50:25.037958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:25.039092 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:50:25.040459 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:50:25.042074 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:50:25.042153 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:50:25.043755 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:50:25.043820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:50:25.045142 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:50:25.045225 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:50:25.046692 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:50:25.046806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:50:25.048147 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:50:25.048217 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:50:25.049958 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:50:25.053292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:50:25.055134 systemd-networkd[767]: eth0: DHCPv6 lease lost
Apr 17 23:50:25.058034 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:50:25.058359 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:50:25.061172 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:50:25.061358 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:50:25.068438 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:50:25.068756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:50:25.075901 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:50:25.076705 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:50:25.076818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:50:25.079540 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:50:25.079626 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:50:25.084168 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:50:25.084283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:25.085691 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:50:25.085925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:50:25.087427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:50:25.106560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:50:25.107154 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:50:25.108963 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:50:25.109131 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:50:25.111041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:50:25.111123 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:50:25.112375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:50:25.112435 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:50:25.113909 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:50:25.114008 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:50:25.121697 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:50:25.122038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:50:25.124919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:50:25.125040 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:50:25.132972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:50:25.133812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:50:25.133899 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:50:25.135782 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:50:25.135857 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:50:25.138911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:50:25.139024 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:25.140681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:50:25.140798 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:25.160074 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:50:25.160287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:50:25.163396 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:50:25.176077 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:50:25.189623 systemd[1]: Switching root.
Apr 17 23:50:25.227996 systemd-journald[203]: Journal stopped
Apr 17 23:50:26.747442 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:50:26.747549 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:50:26.747585 kernel: SELinux: policy capability open_perms=1
Apr 17 23:50:26.747615 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:50:26.747635 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:50:26.747661 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:50:26.747695 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:50:26.747714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:50:26.747757 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:50:26.747776 kernel: audit: type=1403 audit(1776469825.479:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:50:26.747805 systemd[1]: Successfully loaded SELinux policy in 53.177ms.
Apr 17 23:50:26.747849 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.775ms.
Apr 17 23:50:26.747872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:50:26.747892 systemd[1]: Detected virtualization kvm.
Apr 17 23:50:26.747926 systemd[1]: Detected architecture x86-64.
Apr 17 23:50:26.747949 systemd[1]: Detected first boot.
Apr 17 23:50:26.747982 systemd[1]: Hostname set to .
Apr 17 23:50:26.748003 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:50:26.748030 zram_generator::config[1052]: No configuration found.
Apr 17 23:50:26.748051 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:50:26.748077 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:50:26.748098 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:50:26.749734 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:50:26.749773 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:50:26.749803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:50:26.749824 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:50:26.749844 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:50:26.749875 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:50:26.749910 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:50:26.749932 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:50:26.749964 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:50:26.750002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:26.750025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:26.750045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:50:26.750065 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:50:26.750086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:50:26.750107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:50:26.750133 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:50:26.750162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:26.750183 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:50:26.750215 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 17 23:50:26.750237 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 17 23:50:26.750257 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:50:26.750277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:50:26.750297 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:50:26.750317 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:50:26.750349 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:50:26.750370 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:50:26.750390 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:50:26.750409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:50:26.750438 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:50:26.750458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:50:26.750490 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:50:26.750523 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 17 23:50:26.750568 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:50:26.750590 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:50:26.750611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:50:26.750638 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 17 23:50:26.750658 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:50:26.750679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 17 23:50:26.750699 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:50:26.752136 systemd[1]: Reached target machines.target - Containers. Apr 17 23:50:26.752161 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:50:26.752183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:50:26.752204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:50:26.752225 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:50:26.752246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:50:26.752266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:50:26.752285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:50:26.752323 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:50:26.752344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:50:26.752366 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:50:26.752393 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 17 23:50:26.752415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 17 23:50:26.752435 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 17 23:50:26.752455 systemd[1]: Stopped systemd-fsck-usr.service. Apr 17 23:50:26.752475 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 17 23:50:26.752495 kernel: loop: module loaded
Apr 17 23:50:26.752529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:50:26.752550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:50:26.752571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:50:26.752600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:50:26.752620 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:50:26.752640 systemd[1]: Stopped verity-setup.service.
Apr 17 23:50:26.752661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:26.752681 kernel: fuse: init (API version 7.39)
Apr 17 23:50:26.752712 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:50:26.752759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:50:26.752782 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:50:26.752802 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:50:26.752837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:50:26.752871 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:50:26.752899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:50:26.752987 systemd-journald[1152]: Collecting audit messages is disabled.
Apr 17 23:50:26.753043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:26.753065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:50:26.753086 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:50:26.753106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:50:26.753140 systemd-journald[1152]: Journal started
Apr 17 23:50:26.753179 systemd-journald[1152]: Runtime Journal (/run/log/journal/f35a8cc9a4c64762b677def7369f7fbb) is 4.7M, max 38.0M, 33.2M free.
Apr 17 23:50:26.335195 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:50:26.359558 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:50:26.360309 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:50:26.756777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:50:26.761801 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:50:26.762444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:50:26.762746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:50:26.767933 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:50:26.768162 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:50:26.769316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:50:26.769600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:50:26.771069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:26.772373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:50:26.773532 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:50:26.796641 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:50:26.809873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:50:26.815756 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:50:26.816587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:50:26.816644 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:50:26.819915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:50:26.830006 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:50:26.835082 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:50:26.836047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:50:26.845799 kernel: ACPI: bus type drm_connector registered
Apr 17 23:50:26.845942 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:50:26.854963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:50:26.855978 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:50:26.861283 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:50:26.864867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:50:26.872936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:50:26.879188 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:50:26.889855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:50:26.896603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:50:26.896916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:50:26.898139 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:50:26.900337 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:50:26.903019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:50:26.917579 systemd-journald[1152]: Time spent on flushing to /var/log/journal/f35a8cc9a4c64762b677def7369f7fbb is 188.743ms for 1138 entries.
Apr 17 23:50:26.917579 systemd-journald[1152]: System Journal (/var/log/journal/f35a8cc9a4c64762b677def7369f7fbb) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:50:27.140991 systemd-journald[1152]: Received client request to flush runtime journal.
Apr 17 23:50:27.141101 kernel: loop0: detected capacity change from 0 to 217752
Apr 17 23:50:27.141157 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:50:27.143142 kernel: loop1: detected capacity change from 0 to 142488
Apr 17 23:50:27.143195 kernel: loop2: detected capacity change from 0 to 140768
Apr 17 23:50:26.972202 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:50:26.973272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:50:26.980432 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:50:27.040026 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Apr 17 23:50:27.040048 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Apr 17 23:50:27.058491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:50:27.082039 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:50:27.084783 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:50:27.101546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:50:27.116321 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:50:27.147994 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:50:27.197198 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:50:27.207645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:50:27.225996 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:50:27.234161 kernel: loop3: detected capacity change from 0 to 8
Apr 17 23:50:27.249265 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:50:27.265747 kernel: loop4: detected capacity change from 0 to 217752
Apr 17 23:50:27.273538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:50:27.291974 kernel: loop5: detected capacity change from 0 to 142488
Apr 17 23:50:27.321832 kernel: loop6: detected capacity change from 0 to 140768
Apr 17 23:50:27.342545 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 17 23:50:27.343140 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 17 23:50:27.360141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:50:27.367585 kernel: loop7: detected capacity change from 0 to 8
Apr 17 23:50:27.369066 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Apr 17 23:50:27.369970 (sd-merge)[1212]: Merged extensions into '/usr'.
Apr 17 23:50:27.384920 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:50:27.384963 systemd[1]: Reloading...
Apr 17 23:50:27.517811 zram_generator::config[1240]: No configuration found.
Apr 17 23:50:27.716560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:50:27.786210 systemd[1]: Reloading finished in 400 ms.
Apr 17 23:50:27.800460 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:50:27.819582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:50:27.822187 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:50:27.838077 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:50:27.843743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:50:27.855464 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:50:27.855686 systemd[1]: Reloading...
Apr 17 23:50:27.900456 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:50:27.901131 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:50:27.902641 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:50:27.905146 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Apr 17 23:50:27.905270 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Apr 17 23:50:27.911886 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:50:27.911905 systemd-tmpfiles[1297]: Skipping /boot
Apr 17 23:50:27.936952 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:50:27.936975 systemd-tmpfiles[1297]: Skipping /boot
Apr 17 23:50:27.999768 zram_generator::config[1323]: No configuration found.
Apr 17 23:50:28.196058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:50:28.265897 systemd[1]: Reloading finished in 409 ms.
Apr 17 23:50:28.292582 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:50:28.297329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:50:28.314991 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:50:28.329689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:50:28.337999 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:50:28.345651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:50:28.355009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:50:28.365000 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:50:28.375026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.376514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:50:28.385198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:50:28.393093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:50:28.398032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:50:28.399163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:50:28.406815 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:50:28.408268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.410326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:50:28.411809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:50:28.414791 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:50:28.424611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.425011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:50:28.433853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:50:28.434898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:50:28.444016 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:50:28.444960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.458192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.458594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:50:28.466749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:50:28.468814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:50:28.469142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:28.470663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:50:28.472365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:50:28.483863 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:50:28.484248 augenrules[1412]: No rules
Apr 17 23:50:28.487405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:50:28.490318 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:50:28.509416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:50:28.513541 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:50:28.513805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:50:28.523109 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:50:28.525198 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:50:28.545690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:50:28.546468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:50:28.548554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:50:28.557594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:50:28.557855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:50:28.560378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:50:28.567015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:50:28.568423 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:50:28.580609 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Apr 17 23:50:28.631412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:50:28.641016 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:50:28.641915 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:50:28.644190 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:50:28.705704 systemd-resolved[1391]: Positive Trust Anchors:
Apr 17 23:50:28.709238 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:50:28.709449 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:50:28.725133 systemd-resolved[1391]: Using system hostname 'srv-mc367.gb1.brightbox.com'.
Apr 17 23:50:28.729535 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:50:28.730872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:50:28.749868 systemd-networkd[1433]: lo: Link UP
Apr 17 23:50:28.750756 systemd-networkd[1433]: lo: Gained carrier
Apr 17 23:50:28.752294 systemd-networkd[1433]: Enumeration completed
Apr 17 23:50:28.752593 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:50:28.754100 systemd[1]: Reached target network.target - Network.
Apr 17 23:50:28.762931 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:50:28.777185 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:50:28.843825 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1446)
Apr 17 23:50:28.922844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:50:28.927200 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:50:28.927213 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:50:28.933655 systemd-networkd[1433]: eth0: Link UP
Apr 17 23:50:28.933671 systemd-networkd[1433]: eth0: Gained carrier
Apr 17 23:50:28.933704 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:50:28.935050 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:50:28.953812 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 23:50:28.962877 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:50:28.974849 systemd-networkd[1433]: eth0: DHCPv4 address 10.244.23.222/30, gateway 10.244.23.221 acquired from 10.244.23.221
Apr 17 23:50:28.976301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:50:28.976752 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection.
Apr 17 23:50:28.997170 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:50:29.043754 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 17 23:50:29.056751 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:50:29.066781 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:50:29.067134 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:50:29.127478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:29.295140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:29.329530 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:50:29.335999 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:50:29.364053 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:50:29.401536 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:50:29.403544 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:29.404430 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:50:29.405416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:50:29.406487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:50:29.407620 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:50:29.408564 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:50:29.409425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:50:29.410253 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:50:29.410315 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:50:29.411100 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:50:29.412861 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:50:29.422492 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:50:29.433974 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:50:29.437237 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:50:29.439044 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:50:29.440159 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:50:29.441034 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:50:29.441932 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:50:29.441984 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:50:29.456467 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:50:29.460996 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:50:29.465019 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:50:29.467014 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:50:29.470950 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:50:29.481984 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:50:29.483882 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:50:29.487929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:50:29.495861 jq[1477]: false
Apr 17 23:50:29.497860 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:50:29.503162 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:50:29.513977 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:50:29.526561 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:50:29.528978 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:50:29.530874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:50:29.537958 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:50:29.542920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:50:29.550039 extend-filesystems[1478]: Found loop4
Apr 17 23:50:29.550039 extend-filesystems[1478]: Found loop5
Apr 17 23:50:29.550039 extend-filesystems[1478]: Found loop6
Apr 17 23:50:29.550039 extend-filesystems[1478]: Found loop7
Apr 17 23:50:29.603852 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Apr 17 23:50:29.545167 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda1
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda2
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda3
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found usr
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda4
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda6
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda7
Apr 17 23:50:29.608210 extend-filesystems[1478]: Found vda9
Apr 17 23:50:29.608210 extend-filesystems[1478]: Checking size of /dev/vda9
Apr 17 23:50:29.608210 extend-filesystems[1478]: Resized partition /dev/vda9
Apr 17 23:50:29.558442 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:50:29.664054 extend-filesystems[1500]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:50:29.558810 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:50:29.584312 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:50:29.585374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:50:29.683483 tar[1492]: linux-amd64/LICENSE
Apr 17 23:50:29.683483 tar[1492]: linux-amd64/helm
Apr 17 23:50:29.690851 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:50:29.697052 jq[1488]: true
Apr 17 23:50:29.708040 update_engine[1487]: I20260417 23:50:29.705878 1487 main.cc:92] Flatcar Update Engine starting
Apr 17 23:50:29.709489 dbus-daemon[1476]: [system] SELinux support is enabled
Apr 17 23:50:29.711823 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:50:29.717344 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 23:50:29.717398 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:50:29.717842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:50:29.717897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:50:29.720268 systemd-logind[1486]: New seat seat0.
Apr 17 23:50:29.724161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:50:29.724197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:50:29.727225 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:50:29.729451 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:50:29.729716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:50:29.737612 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 17 23:50:29.741609 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:50:29.759048 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 17 23:50:29.770500 jq[1511]: true
Apr 17 23:50:29.780720 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:50:29.783011 update_engine[1487]: I20260417 23:50:29.782063 1487 update_check_scheduler.cc:74] Next update check in 6m49s
Apr 17 23:50:29.788783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1442)
Apr 17 23:50:29.817984 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:50:29.992795 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 17 23:50:30.021816 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 23:50:30.021816 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 17 23:50:30.021816 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 17 23:50:30.039217 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Apr 17 23:50:30.040400 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:50:30.040780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:50:30.065651 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:50:30.067580 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:50:30.084841 systemd[1]: Starting sshkeys.service...
Apr 17 23:50:30.130658 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 17 23:50:30.131166 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 17 23:50:30.138035 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 17 23:50:30.146649 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:50:30.154214 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:50:30.164205 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 17 23:50:30.188061 systemd-networkd[1433]: eth0: Gained IPv6LL
Apr 17 23:50:30.193787 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection.
Apr 17 23:50:30.201914 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:50:30.205044 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:50:30.212350 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:50:30.224214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:50:30.228199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:50:30.245295 polkitd[1548]: Started polkitd version 121 Apr 17 23:50:30.276086 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:50:30.276339 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:50:30.277150 polkitd[1548]: Finished loading, compiling and executing 2 rules Apr 17 23:50:30.281102 containerd[1503]: time="2026-04-17T23:50:30.280961216Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:50:30.287279 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:50:30.287965 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:50:30.287677 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:50:30.323072 systemd-hostnamed[1516]: Hostname set to (static) Apr 17 23:50:30.341577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:50:30.383331 containerd[1503]: time="2026-04-17T23:50:30.383257406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.389391295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.389450960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.389489894Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.389855858Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.389909548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.390041287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:50:30.390971 containerd[1503]: time="2026-04-17T23:50:30.390066067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.393729 containerd[1503]: time="2026-04-17T23:50:30.392357295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:50:30.395073 containerd[1503]: time="2026-04-17T23:50:30.395037179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.395129 containerd[1503]: time="2026-04-17T23:50:30.395096799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:50:30.395176 containerd[1503]: time="2026-04-17T23:50:30.395121301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.395342 containerd[1503]: time="2026-04-17T23:50:30.395308425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:50:30.396979 containerd[1503]: time="2026-04-17T23:50:30.396947638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:50:30.397167 containerd[1503]: time="2026-04-17T23:50:30.397133026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:50:30.397219 containerd[1503]: time="2026-04-17T23:50:30.397166689Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:50:30.397333 containerd[1503]: time="2026-04-17T23:50:30.397306460Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:50:30.397448 containerd[1503]: time="2026-04-17T23:50:30.397422518Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404008532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404098222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404132234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404158870Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404199888Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404404646Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:50:30.404756 containerd[1503]: time="2026-04-17T23:50:30.404746455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:50:30.405051 containerd[1503]: time="2026-04-17T23:50:30.404953440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:50:30.405051 containerd[1503]: time="2026-04-17T23:50:30.404981640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:50:30.405051 containerd[1503]: time="2026-04-17T23:50:30.405003912Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:50:30.405051 containerd[1503]: time="2026-04-17T23:50:30.405026624Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405054305Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405082665Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405105533Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405127281Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405146787Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405187 containerd[1503]: time="2026-04-17T23:50:30.405170688Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405189902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405232748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405256220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405290321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405313700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405333130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405364521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405393 containerd[1503]: time="2026-04-17T23:50:30.405387721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405414651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405437216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405466753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405484417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405509220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405541816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405568124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405608298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405629861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.405812 containerd[1503]: time="2026-04-17T23:50:30.405653108Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405824648Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405860987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405880131Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405914251Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405947862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.405997073Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.406024548Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:50:30.406180 containerd[1503]: time="2026-04-17T23:50:30.406045538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:50:30.410791 containerd[1503]: time="2026-04-17T23:50:30.408576409Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:50:30.410791 containerd[1503]: time="2026-04-17T23:50:30.408683804Z" level=info msg="Connect containerd service" Apr 17 23:50:30.410791 containerd[1503]: time="2026-04-17T23:50:30.410470171Z" level=info msg="using legacy CRI server" Apr 17 23:50:30.410791 containerd[1503]: time="2026-04-17T23:50:30.410496694Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:50:30.410791 containerd[1503]: time="2026-04-17T23:50:30.410712669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:50:30.413238 containerd[1503]: time="2026-04-17T23:50:30.413200205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:50:30.415046 containerd[1503]: time="2026-04-17T23:50:30.413467555Z" level=info msg="Start subscribing containerd event" Apr 17 23:50:30.415046 containerd[1503]: time="2026-04-17T23:50:30.413587071Z" level=info msg="Start recovering state" Apr 17 23:50:30.415046 containerd[1503]: time="2026-04-17T23:50:30.413977123Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 17 23:50:30.415046 containerd[1503]: time="2026-04-17T23:50:30.414063283Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:50:30.415748 containerd[1503]: time="2026-04-17T23:50:30.415713528Z" level=info msg="Start event monitor" Apr 17 23:50:30.416025 containerd[1503]: time="2026-04-17T23:50:30.415878299Z" level=info msg="Start snapshots syncer" Apr 17 23:50:30.416025 containerd[1503]: time="2026-04-17T23:50:30.415924378Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:50:30.416025 containerd[1503]: time="2026-04-17T23:50:30.415946568Z" level=info msg="Start streaming server" Apr 17 23:50:30.424366 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:50:30.428064 containerd[1503]: time="2026-04-17T23:50:30.427799982Z" level=info msg="containerd successfully booted in 0.163328s" Apr 17 23:50:30.630624 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:50:30.682187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:50:30.694202 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:50:30.719122 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:50:30.719681 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:50:30.730991 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:50:30.759607 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:50:30.772135 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:50:30.783352 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:50:30.785592 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:50:30.951038 tar[1492]: linux-amd64/README.md Apr 17 23:50:30.967536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 17 23:50:31.427050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:50:31.449752 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:50:31.694212 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Apr 17 23:50:31.697293 systemd-networkd[1433]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:5f7:24:19ff:fef4:17de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:5f7:24:19ff:fef4:17de/64 assigned by NDisc. Apr 17 23:50:31.697678 systemd-networkd[1433]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 17 23:50:31.997824 kubelet[1598]: E0417 23:50:31.997634 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:50:32.000120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:50:32.000463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:50:32.001059 systemd[1]: kubelet.service: Consumed 1.020s CPU time. Apr 17 23:50:32.746977 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Apr 17 23:50:34.191218 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:50:34.202276 systemd[1]: Started sshd@0-10.244.23.222:22-4.175.71.9:57508.service - OpenSSH per-connection server daemon (4.175.71.9:57508). 
Apr 17 23:50:34.340543 sshd[1610]: Accepted publickey for core from 4.175.71.9 port 57508 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:50:34.343857 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:50:34.358491 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:50:34.372296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:50:34.378078 systemd-logind[1486]: New session 1 of user core. Apr 17 23:50:34.396591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:50:34.414278 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:50:34.431395 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:50:34.581642 systemd[1614]: Queued start job for default target default.target. Apr 17 23:50:34.590471 systemd[1614]: Created slice app.slice - User Application Slice. Apr 17 23:50:34.590522 systemd[1614]: Reached target paths.target - Paths. Apr 17 23:50:34.590546 systemd[1614]: Reached target timers.target - Timers. Apr 17 23:50:34.592946 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:50:34.615247 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:50:34.616406 systemd[1614]: Reached target sockets.target - Sockets. Apr 17 23:50:34.616544 systemd[1614]: Reached target basic.target - Basic System. Apr 17 23:50:34.616812 systemd[1614]: Reached target default.target - Main User Target. Apr 17 23:50:34.616876 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:50:34.617106 systemd[1614]: Startup finished in 176ms. Apr 17 23:50:34.631127 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 17 23:50:34.762226 systemd[1]: Started sshd@1-10.244.23.222:22-4.175.71.9:57510.service - OpenSSH per-connection server daemon (4.175.71.9:57510). Apr 17 23:50:34.887872 sshd[1625]: Accepted publickey for core from 4.175.71.9 port 57510 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:50:34.890988 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:50:34.898639 systemd-logind[1486]: New session 2 of user core. Apr 17 23:50:34.909276 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:50:35.015233 sshd[1625]: pam_unix(sshd:session): session closed for user core Apr 17 23:50:35.019851 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:50:35.020952 systemd[1]: sshd@1-10.244.23.222:22-4.175.71.9:57510.service: Deactivated successfully. Apr 17 23:50:35.024171 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:50:35.027168 systemd-logind[1486]: Removed session 2. Apr 17 23:50:35.044336 systemd[1]: Started sshd@2-10.244.23.222:22-4.175.71.9:57524.service - OpenSSH per-connection server daemon (4.175.71.9:57524). Apr 17 23:50:35.166498 sshd[1632]: Accepted publickey for core from 4.175.71.9 port 57524 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:50:35.168996 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:50:35.175221 systemd-logind[1486]: New session 3 of user core. Apr 17 23:50:35.182047 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:50:35.286906 sshd[1632]: pam_unix(sshd:session): session closed for user core Apr 17 23:50:35.291895 systemd[1]: sshd@2-10.244.23.222:22-4.175.71.9:57524.service: Deactivated successfully. Apr 17 23:50:35.294650 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:50:35.296908 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. 
Apr 17 23:50:35.298605 systemd-logind[1486]: Removed session 3. Apr 17 23:50:35.842620 login[1587]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:50:35.845566 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:50:35.851344 systemd-logind[1486]: New session 4 of user core. Apr 17 23:50:35.863086 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:50:35.867548 systemd-logind[1486]: New session 5 of user core. Apr 17 23:50:35.874103 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:50:37.059797 coreos-metadata[1475]: Apr 17 23:50:37.059 WARN failed to locate config-drive, using the metadata service API instead Apr 17 23:50:37.085510 coreos-metadata[1475]: Apr 17 23:50:37.085 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 17 23:50:37.092381 coreos-metadata[1475]: Apr 17 23:50:37.092 INFO Fetch failed with 404: resource not found Apr 17 23:50:37.092381 coreos-metadata[1475]: Apr 17 23:50:37.092 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 17 23:50:37.093188 coreos-metadata[1475]: Apr 17 23:50:37.093 INFO Fetch successful Apr 17 23:50:37.093188 coreos-metadata[1475]: Apr 17 23:50:37.093 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 17 23:50:37.106468 coreos-metadata[1475]: Apr 17 23:50:37.106 INFO Fetch successful Apr 17 23:50:37.106468 coreos-metadata[1475]: Apr 17 23:50:37.106 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 17 23:50:37.120103 coreos-metadata[1475]: Apr 17 23:50:37.120 INFO Fetch successful Apr 17 23:50:37.120360 coreos-metadata[1475]: Apr 17 23:50:37.120 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 17 23:50:37.138577 coreos-metadata[1475]: Apr 17 23:50:37.138 INFO Fetch successful Apr 17 23:50:37.138892 coreos-metadata[1475]: Apr 17 
23:50:37.138 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 17 23:50:37.157918 coreos-metadata[1475]: Apr 17 23:50:37.157 INFO Fetch successful Apr 17 23:50:37.184662 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:50:37.185878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:50:37.314755 coreos-metadata[1547]: Apr 17 23:50:37.314 WARN failed to locate config-drive, using the metadata service API instead Apr 17 23:50:37.336900 coreos-metadata[1547]: Apr 17 23:50:37.336 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 17 23:50:37.363425 coreos-metadata[1547]: Apr 17 23:50:37.363 INFO Fetch successful Apr 17 23:50:37.363588 coreos-metadata[1547]: Apr 17 23:50:37.363 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:50:37.392269 coreos-metadata[1547]: Apr 17 23:50:37.392 INFO Fetch successful Apr 17 23:50:37.394616 unknown[1547]: wrote ssh authorized keys file for user: core Apr 17 23:50:37.430967 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:50:37.431884 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:50:37.434695 systemd[1]: Finished sshkeys.service. Apr 17 23:50:37.437886 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:50:37.438203 systemd[1]: Startup finished in 1.501s (kernel) + 15.722s (initrd) + 12.010s (userspace) = 29.234s. Apr 17 23:50:42.251038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:50:42.258004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:50:42.444957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:50:42.454199 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:50:42.565349 kubelet[1685]: E0417 23:50:42.565098 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:50:42.570017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:50:42.570282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:50:45.320175 systemd[1]: Started sshd@3-10.244.23.222:22-4.175.71.9:48068.service - OpenSSH per-connection server daemon (4.175.71.9:48068). Apr 17 23:50:45.450935 sshd[1694]: Accepted publickey for core from 4.175.71.9 port 48068 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:50:45.454048 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:50:45.461771 systemd-logind[1486]: New session 6 of user core. Apr 17 23:50:45.471005 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:50:45.575938 sshd[1694]: pam_unix(sshd:session): session closed for user core Apr 17 23:50:45.580440 systemd[1]: sshd@3-10.244.23.222:22-4.175.71.9:48068.service: Deactivated successfully. Apr 17 23:50:45.582792 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:50:45.583754 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:50:45.585253 systemd-logind[1486]: Removed session 6. Apr 17 23:50:45.607156 systemd[1]: Started sshd@4-10.244.23.222:22-4.175.71.9:48080.service - OpenSSH per-connection server daemon (4.175.71.9:48080). 
Apr 17 23:50:45.726589 sshd[1701]: Accepted publickey for core from 4.175.71.9 port 48080 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:50:45.728899 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:50:45.735369 systemd-logind[1486]: New session 7 of user core.
Apr 17 23:50:45.739979 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:50:45.837137 sshd[1701]: pam_unix(sshd:session): session closed for user core
Apr 17 23:50:45.842895 systemd[1]: sshd@4-10.244.23.222:22-4.175.71.9:48080.service: Deactivated successfully.
Apr 17 23:50:45.845153 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:50:45.846104 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:50:45.847586 systemd-logind[1486]: Removed session 7.
Apr 17 23:50:45.867012 systemd[1]: Started sshd@5-10.244.23.222:22-4.175.71.9:48094.service - OpenSSH per-connection server daemon (4.175.71.9:48094).
Apr 17 23:50:45.999758 sshd[1708]: Accepted publickey for core from 4.175.71.9 port 48094 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:50:46.001904 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:50:46.009957 systemd-logind[1486]: New session 8 of user core.
Apr 17 23:50:46.022302 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:50:46.126269 sshd[1708]: pam_unix(sshd:session): session closed for user core
Apr 17 23:50:46.132293 systemd[1]: sshd@5-10.244.23.222:22-4.175.71.9:48094.service: Deactivated successfully.
Apr 17 23:50:46.135260 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:50:46.136468 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:50:46.137814 systemd-logind[1486]: Removed session 8.
Apr 17 23:50:46.154118 systemd[1]: Started sshd@6-10.244.23.222:22-4.175.71.9:48110.service - OpenSSH per-connection server daemon (4.175.71.9:48110).
Apr 17 23:50:46.283582 sshd[1715]: Accepted publickey for core from 4.175.71.9 port 48110 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:50:46.284696 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:50:46.292113 systemd-logind[1486]: New session 9 of user core.
Apr 17 23:50:46.302996 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:50:46.406851 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:50:46.408029 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:50:46.423456 sudo[1718]: pam_unix(sudo:session): session closed for user root
Apr 17 23:50:46.440753 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 17 23:50:46.447056 systemd[1]: sshd@6-10.244.23.222:22-4.175.71.9:48110.service: Deactivated successfully.
Apr 17 23:50:46.449814 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:50:46.451006 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:50:46.452800 systemd-logind[1486]: Removed session 9.
Apr 17 23:50:46.480211 systemd[1]: Started sshd@7-10.244.23.222:22-4.175.71.9:48120.service - OpenSSH per-connection server daemon (4.175.71.9:48120).
Apr 17 23:50:46.604586 sshd[1723]: Accepted publickey for core from 4.175.71.9 port 48120 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:50:46.607046 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:50:46.613810 systemd-logind[1486]: New session 10 of user core.
Apr 17 23:50:46.624067 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:50:46.716006 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:50:46.717277 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:50:46.724664 sudo[1727]: pam_unix(sudo:session): session closed for user root
Apr 17 23:50:46.733697 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:50:46.734317 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:50:46.757157 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:50:46.760760 auditctl[1730]: No rules
Apr 17 23:50:46.762081 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:50:46.762442 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:50:46.776237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:50:46.811786 augenrules[1748]: No rules
Apr 17 23:50:46.814393 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:50:46.815921 sudo[1726]: pam_unix(sudo:session): session closed for user root
Apr 17 23:50:46.833086 sshd[1723]: pam_unix(sshd:session): session closed for user core
Apr 17 23:50:46.838568 systemd[1]: sshd@7-10.244.23.222:22-4.175.71.9:48120.service: Deactivated successfully.
Apr 17 23:50:46.841429 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:50:46.842453 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:50:46.844248 systemd-logind[1486]: Removed session 10.
Apr 17 23:50:46.868173 systemd[1]: Started sshd@8-10.244.23.222:22-4.175.71.9:48136.service - OpenSSH per-connection server daemon (4.175.71.9:48136).
Apr 17 23:50:46.989424 sshd[1756]: Accepted publickey for core from 4.175.71.9 port 48136 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:50:46.990911 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:50:46.998258 systemd-logind[1486]: New session 11 of user core.
Apr 17 23:50:47.009053 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:50:47.098235 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:50:47.099394 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:50:47.596377 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:50:47.596529 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:50:48.037763 dockerd[1774]: time="2026-04-17T23:50:48.037578302Z" level=info msg="Starting up"
Apr 17 23:50:48.180248 dockerd[1774]: time="2026-04-17T23:50:48.179830341Z" level=info msg="Loading containers: start."
Apr 17 23:50:48.349386 kernel: Initializing XFRM netlink socket
Apr 17 23:50:48.408136 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection.
Apr 17 23:50:49.535918 systemd-resolved[1391]: Clock change detected. Flushing caches.
Apr 17 23:50:49.537804 systemd-timesyncd[1420]: Contacted time server [2a01:7e00::f03c:93ff:fe0e:ba3]:123 (2.flatcar.pool.ntp.org).
Apr 17 23:50:49.537905 systemd-timesyncd[1420]: Initial clock synchronization to Fri 2026-04-17 23:50:49.535696 UTC.
Apr 17 23:50:49.579844 systemd-networkd[1433]: docker0: Link UP
Apr 17 23:50:49.600182 dockerd[1774]: time="2026-04-17T23:50:49.599051117Z" level=info msg="Loading containers: done."
Apr 17 23:50:49.618780 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1938955451-merged.mount: Deactivated successfully.
Apr 17 23:50:49.620111 dockerd[1774]: time="2026-04-17T23:50:49.619401607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:50:49.620111 dockerd[1774]: time="2026-04-17T23:50:49.619543007Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:50:49.620111 dockerd[1774]: time="2026-04-17T23:50:49.619719268Z" level=info msg="Daemon has completed initialization"
Apr 17 23:50:49.663644 dockerd[1774]: time="2026-04-17T23:50:49.663311642Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:50:49.664505 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:50:50.383382 containerd[1503]: time="2026-04-17T23:50:50.383269180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 17 23:50:51.181795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980217919.mount: Deactivated successfully.
Apr 17 23:50:52.831865 containerd[1503]: time="2026-04-17T23:50:52.831778087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:52.833468 containerd[1503]: time="2026-04-17T23:50:52.833412728Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579431"
Apr 17 23:50:52.834405 containerd[1503]: time="2026-04-17T23:50:52.834366724Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:52.839214 containerd[1503]: time="2026-04-17T23:50:52.839111617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:52.842121 containerd[1503]: time="2026-04-17T23:50:52.840957327Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.457579709s"
Apr 17 23:50:52.842121 containerd[1503]: time="2026-04-17T23:50:52.841042438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 17 23:50:52.845528 containerd[1503]: time="2026-04-17T23:50:52.845463805Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 17 23:50:53.849498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:50:53.861456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:50:54.116493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:50:54.127014 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:50:54.211023 kubelet[1987]: E0417 23:50:54.209338 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:50:54.213526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:50:54.213818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:50:54.926177 containerd[1503]: time="2026-04-17T23:50:54.924853267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:54.928043 containerd[1503]: time="2026-04-17T23:50:54.927998803Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451667"
Apr 17 23:50:54.929371 containerd[1503]: time="2026-04-17T23:50:54.929328992Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:54.933474 containerd[1503]: time="2026-04-17T23:50:54.933439241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:54.935328 containerd[1503]: time="2026-04-17T23:50:54.935258512Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 2.089724195s"
Apr 17 23:50:54.935421 containerd[1503]: time="2026-04-17T23:50:54.935330589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 17 23:50:54.936157 containerd[1503]: time="2026-04-17T23:50:54.936100102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 17 23:50:56.357303 containerd[1503]: time="2026-04-17T23:50:56.357229489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:56.360129 containerd[1503]: time="2026-04-17T23:50:56.360080143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555298"
Apr 17 23:50:56.361287 containerd[1503]: time="2026-04-17T23:50:56.361227297Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:56.365559 containerd[1503]: time="2026-04-17T23:50:56.365521736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:56.367513 containerd[1503]: time="2026-04-17T23:50:56.367324445Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.431160833s"
Apr 17 23:50:56.367513 containerd[1503]: time="2026-04-17T23:50:56.367370757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 17 23:50:56.368879 containerd[1503]: time="2026-04-17T23:50:56.368833472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 17 23:50:57.821442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421379629.mount: Deactivated successfully.
Apr 17 23:50:58.420739 containerd[1503]: time="2026-04-17T23:50:58.420591555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:58.423859 containerd[1503]: time="2026-04-17T23:50:58.423782274Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699933"
Apr 17 23:50:58.425833 containerd[1503]: time="2026-04-17T23:50:58.425426137Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:58.428529 containerd[1503]: time="2026-04-17T23:50:58.428462650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:50:58.429546 containerd[1503]: time="2026-04-17T23:50:58.429498577Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 2.060612129s"
Apr 17 23:50:58.429638 containerd[1503]: time="2026-04-17T23:50:58.429551761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 17 23:50:58.431016 containerd[1503]: time="2026-04-17T23:50:58.430787074Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 17 23:50:59.006785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901106094.mount: Deactivated successfully.
Apr 17 23:51:00.631231 containerd[1503]: time="2026-04-17T23:51:00.631125207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:00.634015 containerd[1503]: time="2026-04-17T23:51:00.633455778Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556550"
Apr 17 23:51:00.637167 containerd[1503]: time="2026-04-17T23:51:00.636493972Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:00.643413 containerd[1503]: time="2026-04-17T23:51:00.643367100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:00.644744 containerd[1503]: time="2026-04-17T23:51:00.644705103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.213874977s"
Apr 17 23:51:00.644915 containerd[1503]: time="2026-04-17T23:51:00.644885403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 17 23:51:00.646864 containerd[1503]: time="2026-04-17T23:51:00.646822759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:51:01.158065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532146691.mount: Deactivated successfully.
Apr 17 23:51:01.166532 containerd[1503]: time="2026-04-17T23:51:01.165290513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:01.166532 containerd[1503]: time="2026-04-17T23:51:01.166507092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Apr 17 23:51:01.169159 containerd[1503]: time="2026-04-17T23:51:01.167724599Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:01.171227 containerd[1503]: time="2026-04-17T23:51:01.171191823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:01.172211 containerd[1503]: time="2026-04-17T23:51:01.172169849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 525.155384ms"
Apr 17 23:51:01.172323 containerd[1503]: time="2026-04-17T23:51:01.172257282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:51:01.172948 containerd[1503]: time="2026-04-17T23:51:01.172912499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 17 23:51:01.802712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088183742.mount: Deactivated successfully.
Apr 17 23:51:02.849915 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:51:03.016112 containerd[1503]: time="2026-04-17T23:51:03.014690930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:03.017679 containerd[1503]: time="2026-04-17T23:51:03.017637528Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644473"
Apr 17 23:51:03.019117 containerd[1503]: time="2026-04-17T23:51:03.019080577Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:03.023487 containerd[1503]: time="2026-04-17T23:51:03.023441050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:03.025097 containerd[1503]: time="2026-04-17T23:51:03.025050206Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.852095479s"
Apr 17 23:51:03.025210 containerd[1503]: time="2026-04-17T23:51:03.025103506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 17 23:51:04.349746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 17 23:51:04.358425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:51:04.610420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:51:04.625119 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:51:04.692701 kubelet[2156]: E0417 23:51:04.692632 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:51:04.696181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:51:04.696598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:51:04.958383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:51:04.976375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:51:05.006608 systemd[1]: Reloading requested from client PID 2170 ('systemctl') (unit session-11.scope)...
Apr 17 23:51:05.006653 systemd[1]: Reloading...
Apr 17 23:51:05.139673 zram_generator::config[2205]: No configuration found.
Apr 17 23:51:05.356775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:51:05.467432 systemd[1]: Reloading finished in 460 ms.
Apr 17 23:51:05.544630 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:51:05.544792 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:51:05.545296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:51:05.554575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:51:05.830701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:51:05.845994 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:51:05.941210 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:51:06.269339 kubelet[2276]: I0417 23:51:06.269201 2276 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 17 23:51:06.269339 kubelet[2276]: I0417 23:51:06.269275 2276 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:51:06.269339 kubelet[2276]: I0417 23:51:06.269318 2276 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:51:06.269339 kubelet[2276]: I0417 23:51:06.269329 2276 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:51:06.269719 kubelet[2276]: I0417 23:51:06.269688 2276 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 23:51:06.279440 kubelet[2276]: I0417 23:51:06.279204 2276 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:51:06.279808 kubelet[2276]: E0417 23:51:06.279776 2276 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.23.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.23.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:51:06.289081 kubelet[2276]: E0417 23:51:06.289025 2276 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:51:06.289312 kubelet[2276]: I0417 23:51:06.289291 2276 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:51:06.296042 kubelet[2276]: I0417 23:51:06.296010 2276 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:51:06.298298 kubelet[2276]: I0417 23:51:06.298200 2276 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:51:06.298618 kubelet[2276]: I0417 23:51:06.298306 2276 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-mc367.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:51:06.298862 kubelet[2276]: I0417 23:51:06.298635 2276 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 23:51:06.298862 kubelet[2276]: I0417 23:51:06.298654 2276 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 23:51:06.298961 kubelet[2276]: I0417 23:51:06.298917 2276 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:51:06.300720 kubelet[2276]: I0417 23:51:06.300694 2276 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 23:51:06.301114 kubelet[2276]: I0417 23:51:06.301091 2276 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 23:51:06.301204 kubelet[2276]: I0417 23:51:06.301125 2276 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:51:06.301253 kubelet[2276]: I0417 23:51:06.301210 2276 kubelet.go:394] "Adding apiserver pod source"
Apr 17 23:51:06.301253 kubelet[2276]: I0417 23:51:06.301236 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:51:06.305035 kubelet[2276]: I0417 23:51:06.305007 2276 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:51:06.308104 kubelet[2276]: I0417 23:51:06.308012 2276 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:51:06.308104 kubelet[2276]: I0417 23:51:06.308062 2276 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:51:06.308264 kubelet[2276]: W0417 23:51:06.308219 2276 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:51:06.313728 kubelet[2276]: I0417 23:51:06.313701 2276 server.go:1257] "Started kubelet"
Apr 17 23:51:06.318226 kubelet[2276]: I0417 23:51:06.318196 2276 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 17 23:51:06.323734 kubelet[2276]: E0417 23:51:06.322281 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.23.222:6443/api/v1/namespaces/default/events\": dial tcp 10.244.23.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-mc367.gb1.brightbox.com.18a749ee67cf9817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-mc367.gb1.brightbox.com,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-mc367.gb1.brightbox.com,},FirstTimestamp:2026-04-17 23:51:06.313652247 +0000 UTC m=+0.461125956,LastTimestamp:2026-04-17 23:51:06.313652247 +0000 UTC m=+0.461125956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-mc367.gb1.brightbox.com,}"
Apr 17 23:51:06.326159 kubelet[2276]: E0417 23:51:06.326110 2276 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:51:06.326812 kubelet[2276]: I0417 23:51:06.326767 2276 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:51:06.328324 kubelet[2276]: I0417 23:51:06.328300 2276 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:51:06.329102 kubelet[2276]: I0417 23:51:06.329065 2276 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 17 23:51:06.329599 kubelet[2276]: E0417 23:51:06.329562 2276 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-mc367.gb1.brightbox.com\" not found"
Apr 17 23:51:06.330077 kubelet[2276]: I0417 23:51:06.330038 2276 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:51:06.330209 kubelet[2276]: I0417 23:51:06.330187 2276 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:51:06.336200 kubelet[2276]: I0417 23:51:06.335197 2276 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:51:06.336200 kubelet[2276]: I0417 23:51:06.335333 2276 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:51:06.336200 kubelet[2276]: I0417 23:51:06.335603 2276 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:51:06.336200 kubelet[2276]: I0417 23:51:06.336016 2276 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:51:06.341114 kubelet[2276]: I0417 23:51:06.341088 2276 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:51:06.341369 kubelet[2276]: I0417 23:51:06.341341 2276 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:51:06.343300 kubelet[2276]: E0417 23:51:06.343264 2276 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mc367.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.222:6443: connect: connection refused" interval="200ms"
Apr 17 23:51:06.343999 kubelet[2276]: I0417 23:51:06.343974 2276 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:51:06.352001 kubelet[2276]: I0417 23:51:06.350261 2276 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:51:06.362698 kubelet[2276]: I0417 23:51:06.362650 2276 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:51:06.362698 kubelet[2276]: I0417 23:51:06.362715 2276 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 17 23:51:06.362904 kubelet[2276]: I0417 23:51:06.362783 2276 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 17 23:51:06.362904 kubelet[2276]: E0417 23:51:06.362881 2276 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:51:06.389714 kubelet[2276]: I0417 23:51:06.389665 2276 cpu_manager.go:225] "Starting" policy="none"
Apr 17 23:51:06.389714 kubelet[2276]: I0417 23:51:06.389696 2276 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 17 23:51:06.389714 kubelet[2276]: I0417 23:51:06.389726 2276 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 17 23:51:06.427370 kubelet[2276]: I0417 23:51:06.427305 2276 policy_none.go:50] "Start"
Apr 17 23:51:06.427661 kubelet[2276]: I0417 23:51:06.427396 2276 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:51:06.427661 kubelet[2276]: I0417 23:51:06.427463 2276 state_mem.go:36] "Initializing new in-memory state store"
logger="Memory Manager state checkpoint" Apr 17 23:51:06.429153 kubelet[2276]: I0417 23:51:06.429110 2276 policy_none.go:44] "Start" Apr 17 23:51:06.430288 kubelet[2276]: E0417 23:51:06.430248 2276 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-mc367.gb1.brightbox.com\" not found" Apr 17 23:51:06.436113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:51:06.452661 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:51:06.458043 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:51:06.463502 kubelet[2276]: E0417 23:51:06.463469 2276 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:51:06.469833 kubelet[2276]: E0417 23:51:06.469807 2276 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:51:06.471275 kubelet[2276]: I0417 23:51:06.470680 2276 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:51:06.471275 kubelet[2276]: I0417 23:51:06.470715 2276 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:51:06.471514 kubelet[2276]: I0417 23:51:06.471493 2276 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:51:06.472642 kubelet[2276]: E0417 23:51:06.472616 2276 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:51:06.472816 kubelet[2276]: E0417 23:51:06.472793 2276 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-mc367.gb1.brightbox.com\" not found" Apr 17 23:51:06.545240 kubelet[2276]: E0417 23:51:06.545029 2276 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mc367.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.222:6443: connect: connection refused" interval="400ms" Apr 17 23:51:06.574466 kubelet[2276]: I0417 23:51:06.574395 2276 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.575094 kubelet[2276]: E0417 23:51:06.575016 2276 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.244.23.222:6443/api/v1/nodes\": dial tcp 10.244.23.222:6443: connect: connection refused" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.700933 systemd[1]: Created slice kubepods-burstable-pod8fb8383dc140231ef795a529ac80737a.slice - libcontainer container kubepods-burstable-pod8fb8383dc140231ef795a529ac80737a.slice. Apr 17 23:51:06.714669 kubelet[2276]: E0417 23:51:06.713457 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.718304 systemd[1]: Created slice kubepods-burstable-pode7d5a22d6023808c0aeb463b3157a2cd.slice - libcontainer container kubepods-burstable-pode7d5a22d6023808c0aeb463b3157a2cd.slice. 
Apr 17 23:51:06.723324 kubelet[2276]: E0417 23:51:06.722928 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.727754 systemd[1]: Created slice kubepods-burstable-pod707540044191aac7b183288176760407.slice - libcontainer container kubepods-burstable-pod707540044191aac7b183288176760407.slice. Apr 17 23:51:06.730806 kubelet[2276]: E0417 23:51:06.730718 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735512 kubelet[2276]: I0417 23:51:06.735345 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-k8s-certs\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735512 kubelet[2276]: I0417 23:51:06.735401 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735512 kubelet[2276]: I0417 23:51:06.735443 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-ca-certs\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" 
Apr 17 23:51:06.735911 kubelet[2276]: I0417 23:51:06.735506 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-flexvolume-dir\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735911 kubelet[2276]: I0417 23:51:06.735558 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/707540044191aac7b183288176760407-kubeconfig\") pod \"kube-scheduler-srv-mc367.gb1.brightbox.com\" (UID: \"707540044191aac7b183288176760407\") " pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735911 kubelet[2276]: I0417 23:51:06.735590 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-ca-certs\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735911 kubelet[2276]: I0417 23:51:06.735629 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-k8s-certs\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.735911 kubelet[2276]: I0417 23:51:06.735658 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-kubeconfig\") pod 
\"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.736152 kubelet[2276]: I0417 23:51:06.735686 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.778887 kubelet[2276]: I0417 23:51:06.778802 2276 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.779297 kubelet[2276]: E0417 23:51:06.779262 2276 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.244.23.222:6443/api/v1/nodes\": dial tcp 10.244.23.222:6443: connect: connection refused" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:06.946916 kubelet[2276]: E0417 23:51:06.946827 2276 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mc367.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.222:6443: connect: connection refused" interval="800ms" Apr 17 23:51:07.020015 containerd[1503]: time="2026-04-17T23:51:07.019568752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-mc367.gb1.brightbox.com,Uid:8fb8383dc140231ef795a529ac80737a,Namespace:kube-system,Attempt:0,}" Apr 17 23:51:07.026564 containerd[1503]: time="2026-04-17T23:51:07.026258031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-mc367.gb1.brightbox.com,Uid:e7d5a22d6023808c0aeb463b3157a2cd,Namespace:kube-system,Attempt:0,}" Apr 17 23:51:07.033093 containerd[1503]: 
time="2026-04-17T23:51:07.033050190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-mc367.gb1.brightbox.com,Uid:707540044191aac7b183288176760407,Namespace:kube-system,Attempt:0,}" Apr 17 23:51:07.182821 kubelet[2276]: I0417 23:51:07.182265 2276 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:07.182821 kubelet[2276]: E0417 23:51:07.182773 2276 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.244.23.222:6443/api/v1/nodes\": dial tcp 10.244.23.222:6443: connect: connection refused" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:07.585628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301754027.mount: Deactivated successfully. Apr 17 23:51:07.593018 containerd[1503]: time="2026-04-17T23:51:07.591836578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:51:07.594306 containerd[1503]: time="2026-04-17T23:51:07.594250251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 17 23:51:07.600157 containerd[1503]: time="2026-04-17T23:51:07.599483074Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:51:07.603479 containerd[1503]: time="2026-04-17T23:51:07.603404568Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:51:07.605179 containerd[1503]: time="2026-04-17T23:51:07.605098662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 
23:51:07.606546 containerd[1503]: time="2026-04-17T23:51:07.606491402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:51:07.606801 containerd[1503]: time="2026-04-17T23:51:07.606707225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:51:07.612578 containerd[1503]: time="2026-04-17T23:51:07.612520801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:51:07.613954 containerd[1503]: time="2026-04-17T23:51:07.613917521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.56841ms" Apr 17 23:51:07.617389 containerd[1503]: time="2026-04-17T23:51:07.617340505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.621607ms" Apr 17 23:51:07.618408 containerd[1503]: time="2026-04-17T23:51:07.618367791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
584.938656ms" Apr 17 23:51:07.748353 kubelet[2276]: E0417 23:51:07.748251 2276 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mc367.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.222:6443: connect: connection refused" interval="1.6s" Apr 17 23:51:07.879829 containerd[1503]: time="2026-04-17T23:51:07.879382566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:07.879829 containerd[1503]: time="2026-04-17T23:51:07.879529079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:07.879829 containerd[1503]: time="2026-04-17T23:51:07.879563274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.879829 containerd[1503]: time="2026-04-17T23:51:07.879735623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.888175 containerd[1503]: time="2026-04-17T23:51:07.887772865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:07.888175 containerd[1503]: time="2026-04-17T23:51:07.887884978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:07.888175 containerd[1503]: time="2026-04-17T23:51:07.887911707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.888175 containerd[1503]: time="2026-04-17T23:51:07.888072400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.894724 containerd[1503]: time="2026-04-17T23:51:07.894380302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:07.894724 containerd[1503]: time="2026-04-17T23:51:07.894471700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:07.894724 containerd[1503]: time="2026-04-17T23:51:07.894492416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.895097 containerd[1503]: time="2026-04-17T23:51:07.894628912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:07.946480 systemd[1]: Started cri-containerd-70c01350dc4aa9843aec4f36f2cfcad99a7c5af944a63dba5f75ac02c2a97509.scope - libcontainer container 70c01350dc4aa9843aec4f36f2cfcad99a7c5af944a63dba5f75ac02c2a97509. Apr 17 23:51:07.951070 systemd[1]: Started cri-containerd-af9ac70f6e55d795a318be79ba5351ae24aa3928c625c87da661c7f01a21e621.scope - libcontainer container af9ac70f6e55d795a318be79ba5351ae24aa3928c625c87da661c7f01a21e621. Apr 17 23:51:07.959450 systemd[1]: Started cri-containerd-9d9228f9444dad07fb92cb8650b2bf70713c1cdded47c40fff2212bb65890edc.scope - libcontainer container 9d9228f9444dad07fb92cb8650b2bf70713c1cdded47c40fff2212bb65890edc. 
Apr 17 23:51:07.987872 kubelet[2276]: I0417 23:51:07.987810 2276 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:07.989585 kubelet[2276]: E0417 23:51:07.988331 2276 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.244.23.222:6443/api/v1/nodes\": dial tcp 10.244.23.222:6443: connect: connection refused" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:08.049175 containerd[1503]: time="2026-04-17T23:51:08.046499415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-mc367.gb1.brightbox.com,Uid:8fb8383dc140231ef795a529ac80737a,Namespace:kube-system,Attempt:0,} returns sandbox id \"af9ac70f6e55d795a318be79ba5351ae24aa3928c625c87da661c7f01a21e621\"" Apr 17 23:51:08.071478 containerd[1503]: time="2026-04-17T23:51:08.071416297Z" level=info msg="CreateContainer within sandbox \"af9ac70f6e55d795a318be79ba5351ae24aa3928c625c87da661c7f01a21e621\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:51:08.093382 containerd[1503]: time="2026-04-17T23:51:08.093320120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-mc367.gb1.brightbox.com,Uid:e7d5a22d6023808c0aeb463b3157a2cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c01350dc4aa9843aec4f36f2cfcad99a7c5af944a63dba5f75ac02c2a97509\"" Apr 17 23:51:08.094936 containerd[1503]: time="2026-04-17T23:51:08.094891325Z" level=info msg="CreateContainer within sandbox \"af9ac70f6e55d795a318be79ba5351ae24aa3928c625c87da661c7f01a21e621\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b99a2de0e794a4803d0393839949cb65a1aef1e2413a4de67233d95b2e56431b\"" Apr 17 23:51:08.099492 containerd[1503]: time="2026-04-17T23:51:08.099451104Z" level=info msg="CreateContainer within sandbox \"70c01350dc4aa9843aec4f36f2cfcad99a7c5af944a63dba5f75ac02c2a97509\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:51:08.099690 containerd[1503]: time="2026-04-17T23:51:08.099656228Z" level=info msg="StartContainer for \"b99a2de0e794a4803d0393839949cb65a1aef1e2413a4de67233d95b2e56431b\"" Apr 17 23:51:08.118231 containerd[1503]: time="2026-04-17T23:51:08.118160191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-mc367.gb1.brightbox.com,Uid:707540044191aac7b183288176760407,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d9228f9444dad07fb92cb8650b2bf70713c1cdded47c40fff2212bb65890edc\"" Apr 17 23:51:08.119269 containerd[1503]: time="2026-04-17T23:51:08.119228866Z" level=info msg="CreateContainer within sandbox \"70c01350dc4aa9843aec4f36f2cfcad99a7c5af944a63dba5f75ac02c2a97509\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"925f45799595e63ebdaa75d91f557bfda326433378db67c8148e177378abe8c1\"" Apr 17 23:51:08.120833 containerd[1503]: time="2026-04-17T23:51:08.120791288Z" level=info msg="StartContainer for \"925f45799595e63ebdaa75d91f557bfda326433378db67c8148e177378abe8c1\"" Apr 17 23:51:08.125525 containerd[1503]: time="2026-04-17T23:51:08.125376222Z" level=info msg="CreateContainer within sandbox \"9d9228f9444dad07fb92cb8650b2bf70713c1cdded47c40fff2212bb65890edc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:51:08.159927 containerd[1503]: time="2026-04-17T23:51:08.158339664Z" level=info msg="CreateContainer within sandbox \"9d9228f9444dad07fb92cb8650b2bf70713c1cdded47c40fff2212bb65890edc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d10eb960008c3de20c3282f6828792663dff7dcf00190dbd5d552d482af1a17a\"" Apr 17 23:51:08.159927 containerd[1503]: time="2026-04-17T23:51:08.159236647Z" level=info msg="StartContainer for \"d10eb960008c3de20c3282f6828792663dff7dcf00190dbd5d552d482af1a17a\"" Apr 17 23:51:08.172404 systemd[1]: Started 
cri-containerd-b99a2de0e794a4803d0393839949cb65a1aef1e2413a4de67233d95b2e56431b.scope - libcontainer container b99a2de0e794a4803d0393839949cb65a1aef1e2413a4de67233d95b2e56431b. Apr 17 23:51:08.184361 systemd[1]: Started cri-containerd-925f45799595e63ebdaa75d91f557bfda326433378db67c8148e177378abe8c1.scope - libcontainer container 925f45799595e63ebdaa75d91f557bfda326433378db67c8148e177378abe8c1. Apr 17 23:51:08.236593 systemd[1]: Started cri-containerd-d10eb960008c3de20c3282f6828792663dff7dcf00190dbd5d552d482af1a17a.scope - libcontainer container d10eb960008c3de20c3282f6828792663dff7dcf00190dbd5d552d482af1a17a. Apr 17 23:51:08.302583 containerd[1503]: time="2026-04-17T23:51:08.302519013Z" level=info msg="StartContainer for \"b99a2de0e794a4803d0393839949cb65a1aef1e2413a4de67233d95b2e56431b\" returns successfully" Apr 17 23:51:08.314926 containerd[1503]: time="2026-04-17T23:51:08.314864477Z" level=info msg="StartContainer for \"925f45799595e63ebdaa75d91f557bfda326433378db67c8148e177378abe8c1\" returns successfully" Apr 17 23:51:08.362166 containerd[1503]: time="2026-04-17T23:51:08.360777705Z" level=info msg="StartContainer for \"d10eb960008c3de20c3282f6828792663dff7dcf00190dbd5d552d482af1a17a\" returns successfully" Apr 17 23:51:08.393107 kubelet[2276]: E0417 23:51:08.393055 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:08.398031 kubelet[2276]: E0417 23:51:08.397990 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:08.403440 kubelet[2276]: E0417 23:51:08.403375 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 
23:51:08.413068 kubelet[2276]: E0417 23:51:08.412103 2276 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.23.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.23.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:51:09.404542 kubelet[2276]: E0417 23:51:09.404477 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:09.406951 kubelet[2276]: E0417 23:51:09.406621 2276 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:09.592426 kubelet[2276]: I0417 23:51:09.592368 2276 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.227767 kubelet[2276]: E0417 23:51:10.227718 2276 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-mc367.gb1.brightbox.com\" not found" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.306659 kubelet[2276]: I0417 23:51:10.306595 2276 apiserver.go:52] "Watching apiserver" Apr 17 23:51:10.317171 kubelet[2276]: I0417 23:51:10.316105 2276 kubelet_node_status.go:77] "Successfully registered node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.330571 kubelet[2276]: I0417 23:51:10.330524 2276 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:51:10.331391 kubelet[2276]: I0417 23:51:10.330863 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.391870 kubelet[2276]: E0417 23:51:10.391813 2276 
kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-mc367.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.392187 kubelet[2276]: I0417 23:51:10.392164 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.397758 kubelet[2276]: E0417 23:51:10.397410 2276 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.397758 kubelet[2276]: I0417 23:51:10.397459 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:10.400526 kubelet[2276]: E0417 23:51:10.400497 2276 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-mc367.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:12.191953 kubelet[2276]: I0417 23:51:12.191882 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:12.202774 kubelet[2276]: I0417 23:51:12.202639 2276 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:12.514370 kubelet[2276]: I0417 23:51:12.514102 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:12.525769 kubelet[2276]: I0417 23:51:12.524336 2276 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:12.568104 systemd[1]: Reloading requested from client PID 2563 ('systemctl') (unit session-11.scope)... Apr 17 23:51:12.568152 systemd[1]: Reloading... Apr 17 23:51:12.692249 zram_generator::config[2598]: No configuration found. Apr 17 23:51:12.967270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:51:13.081689 kubelet[2276]: I0417 23:51:13.081605 2276 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:13.095886 kubelet[2276]: I0417 23:51:13.095835 2276 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:13.106021 systemd[1]: Reloading finished in 537 ms. Apr 17 23:51:13.171277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:13.183652 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:51:13.184307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:13.184437 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 125.4M memory peak, 0B memory swap peak. Apr 17 23:51:13.199583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:13.472332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:13.488937 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:51:13.576792 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:51:13.589720 kubelet[2665]: I0417 23:51:13.588946 2665 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:51:13.589720 kubelet[2665]: I0417 23:51:13.589022 2665 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:51:13.589720 kubelet[2665]: I0417 23:51:13.589050 2665 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:51:13.589720 kubelet[2665]: I0417 23:51:13.589058 2665 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:51:13.589720 kubelet[2665]: I0417 23:51:13.589418 2665 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:51:13.596050 kubelet[2665]: I0417 23:51:13.595542 2665 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:51:13.600790 kubelet[2665]: I0417 23:51:13.600687 2665 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:51:13.612449 kubelet[2665]: E0417 23:51:13.610657 2665 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:51:13.612449 kubelet[2665]: I0417 23:51:13.610754 2665 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:51:13.622055 kubelet[2665]: I0417 23:51:13.621066 2665 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:51:13.624406 kubelet[2665]: I0417 23:51:13.624342 2665 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:51:13.624796 kubelet[2665]: I0417 23:51:13.624408 2665 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-mc367.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:51:13.624796 kubelet[2665]: I0417 23:51:13.624710 2665 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 
23:51:13.624796 kubelet[2665]: I0417 23:51:13.624725 2665 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:51:13.625267 kubelet[2665]: I0417 23:51:13.624817 2665 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:51:13.625267 kubelet[2665]: I0417 23:51:13.625166 2665 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:51:13.625483 kubelet[2665]: I0417 23:51:13.625460 2665 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:51:13.625554 kubelet[2665]: I0417 23:51:13.625493 2665 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:51:13.625554 kubelet[2665]: I0417 23:51:13.625532 2665 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:51:13.625554 kubelet[2665]: I0417 23:51:13.625548 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:51:13.633387 kubelet[2665]: I0417 23:51:13.632637 2665 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:51:13.633939 kubelet[2665]: I0417 23:51:13.633892 2665 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:51:13.633939 kubelet[2665]: I0417 23:51:13.633941 2665 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:51:13.651177 kubelet[2665]: I0417 23:51:13.650103 2665 server.go:1257] "Started kubelet" Apr 17 23:51:13.657317 kubelet[2665]: I0417 23:51:13.655788 2665 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:51:13.664831 kubelet[2665]: I0417 23:51:13.664706 2665 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:51:13.671754 kubelet[2665]: I0417 23:51:13.671259 2665 server.go:317] "Adding debug handlers 
to kubelet server" Apr 17 23:51:13.684466 kubelet[2665]: I0417 23:51:13.683901 2665 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:51:13.692864 kubelet[2665]: I0417 23:51:13.687921 2665 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:51:13.692864 kubelet[2665]: I0417 23:51:13.687968 2665 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:51:13.692864 kubelet[2665]: I0417 23:51:13.688157 2665 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:51:13.692864 kubelet[2665]: I0417 23:51:13.688504 2665 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:51:13.692864 kubelet[2665]: I0417 23:51:13.689053 2665 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:51:13.706838 kubelet[2665]: I0417 23:51:13.703835 2665 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:51:13.738220 kubelet[2665]: E0417 23:51:13.738033 2665 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:51:13.738710 kubelet[2665]: I0417 23:51:13.738591 2665 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:51:13.740218 kubelet[2665]: I0417 23:51:13.739942 2665 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:51:13.741536 kubelet[2665]: I0417 23:51:13.741517 2665 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:51:13.741944 kubelet[2665]: I0417 23:51:13.741894 2665 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:51:13.743568 kubelet[2665]: I0417 23:51:13.743502 2665 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:51:13.743568 kubelet[2665]: I0417 23:51:13.743532 2665 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:51:13.743568 kubelet[2665]: I0417 23:51:13.743568 2665 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:51:13.743970 kubelet[2665]: E0417 23:51:13.743659 2665 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:51:13.844509 kubelet[2665]: E0417 23:51:13.843720 2665 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:51:13.918858 kubelet[2665]: I0417 23:51:13.918701 2665 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:51:13.918858 kubelet[2665]: I0417 23:51:13.918732 2665 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:51:13.918858 kubelet[2665]: I0417 23:51:13.918779 2665 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:51:13.920584 kubelet[2665]: I0417 23:51:13.919124 2665 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:51:13.920584 kubelet[2665]: I0417 23:51:13.919184 2665 state_mem.go:102] 
"Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:51:13.920584 kubelet[2665]: I0417 23:51:13.919264 2665 policy_none.go:50] "Start" Apr 17 23:51:13.920584 kubelet[2665]: I0417 23:51:13.919314 2665 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:51:13.920584 kubelet[2665]: I0417 23:51:13.919337 2665 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:51:13.925883 kubelet[2665]: I0417 23:51:13.925822 2665 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:51:13.925883 kubelet[2665]: I0417 23:51:13.925881 2665 policy_none.go:44] "Start" Apr 17 23:51:13.951703 kubelet[2665]: E0417 23:51:13.950704 2665 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:51:13.951703 kubelet[2665]: I0417 23:51:13.951541 2665 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:51:13.952157 kubelet[2665]: I0417 23:51:13.951566 2665 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:51:13.955569 kubelet[2665]: I0417 23:51:13.955055 2665 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:51:13.959102 kubelet[2665]: E0417 23:51:13.959069 2665 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:51:14.047677 kubelet[2665]: I0417 23:51:14.047472 2665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.048701 kubelet[2665]: I0417 23:51:14.047886 2665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.056367 kubelet[2665]: I0417 23:51:14.055958 2665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.068792 kubelet[2665]: I0417 23:51:14.068740 2665 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:14.069550 kubelet[2665]: E0417 23:51:14.068863 2665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.070163 kubelet[2665]: I0417 23:51:14.069592 2665 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:14.070163 kubelet[2665]: E0417 23:51:14.069798 2665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-mc367.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.070163 kubelet[2665]: I0417 23:51:14.069987 2665 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:51:14.070163 kubelet[2665]: E0417 23:51:14.070064 2665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-mc367.gb1.brightbox.com\" 
already exists" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.079583 kubelet[2665]: I0417 23:51:14.079513 2665 kubelet_node_status.go:74] "Attempting to register node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091200 kubelet[2665]: I0417 23:51:14.091116 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-k8s-certs\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091200 kubelet[2665]: I0417 23:51:14.091192 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-flexvolume-dir\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091410 kubelet[2665]: I0417 23:51:14.091228 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-k8s-certs\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091410 kubelet[2665]: I0417 23:51:14.091258 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-ca-certs\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091410 kubelet[2665]: I0417 
23:51:14.091287 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fb8383dc140231ef795a529ac80737a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-mc367.gb1.brightbox.com\" (UID: \"8fb8383dc140231ef795a529ac80737a\") " pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091410 kubelet[2665]: I0417 23:51:14.091314 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-ca-certs\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091410 kubelet[2665]: I0417 23:51:14.091342 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-kubeconfig\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091700 kubelet[2665]: I0417 23:51:14.091375 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d5a22d6023808c0aeb463b3157a2cd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-mc367.gb1.brightbox.com\" (UID: \"e7d5a22d6023808c0aeb463b3157a2cd\") " pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.091700 kubelet[2665]: I0417 23:51:14.091423 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/707540044191aac7b183288176760407-kubeconfig\") pod \"kube-scheduler-srv-mc367.gb1.brightbox.com\" (UID: \"707540044191aac7b183288176760407\") " pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.094034 kubelet[2665]: I0417 23:51:14.093511 2665 kubelet_node_status.go:123] "Node was previously registered" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.094034 kubelet[2665]: I0417 23:51:14.093677 2665 kubelet_node_status.go:77] "Successfully registered node" node="srv-mc367.gb1.brightbox.com" Apr 17 23:51:14.628852 kubelet[2665]: I0417 23:51:14.628799 2665 apiserver.go:52] "Watching apiserver" Apr 17 23:51:14.689121 kubelet[2665]: I0417 23:51:14.689049 2665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:51:14.932362 kubelet[2665]: I0417 23:51:14.932064 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-mc367.gb1.brightbox.com" podStartSLOduration=2.932027881 podStartE2EDuration="2.932027881s" podCreationTimestamp="2026-04-17 23:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:14.889456978 +0000 UTC m=+1.386543419" watchObservedRunningTime="2026-04-17 23:51:14.932027881 +0000 UTC m=+1.429114330" Apr 17 23:51:14.960015 kubelet[2665]: I0417 23:51:14.959932 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-mc367.gb1.brightbox.com" podStartSLOduration=1.95991366 podStartE2EDuration="1.95991366s" podCreationTimestamp="2026-04-17 23:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:14.934410628 +0000 UTC m=+1.431497082" watchObservedRunningTime="2026-04-17 23:51:14.95991366 +0000 UTC m=+1.457000119" Apr 17 23:51:14.994111 
kubelet[2665]: I0417 23:51:14.993988 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-mc367.gb1.brightbox.com" podStartSLOduration=2.993971992 podStartE2EDuration="2.993971992s" podCreationTimestamp="2026-04-17 23:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:14.961038628 +0000 UTC m=+1.458125078" watchObservedRunningTime="2026-04-17 23:51:14.993971992 +0000 UTC m=+1.491058443" Apr 17 23:51:16.312746 update_engine[1487]: I20260417 23:51:16.312541 1487 update_attempter.cc:509] Updating boot flags... Apr 17 23:51:16.389311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (2727) Apr 17 23:51:16.503164 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (2731) Apr 17 23:51:18.551697 kubelet[2665]: I0417 23:51:18.551458 2665 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:51:18.553960 containerd[1503]: time="2026-04-17T23:51:18.553511332Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:51:18.556576 kubelet[2665]: I0417 23:51:18.555437 2665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:51:19.199820 systemd[1]: Created slice kubepods-besteffort-podd494e7ef_1747_4bd2_96b6_58adec3126ed.slice - libcontainer container kubepods-besteffort-podd494e7ef_1747_4bd2_96b6_58adec3126ed.slice. 
Apr 17 23:51:19.224060 kubelet[2665]: I0417 23:51:19.223821 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d494e7ef-1747-4bd2-96b6-58adec3126ed-xtables-lock\") pod \"kube-proxy-4zx65\" (UID: \"d494e7ef-1747-4bd2-96b6-58adec3126ed\") " pod="kube-system/kube-proxy-4zx65" Apr 17 23:51:19.224060 kubelet[2665]: I0417 23:51:19.223882 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d494e7ef-1747-4bd2-96b6-58adec3126ed-kube-proxy\") pod \"kube-proxy-4zx65\" (UID: \"d494e7ef-1747-4bd2-96b6-58adec3126ed\") " pod="kube-system/kube-proxy-4zx65" Apr 17 23:51:19.224060 kubelet[2665]: I0417 23:51:19.223908 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d494e7ef-1747-4bd2-96b6-58adec3126ed-lib-modules\") pod \"kube-proxy-4zx65\" (UID: \"d494e7ef-1747-4bd2-96b6-58adec3126ed\") " pod="kube-system/kube-proxy-4zx65" Apr 17 23:51:19.224060 kubelet[2665]: I0417 23:51:19.223934 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9kf6\" (UniqueName: \"kubernetes.io/projected/d494e7ef-1747-4bd2-96b6-58adec3126ed-kube-api-access-t9kf6\") pod \"kube-proxy-4zx65\" (UID: \"d494e7ef-1747-4bd2-96b6-58adec3126ed\") " pod="kube-system/kube-proxy-4zx65" Apr 17 23:51:19.512835 containerd[1503]: time="2026-04-17T23:51:19.512667795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zx65,Uid:d494e7ef-1747-4bd2-96b6-58adec3126ed,Namespace:kube-system,Attempt:0,}" Apr 17 23:51:19.562877 containerd[1503]: time="2026-04-17T23:51:19.562289085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:19.562877 containerd[1503]: time="2026-04-17T23:51:19.562431574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:19.562877 containerd[1503]: time="2026-04-17T23:51:19.562456555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:19.562877 containerd[1503]: time="2026-04-17T23:51:19.562672280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:19.609431 systemd[1]: Started cri-containerd-49863e80770a7779092c5dec1a8e89467c6750309f2f301f139e219a24bef69e.scope - libcontainer container 49863e80770a7779092c5dec1a8e89467c6750309f2f301f139e219a24bef69e. Apr 17 23:51:19.673168 containerd[1503]: time="2026-04-17T23:51:19.672533865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zx65,Uid:d494e7ef-1747-4bd2-96b6-58adec3126ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"49863e80770a7779092c5dec1a8e89467c6750309f2f301f139e219a24bef69e\"" Apr 17 23:51:19.702544 containerd[1503]: time="2026-04-17T23:51:19.701911691Z" level=info msg="CreateContainer within sandbox \"49863e80770a7779092c5dec1a8e89467c6750309f2f301f139e219a24bef69e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:51:19.742915 containerd[1503]: time="2026-04-17T23:51:19.742800729Z" level=info msg="CreateContainer within sandbox \"49863e80770a7779092c5dec1a8e89467c6750309f2f301f139e219a24bef69e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"393bc59553ce5f9a3f0937bcff6598ef16a0b0e574355cc0242eed05dd2d2a2c\"" Apr 17 23:51:19.744913 containerd[1503]: time="2026-04-17T23:51:19.744843235Z" level=info msg="StartContainer for \"393bc59553ce5f9a3f0937bcff6598ef16a0b0e574355cc0242eed05dd2d2a2c\"" 
Apr 17 23:51:19.786621 systemd[1]: Started cri-containerd-393bc59553ce5f9a3f0937bcff6598ef16a0b0e574355cc0242eed05dd2d2a2c.scope - libcontainer container 393bc59553ce5f9a3f0937bcff6598ef16a0b0e574355cc0242eed05dd2d2a2c. Apr 17 23:51:19.883495 systemd[1]: Created slice kubepods-besteffort-pod3fa8ad89_0178_4edb_9cca_b8a83fd62e90.slice - libcontainer container kubepods-besteffort-pod3fa8ad89_0178_4edb_9cca_b8a83fd62e90.slice. Apr 17 23:51:19.921304 containerd[1503]: time="2026-04-17T23:51:19.921172709Z" level=info msg="StartContainer for \"393bc59553ce5f9a3f0937bcff6598ef16a0b0e574355cc0242eed05dd2d2a2c\" returns successfully" Apr 17 23:51:19.929993 kubelet[2665]: I0417 23:51:19.929922 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx7ql\" (UniqueName: \"kubernetes.io/projected/3fa8ad89-0178-4edb-9cca-b8a83fd62e90-kube-api-access-qx7ql\") pod \"tigera-operator-6cf4cccc57-fz8tr\" (UID: \"3fa8ad89-0178-4edb-9cca-b8a83fd62e90\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fz8tr" Apr 17 23:51:19.929993 kubelet[2665]: I0417 23:51:19.929998 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fa8ad89-0178-4edb-9cca-b8a83fd62e90-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-fz8tr\" (UID: \"3fa8ad89-0178-4edb-9cca-b8a83fd62e90\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fz8tr" Apr 17 23:51:20.190943 containerd[1503]: time="2026-04-17T23:51:20.190868512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fz8tr,Uid:3fa8ad89-0178-4edb-9cca-b8a83fd62e90,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:51:20.236372 containerd[1503]: time="2026-04-17T23:51:20.234937938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:20.236372 containerd[1503]: time="2026-04-17T23:51:20.235038477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:20.236372 containerd[1503]: time="2026-04-17T23:51:20.235072858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:20.236372 containerd[1503]: time="2026-04-17T23:51:20.235332691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:20.275364 systemd[1]: Started cri-containerd-622853fddf310f0f797fa6cee3b96aef1e232142c27f7cabda6f3a7dc41c4ad6.scope - libcontainer container 622853fddf310f0f797fa6cee3b96aef1e232142c27f7cabda6f3a7dc41c4ad6. Apr 17 23:51:20.355987 containerd[1503]: time="2026-04-17T23:51:20.353584333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fz8tr,Uid:3fa8ad89-0178-4edb-9cca-b8a83fd62e90,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"622853fddf310f0f797fa6cee3b96aef1e232142c27f7cabda6f3a7dc41c4ad6\"" Apr 17 23:51:20.361574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603389013.mount: Deactivated successfully. 
Apr 17 23:51:20.366868 containerd[1503]: time="2026-04-17T23:51:20.365928355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:51:20.893450 kubelet[2665]: I0417 23:51:20.893196 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-4zx65" podStartSLOduration=1.893165105 podStartE2EDuration="1.893165105s" podCreationTimestamp="2026-04-17 23:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:20.892811738 +0000 UTC m=+7.389898195" watchObservedRunningTime="2026-04-17 23:51:20.893165105 +0000 UTC m=+7.390251569" Apr 17 23:51:23.293949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730912578.mount: Deactivated successfully. Apr 17 23:51:25.593174 containerd[1503]: time="2026-04-17T23:51:25.591861850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:25.593775 containerd[1503]: time="2026-04-17T23:51:25.593250959Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:25.593775 containerd[1503]: time="2026-04-17T23:51:25.593326622Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:51:25.600066 containerd[1503]: time="2026-04-17T23:51:25.600017390Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:25.601472 containerd[1503]: time="2026-04-17T23:51:25.601427789Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.23543383s" Apr 17 23:51:25.601629 containerd[1503]: time="2026-04-17T23:51:25.601597314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:51:25.611207 containerd[1503]: time="2026-04-17T23:51:25.611165870Z" level=info msg="CreateContainer within sandbox \"622853fddf310f0f797fa6cee3b96aef1e232142c27f7cabda6f3a7dc41c4ad6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:51:25.627101 containerd[1503]: time="2026-04-17T23:51:25.627010591Z" level=info msg="CreateContainer within sandbox \"622853fddf310f0f797fa6cee3b96aef1e232142c27f7cabda6f3a7dc41c4ad6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6ae9c16feb58d0bf92e57b2a677158a367e55aff1707223dfcc7a1695609f617\"" Apr 17 23:51:25.629102 containerd[1503]: time="2026-04-17T23:51:25.628299515Z" level=info msg="StartContainer for \"6ae9c16feb58d0bf92e57b2a677158a367e55aff1707223dfcc7a1695609f617\"" Apr 17 23:51:25.670381 systemd[1]: Started cri-containerd-6ae9c16feb58d0bf92e57b2a677158a367e55aff1707223dfcc7a1695609f617.scope - libcontainer container 6ae9c16feb58d0bf92e57b2a677158a367e55aff1707223dfcc7a1695609f617. 
Apr 17 23:51:25.715222 containerd[1503]: time="2026-04-17T23:51:25.715119522Z" level=info msg="StartContainer for \"6ae9c16feb58d0bf92e57b2a677158a367e55aff1707223dfcc7a1695609f617\" returns successfully" Apr 17 23:51:27.929776 kubelet[2665]: I0417 23:51:27.929646 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-fz8tr" podStartSLOduration=3.691262498 podStartE2EDuration="8.929585291s" podCreationTimestamp="2026-04-17 23:51:19 +0000 UTC" firstStartedPulling="2026-04-17 23:51:20.365460097 +0000 UTC m=+6.862546534" lastFinishedPulling="2026-04-17 23:51:25.603782891 +0000 UTC m=+12.100869327" observedRunningTime="2026-04-17 23:51:25.925698493 +0000 UTC m=+12.422784939" watchObservedRunningTime="2026-04-17 23:51:27.929585291 +0000 UTC m=+14.426671745" Apr 17 23:51:33.381207 sudo[1759]: pam_unix(sudo:session): session closed for user root Apr 17 23:51:33.403394 sshd[1756]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:33.415695 systemd[1]: sshd@8-10.244.23.222:22-4.175.71.9:48136.service: Deactivated successfully. Apr 17 23:51:33.419102 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:51:33.419645 systemd[1]: session-11.scope: Consumed 5.040s CPU time, 160.3M memory peak, 0B memory swap peak. Apr 17 23:51:33.421177 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:51:33.423526 systemd-logind[1486]: Removed session 11. Apr 17 23:51:37.518965 systemd[1]: Created slice kubepods-besteffort-pod55f819af_98d4_42be_9eb3_fecdc795e103.slice - libcontainer container kubepods-besteffort-pod55f819af_98d4_42be_9eb3_fecdc795e103.slice. 
Apr 17 23:51:37.648169 kubelet[2665]: I0417 23:51:37.646812 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-sys-fs\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648169 kubelet[2665]: I0417 23:51:37.646874 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f819af-98d4-42be-9eb3-fecdc795e103-tigera-ca-bundle\") pod \"calico-typha-fc6976b48-lv9dc\" (UID: \"55f819af-98d4-42be-9eb3-fecdc795e103\") " pod="calico-system/calico-typha-fc6976b48-lv9dc" Apr 17 23:51:37.648169 kubelet[2665]: I0417 23:51:37.647041 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-bpffs\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648169 kubelet[2665]: I0417 23:51:37.647126 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-nodeproc\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648169 kubelet[2665]: I0417 23:51:37.647246 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6z9\" (UniqueName: \"kubernetes.io/projected/55f819af-98d4-42be-9eb3-fecdc795e103-kube-api-access-zw6z9\") pod \"calico-typha-fc6976b48-lv9dc\" (UID: \"55f819af-98d4-42be-9eb3-fecdc795e103\") " pod="calico-system/calico-typha-fc6976b48-lv9dc" Apr 17 23:51:37.648898 kubelet[2665]: I0417 
23:51:37.647291 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-cni-log-dir\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648898 kubelet[2665]: I0417 23:51:37.647318 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-cni-net-dir\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648898 kubelet[2665]: I0417 23:51:37.647342 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-var-lib-calico\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648898 kubelet[2665]: I0417 23:51:37.647376 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-flexvol-driver-host\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.648898 kubelet[2665]: I0417 23:51:37.647402 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-policysync\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649337 kubelet[2665]: I0417 23:51:37.647429 2665 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-xtables-lock\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649337 kubelet[2665]: I0417 23:51:37.647459 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-lib-modules\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649337 kubelet[2665]: I0417 23:51:37.647484 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d20064da-00b7-4bbc-902a-d9d43a8e915b-node-certs\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649337 kubelet[2665]: I0417 23:51:37.647509 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d20064da-00b7-4bbc-902a-d9d43a8e915b-tigera-ca-bundle\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649337 kubelet[2665]: I0417 23:51:37.647537 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55f819af-98d4-42be-9eb3-fecdc795e103-typha-certs\") pod \"calico-typha-fc6976b48-lv9dc\" (UID: \"55f819af-98d4-42be-9eb3-fecdc795e103\") " pod="calico-system/calico-typha-fc6976b48-lv9dc" Apr 17 23:51:37.649580 kubelet[2665]: I0417 23:51:37.647564 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-cni-bin-dir\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649580 kubelet[2665]: I0417 23:51:37.647592 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d20064da-00b7-4bbc-902a-d9d43a8e915b-var-run-calico\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.649580 kubelet[2665]: I0417 23:51:37.647620 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rsh\" (UniqueName: \"kubernetes.io/projected/d20064da-00b7-4bbc-902a-d9d43a8e915b-kube-api-access-56rsh\") pod \"calico-node-vpmjp\" (UID: \"d20064da-00b7-4bbc-902a-d9d43a8e915b\") " pod="calico-system/calico-node-vpmjp" Apr 17 23:51:37.656786 systemd[1]: Created slice kubepods-besteffort-podd20064da_00b7_4bbc_902a_d9d43a8e915b.slice - libcontainer container kubepods-besteffort-podd20064da_00b7_4bbc_902a_d9d43a8e915b.slice. Apr 17 23:51:37.775534 kubelet[2665]: E0417 23:51:37.775395 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.778243 kubelet[2665]: W0417 23:51:37.776081 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.778243 kubelet[2665]: E0417 23:51:37.776168 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.780600 kubelet[2665]: E0417 23:51:37.779923 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.781154 kubelet[2665]: W0417 23:51:37.781111 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.782439 kubelet[2665]: E0417 23:51:37.782336 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.808250 kubelet[2665]: E0417 23:51:37.808210 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.808594 kubelet[2665]: W0417 23:51:37.808478 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.808594 kubelet[2665]: E0417 23:51:37.808521 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.835198 kubelet[2665]: E0417 23:51:37.832558 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710" Apr 17 23:51:37.839269 kubelet[2665]: E0417 23:51:37.839242 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.839662 kubelet[2665]: W0417 23:51:37.839383 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.839821 kubelet[2665]: E0417 23:51:37.839797 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.852390 kubelet[2665]: E0417 23:51:37.852162 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.852390 kubelet[2665]: W0417 23:51:37.852196 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.852390 kubelet[2665]: E0417 23:51:37.852231 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.852738 kubelet[2665]: E0417 23:51:37.852716 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.852857 kubelet[2665]: W0417 23:51:37.852835 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.852965 kubelet[2665]: E0417 23:51:37.852945 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.854173 kubelet[2665]: E0417 23:51:37.853400 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.854173 kubelet[2665]: W0417 23:51:37.853419 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.854173 kubelet[2665]: E0417 23:51:37.853444 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.854455 kubelet[2665]: E0417 23:51:37.854435 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.854555 kubelet[2665]: W0417 23:51:37.854534 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.854665 kubelet[2665]: E0417 23:51:37.854644 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.855441 kubelet[2665]: E0417 23:51:37.855419 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.855836 kubelet[2665]: W0417 23:51:37.855566 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.855836 kubelet[2665]: E0417 23:51:37.855592 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.856255 kubelet[2665]: E0417 23:51:37.856035 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.856386 kubelet[2665]: W0417 23:51:37.856363 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.856615 kubelet[2665]: E0417 23:51:37.856468 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.857245 kubelet[2665]: E0417 23:51:37.857076 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.857245 kubelet[2665]: W0417 23:51:37.857095 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.857245 kubelet[2665]: E0417 23:51:37.857113 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.858173 kubelet[2665]: E0417 23:51:37.857746 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.858173 kubelet[2665]: W0417 23:51:37.857766 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.858173 kubelet[2665]: E0417 23:51:37.857782 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.858427 kubelet[2665]: E0417 23:51:37.858408 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.858550 kubelet[2665]: W0417 23:51:37.858502 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.858863 kubelet[2665]: E0417 23:51:37.858824 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.860683 kubelet[2665]: E0417 23:51:37.860511 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.860683 kubelet[2665]: W0417 23:51:37.860529 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.860683 kubelet[2665]: E0417 23:51:37.860545 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.860997 kubelet[2665]: E0417 23:51:37.860975 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.861127 kubelet[2665]: W0417 23:51:37.861106 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.861370 kubelet[2665]: E0417 23:51:37.861239 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.861537 kubelet[2665]: E0417 23:51:37.861520 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.861648 kubelet[2665]: W0417 23:51:37.861627 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.861809 kubelet[2665]: E0417 23:51:37.861717 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.862193 kubelet[2665]: E0417 23:51:37.862174 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.862508 kubelet[2665]: W0417 23:51:37.862364 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.862508 kubelet[2665]: E0417 23:51:37.862391 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.863025 kubelet[2665]: E0417 23:51:37.862994 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.863459 kubelet[2665]: W0417 23:51:37.863329 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.863459 kubelet[2665]: E0417 23:51:37.863357 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.864870 kubelet[2665]: E0417 23:51:37.864849 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.865121 kubelet[2665]: W0417 23:51:37.864960 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.865121 kubelet[2665]: E0417 23:51:37.864986 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.865389 kubelet[2665]: E0417 23:51:37.865371 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.865509 kubelet[2665]: W0417 23:51:37.865488 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.865697 kubelet[2665]: E0417 23:51:37.865597 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.866198 kubelet[2665]: E0417 23:51:37.866178 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.866699 kubelet[2665]: W0417 23:51:37.866314 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.866699 kubelet[2665]: E0417 23:51:37.866341 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.868471 kubelet[2665]: E0417 23:51:37.868451 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.868842 kubelet[2665]: W0417 23:51:37.868671 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.868842 kubelet[2665]: E0417 23:51:37.868698 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.869343 kubelet[2665]: E0417 23:51:37.869324 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.869716 kubelet[2665]: W0417 23:51:37.869694 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.871720 kubelet[2665]: E0417 23:51:37.871575 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.871903 kubelet[2665]: E0417 23:51:37.871883 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.872026 kubelet[2665]: W0417 23:51:37.871990 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.873060 kubelet[2665]: E0417 23:51:37.872811 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.873285 kubelet[2665]: E0417 23:51:37.873252 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.873568 kubelet[2665]: W0417 23:51:37.873378 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.873568 kubelet[2665]: E0417 23:51:37.873402 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.873568 kubelet[2665]: I0417 23:51:37.873444 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fa2cb226-7c96-408f-a84d-5eaad54bb710-varrun\") pod \"csi-node-driver-wcb7j\" (UID: \"fa2cb226-7c96-408f-a84d-5eaad54bb710\") " pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:51:37.874305 kubelet[2665]: E0417 23:51:37.874035 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.874305 kubelet[2665]: W0417 23:51:37.874205 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.874305 kubelet[2665]: E0417 23:51:37.874226 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.876387 kubelet[2665]: I0417 23:51:37.876194 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa2cb226-7c96-408f-a84d-5eaad54bb710-socket-dir\") pod \"csi-node-driver-wcb7j\" (UID: \"fa2cb226-7c96-408f-a84d-5eaad54bb710\") " pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:51:37.876743 kubelet[2665]: E0417 23:51:37.876605 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.876743 kubelet[2665]: W0417 23:51:37.876621 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.876743 kubelet[2665]: E0417 23:51:37.876648 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.877350 kubelet[2665]: E0417 23:51:37.877215 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.877350 kubelet[2665]: W0417 23:51:37.877233 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.877350 kubelet[2665]: E0417 23:51:37.877250 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.877846 kubelet[2665]: E0417 23:51:37.877828 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.878048 kubelet[2665]: W0417 23:51:37.877961 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.878048 kubelet[2665]: E0417 23:51:37.877985 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.878378 kubelet[2665]: I0417 23:51:37.878165 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4pv\" (UniqueName: \"kubernetes.io/projected/fa2cb226-7c96-408f-a84d-5eaad54bb710-kube-api-access-cd4pv\") pod \"csi-node-driver-wcb7j\" (UID: \"fa2cb226-7c96-408f-a84d-5eaad54bb710\") " pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:51:37.880396 kubelet[2665]: E0417 23:51:37.880360 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.880633 kubelet[2665]: W0417 23:51:37.880402 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.880633 kubelet[2665]: E0417 23:51:37.880423 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.881267 kubelet[2665]: E0417 23:51:37.881236 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.881267 kubelet[2665]: W0417 23:51:37.881265 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.881455 kubelet[2665]: E0417 23:51:37.881283 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.881870 kubelet[2665]: E0417 23:51:37.881603 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.881870 kubelet[2665]: W0417 23:51:37.881622 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.881870 kubelet[2665]: E0417 23:51:37.881639 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.881870 kubelet[2665]: I0417 23:51:37.881798 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa2cb226-7c96-408f-a84d-5eaad54bb710-kubelet-dir\") pod \"csi-node-driver-wcb7j\" (UID: \"fa2cb226-7c96-408f-a84d-5eaad54bb710\") " pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:51:37.883602 kubelet[2665]: E0417 23:51:37.883540 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.883602 kubelet[2665]: W0417 23:51:37.883580 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.883602 kubelet[2665]: E0417 23:51:37.883597 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.883919 kubelet[2665]: E0417 23:51:37.883883 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.883919 kubelet[2665]: W0417 23:51:37.883919 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.884173 kubelet[2665]: E0417 23:51:37.883935 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.884428 kubelet[2665]: E0417 23:51:37.884404 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.884428 kubelet[2665]: W0417 23:51:37.884427 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.884694 kubelet[2665]: E0417 23:51:37.884446 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.884694 kubelet[2665]: I0417 23:51:37.884589 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa2cb226-7c96-408f-a84d-5eaad54bb710-registration-dir\") pod \"csi-node-driver-wcb7j\" (UID: \"fa2cb226-7c96-408f-a84d-5eaad54bb710\") " pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:51:37.885610 kubelet[2665]: E0417 23:51:37.885590 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.885610 kubelet[2665]: W0417 23:51:37.885609 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.885799 kubelet[2665]: E0417 23:51:37.885627 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.887173 kubelet[2665]: E0417 23:51:37.886317 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.887249 kubelet[2665]: W0417 23:51:37.887175 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.887249 kubelet[2665]: E0417 23:51:37.887197 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.887564 kubelet[2665]: E0417 23:51:37.887535 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.887564 kubelet[2665]: W0417 23:51:37.887561 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.887706 kubelet[2665]: E0417 23:51:37.887577 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.889047 kubelet[2665]: E0417 23:51:37.889022 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.889047 kubelet[2665]: W0417 23:51:37.889044 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.889194 kubelet[2665]: E0417 23:51:37.889061 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.967642 containerd[1503]: time="2026-04-17T23:51:37.967522103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpmjp,Uid:d20064da-00b7-4bbc-902a-d9d43a8e915b,Namespace:calico-system,Attempt:0,}" Apr 17 23:51:37.992959 kubelet[2665]: E0417 23:51:37.992903 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.992959 kubelet[2665]: W0417 23:51:37.992949 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.992959 kubelet[2665]: E0417 23:51:37.993013 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:37.994988 kubelet[2665]: E0417 23:51:37.994874 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.994988 kubelet[2665]: W0417 23:51:37.994911 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.994988 kubelet[2665]: E0417 23:51:37.994955 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:37.999439 kubelet[2665]: E0417 23:51:37.997849 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:37.999439 kubelet[2665]: W0417 23:51:37.997869 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:37.999439 kubelet[2665]: E0417 23:51:37.997888 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.000855 kubelet[2665]: E0417 23:51:38.000777 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.000855 kubelet[2665]: W0417 23:51:38.000797 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.000855 kubelet[2665]: E0417 23:51:38.000816 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.003207 kubelet[2665]: E0417 23:51:38.001794 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.003207 kubelet[2665]: W0417 23:51:38.001809 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.003207 kubelet[2665]: E0417 23:51:38.001825 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.003207 kubelet[2665]: E0417 23:51:38.003070 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.003207 kubelet[2665]: W0417 23:51:38.003084 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.003207 kubelet[2665]: E0417 23:51:38.003107 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.004306 kubelet[2665]: E0417 23:51:38.004211 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.004306 kubelet[2665]: W0417 23:51:38.004230 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.004306 kubelet[2665]: E0417 23:51:38.004248 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.007226 kubelet[2665]: E0417 23:51:38.007204 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.007435 kubelet[2665]: W0417 23:51:38.007320 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.007435 kubelet[2665]: E0417 23:51:38.007347 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.008444 kubelet[2665]: E0417 23:51:38.008290 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.008444 kubelet[2665]: W0417 23:51:38.008309 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.008444 kubelet[2665]: E0417 23:51:38.008326 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.009442 kubelet[2665]: E0417 23:51:38.009259 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.009442 kubelet[2665]: W0417 23:51:38.009279 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.009442 kubelet[2665]: E0417 23:51:38.009297 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.010696 kubelet[2665]: E0417 23:51:38.010548 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.010696 kubelet[2665]: W0417 23:51:38.010568 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.010696 kubelet[2665]: E0417 23:51:38.010585 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.011430 kubelet[2665]: E0417 23:51:38.011175 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.011430 kubelet[2665]: W0417 23:51:38.011193 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.011430 kubelet[2665]: E0417 23:51:38.011210 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.012349 kubelet[2665]: E0417 23:51:38.011662 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.012349 kubelet[2665]: W0417 23:51:38.011680 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.012349 kubelet[2665]: E0417 23:51:38.011696 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.012991 kubelet[2665]: E0417 23:51:38.012798 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.012991 kubelet[2665]: W0417 23:51:38.012817 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.012991 kubelet[2665]: E0417 23:51:38.012833 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.013448 kubelet[2665]: E0417 23:51:38.013393 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.013448 kubelet[2665]: W0417 23:51:38.013411 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.013448 kubelet[2665]: E0417 23:51:38.013427 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.014332 kubelet[2665]: E0417 23:51:38.014115 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.014332 kubelet[2665]: W0417 23:51:38.014162 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.014332 kubelet[2665]: E0417 23:51:38.014181 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.014936 kubelet[2665]: E0417 23:51:38.014792 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.014936 kubelet[2665]: W0417 23:51:38.014809 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.014936 kubelet[2665]: E0417 23:51:38.014838 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.015612 kubelet[2665]: E0417 23:51:38.015401 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.015612 kubelet[2665]: W0417 23:51:38.015418 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.015612 kubelet[2665]: E0417 23:51:38.015434 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.017163 kubelet[2665]: E0417 23:51:38.016124 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.017163 kubelet[2665]: W0417 23:51:38.016243 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.017163 kubelet[2665]: E0417 23:51:38.016264 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.017781 kubelet[2665]: E0417 23:51:38.017592 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.017781 kubelet[2665]: W0417 23:51:38.017610 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.017781 kubelet[2665]: E0417 23:51:38.017627 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.018628 kubelet[2665]: E0417 23:51:38.018595 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.018831 kubelet[2665]: W0417 23:51:38.018728 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.018831 kubelet[2665]: E0417 23:51:38.018753 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.019626 kubelet[2665]: E0417 23:51:38.019468 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.019626 kubelet[2665]: W0417 23:51:38.019489 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.019626 kubelet[2665]: E0417 23:51:38.019506 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.020353 kubelet[2665]: E0417 23:51:38.020195 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.020353 kubelet[2665]: W0417 23:51:38.020213 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.020353 kubelet[2665]: E0417 23:51:38.020229 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.020840 kubelet[2665]: E0417 23:51:38.020764 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.020840 kubelet[2665]: W0417 23:51:38.020783 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.020840 kubelet[2665]: E0417 23:51:38.020803 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.023899 kubelet[2665]: E0417 23:51:38.023873 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.023899 kubelet[2665]: W0417 23:51:38.023897 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.024194 kubelet[2665]: E0417 23:51:38.024166 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:38.045208 kubelet[2665]: E0417 23:51:38.043248 2665 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:38.045208 kubelet[2665]: W0417 23:51:38.043299 2665 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:38.045208 kubelet[2665]: E0417 23:51:38.043331 2665 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:38.067497 containerd[1503]: time="2026-04-17T23:51:38.066272866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:38.067497 containerd[1503]: time="2026-04-17T23:51:38.067215016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:38.067497 containerd[1503]: time="2026-04-17T23:51:38.067237540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:38.067497 containerd[1503]: time="2026-04-17T23:51:38.067396607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:38.103383 systemd[1]: Started cri-containerd-9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7.scope - libcontainer container 9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7. 
Apr 17 23:51:38.132087 containerd[1503]: time="2026-04-17T23:51:38.131456578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6976b48-lv9dc,Uid:55f819af-98d4-42be-9eb3-fecdc795e103,Namespace:calico-system,Attempt:0,}" Apr 17 23:51:38.162568 containerd[1503]: time="2026-04-17T23:51:38.162334445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpmjp,Uid:d20064da-00b7-4bbc-902a-d9d43a8e915b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\"" Apr 17 23:51:38.166618 containerd[1503]: time="2026-04-17T23:51:38.166273408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:51:38.189167 containerd[1503]: time="2026-04-17T23:51:38.188329145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:38.189167 containerd[1503]: time="2026-04-17T23:51:38.188501640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:38.189167 containerd[1503]: time="2026-04-17T23:51:38.188533563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:38.189167 containerd[1503]: time="2026-04-17T23:51:38.188679645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:38.231656 systemd[1]: Started cri-containerd-4df2001d3e5f4c7c7e72308fb11c1a68a8f6727e16a547447452c1fdd7736d89.scope - libcontainer container 4df2001d3e5f4c7c7e72308fb11c1a68a8f6727e16a547447452c1fdd7736d89. 
Apr 17 23:51:38.332618 containerd[1503]: time="2026-04-17T23:51:38.332355152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6976b48-lv9dc,Uid:55f819af-98d4-42be-9eb3-fecdc795e103,Namespace:calico-system,Attempt:0,} returns sandbox id \"4df2001d3e5f4c7c7e72308fb11c1a68a8f6727e16a547447452c1fdd7736d89\"" Apr 17 23:51:39.744982 kubelet[2665]: E0417 23:51:39.744908 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710" Apr 17 23:51:39.778318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812860098.mount: Deactivated successfully. Apr 17 23:51:39.917247 containerd[1503]: time="2026-04-17T23:51:39.916273202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:39.918823 containerd[1503]: time="2026-04-17T23:51:39.918734264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 17 23:51:39.919996 containerd[1503]: time="2026-04-17T23:51:39.919900922Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:39.924050 containerd[1503]: time="2026-04-17T23:51:39.923542691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:39.925174 containerd[1503]: time="2026-04-17T23:51:39.924894377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with 
image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.758529535s" Apr 17 23:51:39.925174 containerd[1503]: time="2026-04-17T23:51:39.924967565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:51:39.927813 containerd[1503]: time="2026-04-17T23:51:39.927697618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:51:39.950173 containerd[1503]: time="2026-04-17T23:51:39.950094082Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:51:39.971467 containerd[1503]: time="2026-04-17T23:51:39.971396689Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc\"" Apr 17 23:51:39.974765 containerd[1503]: time="2026-04-17T23:51:39.973017383Z" level=info msg="StartContainer for \"7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc\"" Apr 17 23:51:40.048264 systemd[1]: run-containerd-runc-k8s.io-7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc-runc.zpmVmI.mount: Deactivated successfully. Apr 17 23:51:40.061428 systemd[1]: Started cri-containerd-7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc.scope - libcontainer container 7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc. 
Apr 17 23:51:40.123837 containerd[1503]: time="2026-04-17T23:51:40.123749842Z" level=info msg="StartContainer for \"7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc\" returns successfully" Apr 17 23:51:40.151196 systemd[1]: cri-containerd-7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc.scope: Deactivated successfully. Apr 17 23:51:40.267103 containerd[1503]: time="2026-04-17T23:51:40.239670817Z" level=info msg="shim disconnected" id=7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc namespace=k8s.io Apr 17 23:51:40.267509 containerd[1503]: time="2026-04-17T23:51:40.267125987Z" level=warning msg="cleaning up after shim disconnected" id=7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc namespace=k8s.io Apr 17 23:51:40.267509 containerd[1503]: time="2026-04-17T23:51:40.267179116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:51:40.778396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b401d295a740e6e2076565db602ef521dad047fb9d9d94607dbbc4f1d3289cc-rootfs.mount: Deactivated successfully. 
Apr 17 23:51:41.745104 kubelet[2665]: E0417 23:51:41.744556 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710" Apr 17 23:51:43.748422 kubelet[2665]: E0417 23:51:43.748359 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710" Apr 17 23:51:44.223549 containerd[1503]: time="2026-04-17T23:51:44.223403061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:44.225220 containerd[1503]: time="2026-04-17T23:51:44.225105945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 17 23:51:44.226380 containerd[1503]: time="2026-04-17T23:51:44.226321711Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:44.231187 containerd[1503]: time="2026-04-17T23:51:44.230585656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:44.231792 containerd[1503]: time="2026-04-17T23:51:44.231751034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.304006775s" Apr 17 23:51:44.231880 containerd[1503]: time="2026-04-17T23:51:44.231803912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:51:44.235058 containerd[1503]: time="2026-04-17T23:51:44.234817005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:51:44.283067 containerd[1503]: time="2026-04-17T23:51:44.282916552Z" level=info msg="CreateContainer within sandbox \"4df2001d3e5f4c7c7e72308fb11c1a68a8f6727e16a547447452c1fdd7736d89\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:51:44.302199 containerd[1503]: time="2026-04-17T23:51:44.302007528Z" level=info msg="CreateContainer within sandbox \"4df2001d3e5f4c7c7e72308fb11c1a68a8f6727e16a547447452c1fdd7736d89\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49\"" Apr 17 23:51:44.304559 containerd[1503]: time="2026-04-17T23:51:44.304405600Z" level=info msg="StartContainer for \"e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49\"" Apr 17 23:51:44.360533 systemd[1]: Started cri-containerd-e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49.scope - libcontainer container e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49. 
Apr 17 23:51:44.436551 containerd[1503]: time="2026-04-17T23:51:44.436469832Z" level=info msg="StartContainer for \"e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49\" returns successfully"
Apr 17 23:51:44.992887 kubelet[2665]: I0417 23:51:44.992799 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-fc6976b48-lv9dc" podStartSLOduration=2.097775375 podStartE2EDuration="7.992785613s" podCreationTimestamp="2026-04-17 23:51:37 +0000 UTC" firstStartedPulling="2026-04-17 23:51:38.3385326 +0000 UTC m=+24.835619034" lastFinishedPulling="2026-04-17 23:51:44.233542829 +0000 UTC m=+30.730629272" observedRunningTime="2026-04-17 23:51:44.992662219 +0000 UTC m=+31.489748676" watchObservedRunningTime="2026-04-17 23:51:44.992785613 +0000 UTC m=+31.489872053"
Apr 17 23:51:45.255266 systemd[1]: run-containerd-runc-k8s.io-e5d8d3314331a2d4689f1b28a778932dce80fcdc6ac6be5ccd1068ae7d711f49-runc.eN2MlA.mount: Deactivated successfully.
Apr 17 23:51:45.747266 kubelet[2665]: E0417 23:51:45.744857 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:45.989657 kubelet[2665]: I0417 23:51:45.989420 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:51:47.746928 kubelet[2665]: E0417 23:51:47.746158 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:49.744935 kubelet[2665]: E0417 23:51:49.744866 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:51.745109 kubelet[2665]: E0417 23:51:51.744441 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:53.747211 kubelet[2665]: E0417 23:51:53.747125 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:53.793073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027468510.mount: Deactivated successfully.
Apr 17 23:51:53.846102 containerd[1503]: time="2026-04-17T23:51:53.843916556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:51:53.847372 containerd[1503]: time="2026-04-17T23:51:53.841020252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:53.849847 containerd[1503]: time="2026-04-17T23:51:53.849806361Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:53.851294 containerd[1503]: time="2026-04-17T23:51:53.851256207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.616249346s"
Apr 17 23:51:53.851439 containerd[1503]: time="2026-04-17T23:51:53.851409409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:51:53.852091 containerd[1503]: time="2026-04-17T23:51:53.852053277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:53.869491 containerd[1503]: time="2026-04-17T23:51:53.869434915Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:51:53.954787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450989141.mount: Deactivated successfully.
Apr 17 23:51:53.959824 containerd[1503]: time="2026-04-17T23:51:53.959748210Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7\""
Apr 17 23:51:53.960621 containerd[1503]: time="2026-04-17T23:51:53.960593228Z" level=info msg="StartContainer for \"d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7\""
Apr 17 23:51:54.072626 systemd[1]: Started cri-containerd-d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7.scope - libcontainer container d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7.
Apr 17 23:51:54.129308 containerd[1503]: time="2026-04-17T23:51:54.129224362Z" level=info msg="StartContainer for \"d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7\" returns successfully"
Apr 17 23:51:54.233318 systemd[1]: cri-containerd-d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7.scope: Deactivated successfully.
Apr 17 23:51:54.381358 containerd[1503]: time="2026-04-17T23:51:54.381204506Z" level=info msg="shim disconnected" id=d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7 namespace=k8s.io
Apr 17 23:51:54.381591 containerd[1503]: time="2026-04-17T23:51:54.381369300Z" level=warning msg="cleaning up after shim disconnected" id=d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7 namespace=k8s.io
Apr 17 23:51:54.381591 containerd[1503]: time="2026-04-17T23:51:54.381388905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:51:54.793348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3004e708d9f4b185fa76d852aa658b3b2deaa0d01b58cc49b23e9ebba38c8c7-rootfs.mount: Deactivated successfully.
Apr 17 23:51:55.046493 containerd[1503]: time="2026-04-17T23:51:55.043207870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:51:55.746192 kubelet[2665]: E0417 23:51:55.745477 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:57.746001 kubelet[2665]: E0417 23:51:57.745931 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:51:59.744832 kubelet[2665]: E0417 23:51:59.744762 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710"
Apr 17 23:52:00.168114 containerd[1503]: time="2026-04-17T23:52:00.168019826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:00.173290 containerd[1503]: time="2026-04-17T23:52:00.173240583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 23:52:00.174377 containerd[1503]: time="2026-04-17T23:52:00.174292576Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:00.179828 containerd[1503]: time="2026-04-17T23:52:00.179724645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:00.181850 containerd[1503]: time="2026-04-17T23:52:00.180889401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.137571321s"
Apr 17 23:52:00.181850 containerd[1503]: time="2026-04-17T23:52:00.180939087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 23:52:00.188414 containerd[1503]: time="2026-04-17T23:52:00.188360758Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:52:00.229984 containerd[1503]: time="2026-04-17T23:52:00.229926724Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0\""
Apr 17 23:52:00.231284 containerd[1503]: time="2026-04-17T23:52:00.231232280Z" level=info msg="StartContainer for \"c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0\""
Apr 17 23:52:00.281538 systemd[1]: Started cri-containerd-c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0.scope - libcontainer container c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0.
Apr 17 23:52:00.344484 containerd[1503]: time="2026-04-17T23:52:00.344412887Z" level=info msg="StartContainer for \"c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0\" returns successfully"
Apr 17 23:52:01.433635 systemd[1]: cri-containerd-c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0.scope: Deactivated successfully.
Apr 17 23:52:01.460778 kubelet[2665]: I0417 23:52:01.460368 2665 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 17 23:52:01.501432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0-rootfs.mount: Deactivated successfully.
Apr 17 23:52:01.564557 containerd[1503]: time="2026-04-17T23:52:01.563897879Z" level=info msg="shim disconnected" id=c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0 namespace=k8s.io
Apr 17 23:52:01.564557 containerd[1503]: time="2026-04-17T23:52:01.563976693Z" level=warning msg="cleaning up after shim disconnected" id=c20b8621792bc778a91685c2ae2a72cfc1a1accaa5478b38dae619ec9d2ea1a0 namespace=k8s.io
Apr 17 23:52:01.564557 containerd[1503]: time="2026-04-17T23:52:01.564004397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:52:01.608186 containerd[1503]: time="2026-04-17T23:52:01.607193558Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:52:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:52:01.670247 systemd[1]: Created slice kubepods-burstable-podb54c39e6_5cc3_458a_a4e5_3a986945460e.slice - libcontainer container kubepods-burstable-podb54c39e6_5cc3_458a_a4e5_3a986945460e.slice.
Apr 17 23:52:01.692380 systemd[1]: Created slice kubepods-besteffort-pod1254b0aa_66e4_4dd1_bb91_02f716eb390c.slice - libcontainer container kubepods-besteffort-pod1254b0aa_66e4_4dd1_bb91_02f716eb390c.slice.
Apr 17 23:52:01.704787 kubelet[2665]: I0417 23:52:01.704234 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-ca-bundle\") pod \"whisker-6fdcf6b9f6-dqfx4\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") " pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:01.704787 kubelet[2665]: I0417 23:52:01.704313 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-nginx-config\") pod \"whisker-6fdcf6b9f6-dqfx4\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") " pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:01.704787 kubelet[2665]: I0417 23:52:01.704441 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b54c39e6-5cc3-458a-a4e5-3a986945460e-config-volume\") pod \"coredns-7d764666f9-wmjzl\" (UID: \"b54c39e6-5cc3-458a-a4e5-3a986945460e\") " pod="kube-system/coredns-7d764666f9-wmjzl"
Apr 17 23:52:01.704787 kubelet[2665]: I0417 23:52:01.704498 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrdj\" (UniqueName: \"kubernetes.io/projected/b54c39e6-5cc3-458a-a4e5-3a986945460e-kube-api-access-8zrdj\") pod \"coredns-7d764666f9-wmjzl\" (UID: \"b54c39e6-5cc3-458a-a4e5-3a986945460e\") " pod="kube-system/coredns-7d764666f9-wmjzl"
Apr 17 23:52:01.704787 kubelet[2665]: I0417 23:52:01.704541 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f508cf3-d08e-41ad-9a03-91094ecb3e48-config-volume\") pod \"coredns-7d764666f9-49g4d\" (UID: \"3f508cf3-d08e-41ad-9a03-91094ecb3e48\") " pod="kube-system/coredns-7d764666f9-49g4d"
Apr 17 23:52:01.705770 kubelet[2665]: I0417 23:52:01.704579 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkr7\" (UniqueName: \"kubernetes.io/projected/1254b0aa-66e4-4dd1-bb91-02f716eb390c-kube-api-access-crkr7\") pod \"whisker-6fdcf6b9f6-dqfx4\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") " pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:01.705770 kubelet[2665]: I0417 23:52:01.704613 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-backend-key-pair\") pod \"whisker-6fdcf6b9f6-dqfx4\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") " pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:01.705770 kubelet[2665]: I0417 23:52:01.704644 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkk4\" (UniqueName: \"kubernetes.io/projected/3f508cf3-d08e-41ad-9a03-91094ecb3e48-kube-api-access-9vkk4\") pod \"coredns-7d764666f9-49g4d\" (UID: \"3f508cf3-d08e-41ad-9a03-91094ecb3e48\") " pod="kube-system/coredns-7d764666f9-49g4d"
Apr 17 23:52:01.709385 systemd[1]: Created slice kubepods-burstable-pod3f508cf3_d08e_41ad_9a03_91094ecb3e48.slice - libcontainer container kubepods-burstable-pod3f508cf3_d08e_41ad_9a03_91094ecb3e48.slice.
Apr 17 23:52:01.719918 systemd[1]: Created slice kubepods-besteffort-pod5c0d7228_c271_42fa_829b_e0c514b077bb.slice - libcontainer container kubepods-besteffort-pod5c0d7228_c271_42fa_829b_e0c514b077bb.slice.
Apr 17 23:52:01.734340 systemd[1]: Created slice kubepods-besteffort-pod95dcdccd_c202_419d_8c14_1309ffe304e6.slice - libcontainer container kubepods-besteffort-pod95dcdccd_c202_419d_8c14_1309ffe304e6.slice.
Apr 17 23:52:01.746241 systemd[1]: Created slice kubepods-besteffort-podce9100b6_6848_4690_a4ca_697384ee1ee9.slice - libcontainer container kubepods-besteffort-podce9100b6_6848_4690_a4ca_697384ee1ee9.slice.
Apr 17 23:52:01.763220 systemd[1]: Created slice kubepods-besteffort-pod813387bb_b4f1_4fed_b573_d52cb41c7a62.slice - libcontainer container kubepods-besteffort-pod813387bb_b4f1_4fed_b573_d52cb41c7a62.slice.
Apr 17 23:52:01.772869 systemd[1]: Created slice kubepods-besteffort-podfa2cb226_7c96_408f_a84d_5eaad54bb710.slice - libcontainer container kubepods-besteffort-podfa2cb226_7c96_408f_a84d_5eaad54bb710.slice.
Apr 17 23:52:01.780575 containerd[1503]: time="2026-04-17T23:52:01.780008630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcb7j,Uid:fa2cb226-7c96-408f-a84d-5eaad54bb710,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:01.806498 kubelet[2665]: I0417 23:52:01.806231 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/813387bb-b4f1-4fed-b573-d52cb41c7a62-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-6lcgw\" (UID: \"813387bb-b4f1-4fed-b573-d52cb41c7a62\") " pod="calico-system/goldmane-9f7667bb8-6lcgw"
Apr 17 23:52:01.806498 kubelet[2665]: I0417 23:52:01.806381 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dcdccd-c202-419d-8c14-1309ffe304e6-tigera-ca-bundle\") pod \"calico-kube-controllers-9b7569c47-gw5gc\" (UID: \"95dcdccd-c202-419d-8c14-1309ffe304e6\") " pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc"
Apr 17 23:52:01.806498 kubelet[2665]: I0417 23:52:01.806419 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rsx\" (UniqueName: \"kubernetes.io/projected/95dcdccd-c202-419d-8c14-1309ffe304e6-kube-api-access-66rsx\") pod \"calico-kube-controllers-9b7569c47-gw5gc\" (UID: \"95dcdccd-c202-419d-8c14-1309ffe304e6\") " pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc"
Apr 17 23:52:01.820180 kubelet[2665]: I0417 23:52:01.817563 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84c7\" (UniqueName: \"kubernetes.io/projected/ce9100b6-6848-4690-a4ca-697384ee1ee9-kube-api-access-c84c7\") pod \"calico-apiserver-6545b6d484-khkvj\" (UID: \"ce9100b6-6848-4690-a4ca-697384ee1ee9\") " pod="calico-system/calico-apiserver-6545b6d484-khkvj"
Apr 17 23:52:01.825166 kubelet[2665]: I0417 23:52:01.822148 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c0d7228-c271-42fa-829b-e0c514b077bb-calico-apiserver-certs\") pod \"calico-apiserver-6545b6d484-99zm2\" (UID: \"5c0d7228-c271-42fa-829b-e0c514b077bb\") " pod="calico-system/calico-apiserver-6545b6d484-99zm2"
Apr 17 23:52:01.825166 kubelet[2665]: I0417 23:52:01.822215 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce9100b6-6848-4690-a4ca-697384ee1ee9-calico-apiserver-certs\") pod \"calico-apiserver-6545b6d484-khkvj\" (UID: \"ce9100b6-6848-4690-a4ca-697384ee1ee9\") " pod="calico-system/calico-apiserver-6545b6d484-khkvj"
Apr 17 23:52:01.825166 kubelet[2665]: I0417 23:52:01.825015 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/813387bb-b4f1-4fed-b573-d52cb41c7a62-goldmane-key-pair\") pod \"goldmane-9f7667bb8-6lcgw\" (UID: \"813387bb-b4f1-4fed-b573-d52cb41c7a62\") " pod="calico-system/goldmane-9f7667bb8-6lcgw"
Apr 17 23:52:01.825166 kubelet[2665]: I0417 23:52:01.825053 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bxm\" (UniqueName: \"kubernetes.io/projected/813387bb-b4f1-4fed-b573-d52cb41c7a62-kube-api-access-s6bxm\") pod \"goldmane-9f7667bb8-6lcgw\" (UID: \"813387bb-b4f1-4fed-b573-d52cb41c7a62\") " pod="calico-system/goldmane-9f7667bb8-6lcgw"
Apr 17 23:52:01.831166 kubelet[2665]: I0417 23:52:01.830081 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/813387bb-b4f1-4fed-b573-d52cb41c7a62-config\") pod \"goldmane-9f7667bb8-6lcgw\" (UID: \"813387bb-b4f1-4fed-b573-d52cb41c7a62\") " pod="calico-system/goldmane-9f7667bb8-6lcgw"
Apr 17 23:52:01.831166 kubelet[2665]: I0417 23:52:01.830174 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjpzf\" (UniqueName: \"kubernetes.io/projected/5c0d7228-c271-42fa-829b-e0c514b077bb-kube-api-access-pjpzf\") pod \"calico-apiserver-6545b6d484-99zm2\" (UID: \"5c0d7228-c271-42fa-829b-e0c514b077bb\") " pod="calico-system/calico-apiserver-6545b6d484-99zm2"
Apr 17 23:52:01.995788 containerd[1503]: time="2026-04-17T23:52:01.994819601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wmjzl,Uid:b54c39e6-5cc3-458a-a4e5-3a986945460e,Namespace:kube-system,Attempt:0,}"
Apr 17 23:52:02.007998 containerd[1503]: time="2026-04-17T23:52:02.007490854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdcf6b9f6-dqfx4,Uid:1254b0aa-66e4-4dd1-bb91-02f716eb390c,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:02.019556 containerd[1503]: time="2026-04-17T23:52:02.018360044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-49g4d,Uid:3f508cf3-d08e-41ad-9a03-91094ecb3e48,Namespace:kube-system,Attempt:0,}"
Apr 17 23:52:02.042882 containerd[1503]: time="2026-04-17T23:52:02.042818982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-99zm2,Uid:5c0d7228-c271-42fa-829b-e0c514b077bb,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:02.045733 containerd[1503]: time="2026-04-17T23:52:02.043458435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b7569c47-gw5gc,Uid:95dcdccd-c202-419d-8c14-1309ffe304e6,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:02.056279 containerd[1503]: time="2026-04-17T23:52:02.056190989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-khkvj,Uid:ce9100b6-6848-4690-a4ca-697384ee1ee9,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:02.078504 containerd[1503]: time="2026-04-17T23:52:02.078432562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6lcgw,Uid:813387bb-b4f1-4fed-b573-d52cb41c7a62,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:02.178925 containerd[1503]: time="2026-04-17T23:52:02.178194512Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:52:02.339129 containerd[1503]: time="2026-04-17T23:52:02.336844774Z" level=info msg="CreateContainer within sandbox \"9f209d1334fdefcac4a9a934232d063bbb1e8dd7e5141f82b4c2727861b35fc7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b2370a88e999a8391bda07b41c3b92257490730d2264a70a88a242813983284\""
Apr 17 23:52:02.344989 containerd[1503]: time="2026-04-17T23:52:02.344775656Z" level=info msg="StartContainer for \"5b2370a88e999a8391bda07b41c3b92257490730d2264a70a88a242813983284\""
Apr 17 23:52:02.591328 containerd[1503]: time="2026-04-17T23:52:02.590534217Z" level=error msg="Failed to destroy network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.599314 containerd[1503]: time="2026-04-17T23:52:02.594334023Z" level=error msg="Failed to destroy network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.594967 systemd[1]: Started cri-containerd-5b2370a88e999a8391bda07b41c3b92257490730d2264a70a88a242813983284.scope - libcontainer container 5b2370a88e999a8391bda07b41c3b92257490730d2264a70a88a242813983284.
Apr 17 23:52:02.601959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034-shm.mount: Deactivated successfully.
Apr 17 23:52:02.608369 containerd[1503]: time="2026-04-17T23:52:02.607845631Z" level=error msg="encountered an error cleaning up failed sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.608369 containerd[1503]: time="2026-04-17T23:52:02.608013482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wmjzl,Uid:b54c39e6-5cc3-458a-a4e5-3a986945460e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.608650 containerd[1503]: time="2026-04-17T23:52:02.608610165Z" level=error msg="Failed to destroy network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.610125 containerd[1503]: time="2026-04-17T23:52:02.609193086Z" level=error msg="encountered an error cleaning up failed sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.618166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb-shm.mount: Deactivated successfully.
Apr 17 23:52:02.618349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11-shm.mount: Deactivated successfully.
Apr 17 23:52:02.630584 containerd[1503]: time="2026-04-17T23:52:02.628632034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdcf6b9f6-dqfx4,Uid:1254b0aa-66e4-4dd1-bb91-02f716eb390c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.638362 containerd[1503]: time="2026-04-17T23:52:02.627982028Z" level=error msg="encountered an error cleaning up failed sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.638362 containerd[1503]: time="2026-04-17T23:52:02.632615705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcb7j,Uid:fa2cb226-7c96-408f-a84d-5eaad54bb710,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.638362 containerd[1503]: time="2026-04-17T23:52:02.632835624Z" level=error msg="Failed to destroy network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.640590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98-shm.mount: Deactivated successfully.
Apr 17 23:52:02.649231 kubelet[2665]: E0417 23:52:02.649159 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.649808 kubelet[2665]: E0417 23:52:02.649310 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wmjzl"
Apr 17 23:52:02.649808 kubelet[2665]: E0417 23:52:02.649356 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wmjzl"
Apr 17 23:52:02.649808 kubelet[2665]: E0417 23:52:02.649476 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-wmjzl_kube-system(b54c39e6-5cc3-458a-a4e5-3a986945460e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-wmjzl_kube-system(b54c39e6-5cc3-458a-a4e5-3a986945460e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-wmjzl" podUID="b54c39e6-5cc3-458a-a4e5-3a986945460e"
Apr 17 23:52:02.650705 kubelet[2665]: E0417 23:52:02.649983 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.650705 kubelet[2665]: E0417 23:52:02.650194 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:52:02.650705 kubelet[2665]: E0417 23:52:02.650383 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:02.650705 kubelet[2665]: E0417 23:52:02.650413 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fdcf6b9f6-dqfx4"
Apr 17 23:52:02.651345 kubelet[2665]: E0417 23:52:02.650469 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fdcf6b9f6-dqfx4_calico-system(1254b0aa-66e4-4dd1-bb91-02f716eb390c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fdcf6b9f6-dqfx4_calico-system(1254b0aa-66e4-4dd1-bb91-02f716eb390c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fdcf6b9f6-dqfx4" podUID="1254b0aa-66e4-4dd1-bb91-02f716eb390c"
Apr 17 23:52:02.653213 kubelet[2665]: E0417 23:52:02.652967 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wcb7j"
Apr 17 23:52:02.653213 kubelet[2665]: E0417 23:52:02.653010 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wcb7j" Apr 17 23:52:02.653213 kubelet[2665]: E0417 23:52:02.653096 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wcb7j_calico-system(fa2cb226-7c96-408f-a84d-5eaad54bb710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wcb7j_calico-system(fa2cb226-7c96-408f-a84d-5eaad54bb710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wcb7j" podUID="fa2cb226-7c96-408f-a84d-5eaad54bb710" Apr 17 23:52:02.654584 containerd[1503]: time="2026-04-17T23:52:02.654313709Z" level=error msg="encountered an error cleaning up failed sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.654584 containerd[1503]: time="2026-04-17T23:52:02.654428197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-49g4d,Uid:3f508cf3-d08e-41ad-9a03-91094ecb3e48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.655784 kubelet[2665]: E0417 23:52:02.654814 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.655784 kubelet[2665]: E0417 23:52:02.654865 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-49g4d" Apr 17 23:52:02.655784 kubelet[2665]: E0417 23:52:02.654923 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-49g4d" Apr 17 23:52:02.657355 kubelet[2665]: E0417 23:52:02.654973 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-49g4d_kube-system(3f508cf3-d08e-41ad-9a03-91094ecb3e48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-49g4d_kube-system(3f508cf3-d08e-41ad-9a03-91094ecb3e48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7d764666f9-49g4d" podUID="3f508cf3-d08e-41ad-9a03-91094ecb3e48" Apr 17 23:52:02.696518 containerd[1503]: time="2026-04-17T23:52:02.696312329Z" level=error msg="Failed to destroy network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.698404 containerd[1503]: time="2026-04-17T23:52:02.698223879Z" level=error msg="encountered an error cleaning up failed sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.698724 containerd[1503]: time="2026-04-17T23:52:02.698565976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6lcgw,Uid:813387bb-b4f1-4fed-b573-d52cb41c7a62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.699154 kubelet[2665]: E0417 23:52:02.699082 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.699773 kubelet[2665]: E0417 23:52:02.699430 2665 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6lcgw" Apr 17 23:52:02.699773 kubelet[2665]: E0417 23:52:02.699570 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6lcgw" Apr 17 23:52:02.700419 kubelet[2665]: E0417 23:52:02.699921 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-6lcgw_calico-system(813387bb-b4f1-4fed-b573-d52cb41c7a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-6lcgw_calico-system(813387bb-b4f1-4fed-b573-d52cb41c7a62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-6lcgw" podUID="813387bb-b4f1-4fed-b573-d52cb41c7a62" Apr 17 23:52:02.722191 containerd[1503]: time="2026-04-17T23:52:02.721704580Z" level=error msg="Failed to destroy network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.723562 containerd[1503]: time="2026-04-17T23:52:02.723525952Z" level=error msg="encountered an error cleaning up failed sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.724303 containerd[1503]: time="2026-04-17T23:52:02.724108696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b7569c47-gw5gc,Uid:95dcdccd-c202-419d-8c14-1309ffe304e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.724733 kubelet[2665]: E0417 23:52:02.724614 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.725783 kubelet[2665]: E0417 23:52:02.724749 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc" Apr 17 23:52:02.725783 kubelet[2665]: E0417 23:52:02.724783 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc" Apr 17 23:52:02.725783 kubelet[2665]: E0417 23:52:02.724902 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b7569c47-gw5gc_calico-system(95dcdccd-c202-419d-8c14-1309ffe304e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b7569c47-gw5gc_calico-system(95dcdccd-c202-419d-8c14-1309ffe304e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc" podUID="95dcdccd-c202-419d-8c14-1309ffe304e6" Apr 17 23:52:02.752671 containerd[1503]: time="2026-04-17T23:52:02.752592661Z" level=error msg="Failed to destroy network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.753277 containerd[1503]: time="2026-04-17T23:52:02.753099990Z" level=error msg="encountered an error cleaning up failed sandbox 
\"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.753277 containerd[1503]: time="2026-04-17T23:52:02.753199892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-khkvj,Uid:ce9100b6-6848-4690-a4ca-697384ee1ee9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.754400 kubelet[2665]: E0417 23:52:02.753631 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.754400 kubelet[2665]: E0417 23:52:02.753715 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6545b6d484-khkvj" Apr 17 23:52:02.754400 kubelet[2665]: E0417 23:52:02.753746 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6545b6d484-khkvj" Apr 17 23:52:02.754584 kubelet[2665]: E0417 23:52:02.753847 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6545b6d484-khkvj_calico-system(ce9100b6-6848-4690-a4ca-697384ee1ee9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6545b6d484-khkvj_calico-system(ce9100b6-6848-4690-a4ca-697384ee1ee9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6545b6d484-khkvj" podUID="ce9100b6-6848-4690-a4ca-697384ee1ee9" Apr 17 23:52:02.771986 containerd[1503]: time="2026-04-17T23:52:02.771901070Z" level=error msg="Failed to destroy network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.772455 containerd[1503]: time="2026-04-17T23:52:02.772416644Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.772555 
containerd[1503]: time="2026-04-17T23:52:02.772495121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-99zm2,Uid:5c0d7228-c271-42fa-829b-e0c514b077bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.773803 kubelet[2665]: E0417 23:52:02.773533 2665 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.773803 kubelet[2665]: E0417 23:52:02.773670 2665 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6545b6d484-99zm2" Apr 17 23:52:02.773803 kubelet[2665]: E0417 23:52:02.773710 2665 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6545b6d484-99zm2" Apr 17 
23:52:02.774237 containerd[1503]: time="2026-04-17T23:52:02.773818270Z" level=info msg="StartContainer for \"5b2370a88e999a8391bda07b41c3b92257490730d2264a70a88a242813983284\" returns successfully" Apr 17 23:52:02.774628 kubelet[2665]: E0417 23:52:02.774429 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6545b6d484-99zm2_calico-system(5c0d7228-c271-42fa-829b-e0c514b077bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6545b6d484-99zm2_calico-system(5c0d7228-c271-42fa-829b-e0c514b077bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6545b6d484-99zm2" podUID="5c0d7228-c271-42fa-829b-e0c514b077bb" Apr 17 23:52:03.112693 kubelet[2665]: I0417 23:52:03.112312 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:03.114932 kubelet[2665]: I0417 23:52:03.114813 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:03.118484 kubelet[2665]: I0417 23:52:03.118228 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:03.120967 kubelet[2665]: I0417 23:52:03.120810 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:03.125113 kubelet[2665]: I0417 23:52:03.124520 2665 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:03.199296 containerd[1503]: time="2026-04-17T23:52:03.198791496Z" level=info msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\"" Apr 17 23:52:03.209042 kubelet[2665]: I0417 23:52:03.208171 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-vpmjp" podStartSLOduration=2.243238873 podStartE2EDuration="26.20745299s" podCreationTimestamp="2026-04-17 23:51:37 +0000 UTC" firstStartedPulling="2026-04-17 23:51:38.165334035 +0000 UTC m=+24.662420466" lastFinishedPulling="2026-04-17 23:52:02.129548145 +0000 UTC m=+48.626634583" observedRunningTime="2026-04-17 23:52:03.202318251 +0000 UTC m=+49.699404701" watchObservedRunningTime="2026-04-17 23:52:03.20745299 +0000 UTC m=+49.704539436" Apr 17 23:52:03.209372 containerd[1503]: time="2026-04-17T23:52:03.208421355Z" level=info msg="Ensure that sandbox daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98 in task-service has been cleanup successfully" Apr 17 23:52:03.211797 containerd[1503]: time="2026-04-17T23:52:03.210761467Z" level=info msg="StopPodSandbox for \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\"" Apr 17 23:52:03.212941 containerd[1503]: time="2026-04-17T23:52:03.212906404Z" level=info msg="StopPodSandbox for \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\"" Apr 17 23:52:03.213990 containerd[1503]: time="2026-04-17T23:52:03.213929539Z" level=info msg="Ensure that sandbox 323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68 in task-service has been cleanup successfully" Apr 17 23:52:03.215894 containerd[1503]: time="2026-04-17T23:52:03.215308773Z" level=info msg="Ensure that sandbox 12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621 in task-service has been cleanup successfully" Apr 17 23:52:03.223167 containerd[1503]: 
time="2026-04-17T23:52:03.201382403Z" level=info msg="StopPodSandbox for \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\"" Apr 17 23:52:03.223167 containerd[1503]: time="2026-04-17T23:52:03.222571390Z" level=info msg="Ensure that sandbox de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034 in task-service has been cleanup successfully" Apr 17 23:52:03.228943 containerd[1503]: time="2026-04-17T23:52:03.201442003Z" level=info msg="StopPodSandbox for \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\"" Apr 17 23:52:03.237108 containerd[1503]: time="2026-04-17T23:52:03.234810923Z" level=info msg="Ensure that sandbox ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd in task-service has been cleanup successfully" Apr 17 23:52:03.282237 kubelet[2665]: I0417 23:52:03.282180 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:03.293695 containerd[1503]: time="2026-04-17T23:52:03.293627383Z" level=info msg="StopPodSandbox for \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\"" Apr 17 23:52:03.293941 containerd[1503]: time="2026-04-17T23:52:03.293905406Z" level=info msg="Ensure that sandbox 87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23 in task-service has been cleanup successfully" Apr 17 23:52:03.341034 kubelet[2665]: I0417 23:52:03.338396 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:03.347092 containerd[1503]: time="2026-04-17T23:52:03.346607824Z" level=info msg="StopPodSandbox for \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\"" Apr 17 23:52:03.351690 kubelet[2665]: I0417 23:52:03.351653 2665 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:03.359607 containerd[1503]: time="2026-04-17T23:52:03.354288953Z" level=info msg="StopPodSandbox for \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\"" Apr 17 23:52:03.365891 containerd[1503]: time="2026-04-17T23:52:03.365747003Z" level=info msg="Ensure that sandbox 354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11 in task-service has been cleanup successfully" Apr 17 23:52:03.387047 containerd[1503]: time="2026-04-17T23:52:03.384957210Z" level=info msg="Ensure that sandbox 6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb in task-service has been cleanup successfully" Apr 17 23:52:03.494503 containerd[1503]: time="2026-04-17T23:52:03.494377098Z" level=error msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" failed" error="failed to destroy network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:03.504666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621-shm.mount: Deactivated successfully. Apr 17 23:52:03.505117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd-shm.mount: Deactivated successfully. Apr 17 23:52:03.505503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23-shm.mount: Deactivated successfully. Apr 17 23:52:03.505628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68-shm.mount: Deactivated successfully. 
Apr 17 23:52:03.515723 kubelet[2665]: E0417 23:52:03.495010 2665 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:03.523871 kubelet[2665]: E0417 23:52:03.523726 2665 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"} Apr 17 23:52:03.524493 kubelet[2665]: E0417 23:52:03.524083 2665 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f508cf3-d08e-41ad-9a03-91094ecb3e48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:52:03.524493 kubelet[2665]: E0417 23:52:03.524173 2665 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f508cf3-d08e-41ad-9a03-91094ecb3e48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-49g4d" podUID="3f508cf3-d08e-41ad-9a03-91094ecb3e48" Apr 17 23:52:04.234174 containerd[1503]: 
2026-04-17 23:52:03.820 [INFO][3825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:03.821 [INFO][3825] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" iface="eth0" netns="/var/run/netns/cni-283d5cc1-44f0-7019-19e4-e6bba9bb23cb" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:03.821 [INFO][3825] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" iface="eth0" netns="/var/run/netns/cni-283d5cc1-44f0-7019-19e4-e6bba9bb23cb" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:03.822 [INFO][3825] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" iface="eth0" netns="/var/run/netns/cni-283d5cc1-44f0-7019-19e4-e6bba9bb23cb" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:03.822 [INFO][3825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:03.822 [INFO][3825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.168 [INFO][3925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.169 [INFO][3925] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.170 [INFO][3925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.201 [WARNING][3925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.202 [INFO][3925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.213 [INFO][3925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.234174 containerd[1503]: 2026-04-17 23:52:04.221 [INFO][3825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:04.234174 containerd[1503]: time="2026-04-17T23:52:04.232296073Z" level=info msg="TearDown network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" successfully" Apr 17 23:52:04.234174 containerd[1503]: time="2026-04-17T23:52:04.232337569Z" level=info msg="StopPodSandbox for \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" returns successfully" Apr 17 23:52:04.237779 systemd[1]: run-netns-cni\x2d283d5cc1\x2d44f0\x2d7019\x2d19e4\x2de6bba9bb23cb.mount: Deactivated successfully. 
Apr 17 23:52:04.270498 containerd[1503]: time="2026-04-17T23:52:04.269016613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6lcgw,Uid:813387bb-b4f1-4fed-b573-d52cb41c7a62,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.797 [INFO][3860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.798 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" iface="eth0" netns="/var/run/netns/cni-52fee814-a93c-e762-0783-286f8392d0e1" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.799 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" iface="eth0" netns="/var/run/netns/cni-52fee814-a93c-e762-0783-286f8392d0e1" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.805 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" iface="eth0" netns="/var/run/netns/cni-52fee814-a93c-e762-0783-286f8392d0e1" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.805 [INFO][3860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:03.805 [INFO][3860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.169 [INFO][3923] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.169 [INFO][3923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.214 [INFO][3923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.245 [WARNING][3923] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.246 [INFO][3923] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.251 [INFO][3923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.274351 containerd[1503]: 2026-04-17 23:52:04.266 [INFO][3860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:04.276855 containerd[1503]: time="2026-04-17T23:52:04.276644422Z" level=info msg="TearDown network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" successfully" Apr 17 23:52:04.276855 containerd[1503]: time="2026-04-17T23:52:04.276688143Z" level=info msg="StopPodSandbox for \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" returns successfully" Apr 17 23:52:04.280988 systemd[1]: run-netns-cni\x2d52fee814\x2da93c\x2de762\x2d0783\x2d286f8392d0e1.mount: Deactivated successfully. 
Apr 17 23:52:04.288732 containerd[1503]: time="2026-04-17T23:52:04.288328926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b7569c47-gw5gc,Uid:95dcdccd-c202-419d-8c14-1309ffe304e6,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.729 [INFO][3815] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.730 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" iface="eth0" netns="/var/run/netns/cni-12b03d81-9607-dbe9-737d-421f72cc4320" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.731 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" iface="eth0" netns="/var/run/netns/cni-12b03d81-9607-dbe9-737d-421f72cc4320" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.738 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" iface="eth0" netns="/var/run/netns/cni-12b03d81-9607-dbe9-737d-421f72cc4320" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.738 [INFO][3815] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:03.738 [INFO][3815] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.170 [INFO][3905] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.171 [INFO][3905] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.252 [INFO][3905] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.281 [WARNING][3905] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.283 [INFO][3905] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.288 [INFO][3905] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.308021 containerd[1503]: 2026-04-17 23:52:04.294 [INFO][3815] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:04.313613 containerd[1503]: time="2026-04-17T23:52:04.313028562Z" level=info msg="TearDown network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" successfully" Apr 17 23:52:04.317192 containerd[1503]: time="2026-04-17T23:52:04.314427651Z" level=info msg="StopPodSandbox for \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" returns successfully" Apr 17 23:52:04.316412 systemd[1]: run-netns-cni\x2d12b03d81\x2d9607\x2ddbe9\x2d737d\x2d421f72cc4320.mount: Deactivated successfully. Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.731 [INFO][3866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.734 [INFO][3866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" iface="eth0" netns="/var/run/netns/cni-aa1adf9a-a85b-a6a2-351d-dc8ccce81082" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.734 [INFO][3866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" iface="eth0" netns="/var/run/netns/cni-aa1adf9a-a85b-a6a2-351d-dc8ccce81082" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.742 [INFO][3866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" iface="eth0" netns="/var/run/netns/cni-aa1adf9a-a85b-a6a2-351d-dc8ccce81082" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.742 [INFO][3866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:03.742 [INFO][3866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.173 [INFO][3907] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.175 [INFO][3907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.289 [INFO][3907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.325 [WARNING][3907] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.325 [INFO][3907] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.329 [INFO][3907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.364877 containerd[1503]: 2026-04-17 23:52:04.332 [INFO][3866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:04.367574 containerd[1503]: time="2026-04-17T23:52:04.367355424Z" level=info msg="TearDown network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" successfully" Apr 17 23:52:04.367574 containerd[1503]: time="2026-04-17T23:52:04.367404903Z" level=info msg="StopPodSandbox for \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" returns successfully" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.766 [INFO][3831] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.766 [INFO][3831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" iface="eth0" netns="/var/run/netns/cni-0d6802fe-e4cf-871f-2a9d-1dc49f721ab8" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.767 [INFO][3831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" iface="eth0" netns="/var/run/netns/cni-0d6802fe-e4cf-871f-2a9d-1dc49f721ab8" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.778 [INFO][3831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" iface="eth0" netns="/var/run/netns/cni-0d6802fe-e4cf-871f-2a9d-1dc49f721ab8" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.778 [INFO][3831] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:03.778 [INFO][3831] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.177 [INFO][3918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.178 [INFO][3918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.329 [INFO][3918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.349 [WARNING][3918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.349 [INFO][3918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.354 [INFO][3918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.367574 containerd[1503]: 2026-04-17 23:52:04.360 [INFO][3831] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:04.369603 containerd[1503]: time="2026-04-17T23:52:04.368180482Z" level=info msg="TearDown network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" successfully" Apr 17 23:52:04.369603 containerd[1503]: time="2026-04-17T23:52:04.368212188Z" level=info msg="StopPodSandbox for \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" returns successfully" Apr 17 23:52:04.377160 containerd[1503]: time="2026-04-17T23:52:04.376516596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wmjzl,Uid:b54c39e6-5cc3-458a-a4e5-3a986945460e,Namespace:kube-system,Attempt:1,}" Apr 17 23:52:04.386211 containerd[1503]: time="2026-04-17T23:52:04.386126447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-99zm2,Uid:5c0d7228-c271-42fa-829b-e0c514b077bb,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.403540 containerd[1503]: time="2026-04-17T23:52:04.403445691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcb7j,Uid:fa2cb226-7c96-408f-a84d-5eaad54bb710,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.883 [INFO][3883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.883 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" iface="eth0" netns="/var/run/netns/cni-b73ceddf-21e6-be88-ab36-5d2efa9da0a6" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.885 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" iface="eth0" netns="/var/run/netns/cni-b73ceddf-21e6-be88-ab36-5d2efa9da0a6" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.889 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" iface="eth0" netns="/var/run/netns/cni-b73ceddf-21e6-be88-ab36-5d2efa9da0a6" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.889 [INFO][3883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:03.890 [INFO][3883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.191 [INFO][3934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.193 [INFO][3934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.355 [INFO][3934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.382 [WARNING][3934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.382 [INFO][3934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.390 [INFO][3934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.451157 containerd[1503]: 2026-04-17 23:52:04.400 [INFO][3883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:04.455064 containerd[1503]: time="2026-04-17T23:52:04.454980927Z" level=info msg="TearDown network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" successfully" Apr 17 23:52:04.455320 containerd[1503]: time="2026-04-17T23:52:04.455289496Z" level=info msg="StopPodSandbox for \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" returns successfully" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.887 [INFO][3838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.887 [INFO][3838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" iface="eth0" netns="/var/run/netns/cni-57a89fd5-ce7d-df39-fe03-95d61b90bd99" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.888 [INFO][3838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" iface="eth0" netns="/var/run/netns/cni-57a89fd5-ce7d-df39-fe03-95d61b90bd99" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.895 [INFO][3838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" iface="eth0" netns="/var/run/netns/cni-57a89fd5-ce7d-df39-fe03-95d61b90bd99" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.895 [INFO][3838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:03.895 [INFO][3838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.197 [INFO][3938] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.198 [INFO][3938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.394 [INFO][3938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.439 [WARNING][3938] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.439 [INFO][3938] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.443 [INFO][3938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.464324 containerd[1503]: 2026-04-17 23:52:04.445 [INFO][3838] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:04.468365 containerd[1503]: time="2026-04-17T23:52:04.466865789Z" level=info msg="TearDown network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" successfully" Apr 17 23:52:04.468365 containerd[1503]: time="2026-04-17T23:52:04.466932607Z" level=info msg="StopPodSandbox for \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" returns successfully" Apr 17 23:52:04.470683 containerd[1503]: time="2026-04-17T23:52:04.470534118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdcf6b9f6-dqfx4,Uid:1254b0aa-66e4-4dd1-bb91-02f716eb390c,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.472153 containerd[1503]: time="2026-04-17T23:52:04.471364626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-khkvj,Uid:ce9100b6-6848-4690-a4ca-697384ee1ee9,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:04.507922 systemd[1]: run-netns-cni\x2d57a89fd5\x2dce7d\x2ddf39\x2dfe03\x2d95d61b90bd99.mount: Deactivated successfully. Apr 17 23:52:04.509695 systemd[1]: run-netns-cni\x2db73ceddf\x2d21e6\x2dbe88\x2dab36\x2d5d2efa9da0a6.mount: Deactivated successfully. Apr 17 23:52:04.509824 systemd[1]: run-netns-cni\x2d0d6802fe\x2de4cf\x2d871f\x2d2a9d\x2d1dc49f721ab8.mount: Deactivated successfully. Apr 17 23:52:04.509937 systemd[1]: run-netns-cni\x2daa1adf9a\x2da85b\x2da6a2\x2d351d\x2ddc8ccce81082.mount: Deactivated successfully. 
Apr 17 23:52:04.853159 kubelet[2665]: I0417 23:52:04.852786 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:52:05.200263 systemd-networkd[1433]: calibd1ec549dc0: Link UP Apr 17 23:52:05.202652 systemd-networkd[1433]: calibd1ec549dc0: Gained carrier Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:04.618 [ERROR][3967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:04.707 [INFO][3967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0 goldmane-9f7667bb8- calico-system 813387bb-b4f1-4fed-b573-d52cb41c7a62 933 0 2026-04-17 23:51:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com goldmane-9f7667bb8-6lcgw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibd1ec549dc0 [] [] }} ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:04.707 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:04.968 [INFO][4075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" HandleID="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.029 [INFO][4075] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" HandleID="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"goldmane-9f7667bb8-6lcgw", "timestamp":"2026-04-17 23:52:04.96805916 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000496000)} Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.029 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.029 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.029 [INFO][4075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.041 [INFO][4075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.053 [INFO][4075] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.064 [INFO][4075] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.070 [INFO][4075] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.079 [INFO][4075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.079 [INFO][4075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.090 [INFO][4075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15 Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.098 [INFO][4075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.120 [INFO][4075] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.1/26] block=192.168.91.0/26 handle="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.121 [INFO][4075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.1/26] handle="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.121 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:05.307368 containerd[1503]: 2026-04-17 23:52:05.121 [INFO][4075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.1/26] IPv6=[] ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" HandleID="k8s-pod-network.76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.133 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"813387bb-b4f1-4fed-b573-d52cb41c7a62", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-9f7667bb8-6lcgw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibd1ec549dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.133 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.1/32] ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.133 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd1ec549dc0 ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.204 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.206 [INFO][3967] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"813387bb-b4f1-4fed-b573-d52cb41c7a62", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15", Pod:"goldmane-9f7667bb8-6lcgw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibd1ec549dc0", MAC:"c2:94:58:f2:5a:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.311399 containerd[1503]: 2026-04-17 23:52:05.295 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15" Namespace="calico-system" Pod="goldmane-9f7667bb8-6lcgw" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:05.334248 systemd-networkd[1433]: cali6829befad52: Link UP Apr 17 23:52:05.334642 systemd-networkd[1433]: cali6829befad52: Gained carrier Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:04.705 [ERROR][3977] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:04.772 [INFO][3977] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0 calico-kube-controllers-9b7569c47- calico-system 95dcdccd-c202-419d-8c14-1309ffe304e6 932 0 2026-04-17 23:51:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9b7569c47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com calico-kube-controllers-9b7569c47-gw5gc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6829befad52 [] [] }} ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:04.773 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" 
Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.069 [INFO][4081] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" HandleID="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.113 [INFO][4081] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" HandleID="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"calico-kube-controllers-9b7569c47-gw5gc", "timestamp":"2026-04-17 23:52:05.069435154 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022a840)} Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.113 [INFO][4081] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.124 [INFO][4081] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.124 [INFO][4081] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.143 [INFO][4081] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.199 [INFO][4081] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.221 [INFO][4081] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.234 [INFO][4081] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.256 [INFO][4081] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.257 [INFO][4081] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.281 [INFO][4081] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141 Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.296 [INFO][4081] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.311 [INFO][4081] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.2/26] block=192.168.91.0/26 handle="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.311 [INFO][4081] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.2/26] handle="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.311 [INFO][4081] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:05.376177 containerd[1503]: 2026-04-17 23:52:05.311 [INFO][4081] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.2/26] IPv6=[] ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" HandleID="k8s-pod-network.530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.323 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0", GenerateName:"calico-kube-controllers-9b7569c47-", Namespace:"calico-system", SelfLink:"", UID:"95dcdccd-c202-419d-8c14-1309ffe304e6", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b7569c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-9b7569c47-gw5gc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6829befad52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.323 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.2/32] ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.323 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6829befad52 ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.335 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.336 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0", GenerateName:"calico-kube-controllers-9b7569c47-", Namespace:"calico-system", SelfLink:"", UID:"95dcdccd-c202-419d-8c14-1309ffe304e6", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b7569c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141", Pod:"calico-kube-controllers-9b7569c47-gw5gc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6829befad52", MAC:"4e:1d:84:9a:d1:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.377673 containerd[1503]: 2026-04-17 23:52:05.363 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141" Namespace="calico-system" Pod="calico-kube-controllers-9b7569c47-gw5gc" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:05.468740 systemd-networkd[1433]: calif6cb244a7c4: Link UP Apr 17 23:52:05.474751 systemd-networkd[1433]: calif6cb244a7c4: Gained carrier Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:04.877 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:04.914 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0 csi-node-driver- calico-system fa2cb226-7c96-408f-a84d-5eaad54bb710 930 0 2026-04-17 23:51:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com csi-node-driver-wcb7j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6cb244a7c4 [] [] }} 
ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:04.915 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.102 [INFO][4101] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" HandleID="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.155 [INFO][4101] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" HandleID="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123b00), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"csi-node-driver-wcb7j", "timestamp":"2026-04-17 23:52:05.10277681 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00051d600)} Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.155 [INFO][4101] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.312 [INFO][4101] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.312 [INFO][4101] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.325 [INFO][4101] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.353 [INFO][4101] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.378 [INFO][4101] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.387 [INFO][4101] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.396 [INFO][4101] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.397 [INFO][4101] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.405 [INFO][4101] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.419 [INFO][4101] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 
handle="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.444 [INFO][4101] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.3/26] block=192.168.91.0/26 handle="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.444 [INFO][4101] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.3/26] handle="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.444 [INFO][4101] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:05.549363 containerd[1503]: 2026-04-17 23:52:05.444 [INFO][4101] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.3/26] IPv6=[] ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" HandleID="k8s-pod-network.60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.450 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa2cb226-7c96-408f-a84d-5eaad54bb710", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-wcb7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6cb244a7c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.455 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.3/32] ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.455 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6cb244a7c4 ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.473 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.480 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa2cb226-7c96-408f-a84d-5eaad54bb710", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f", Pod:"csi-node-driver-wcb7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6cb244a7c4", MAC:"e2:fd:98:4c:48:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.552608 containerd[1503]: 2026-04-17 23:52:05.533 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f" Namespace="calico-system" Pod="csi-node-driver-wcb7j" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:05.583719 containerd[1503]: time="2026-04-17T23:52:05.577816500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:05.583719 containerd[1503]: time="2026-04-17T23:52:05.577953301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:05.583719 containerd[1503]: time="2026-04-17T23:52:05.578005228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.584338 containerd[1503]: time="2026-04-17T23:52:05.584219220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.617918 containerd[1503]: time="2026-04-17T23:52:05.616813039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:05.617918 containerd[1503]: time="2026-04-17T23:52:05.616934651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:05.617918 containerd[1503]: time="2026-04-17T23:52:05.616960416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.617918 containerd[1503]: time="2026-04-17T23:52:05.617114899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.632321 containerd[1503]: time="2026-04-17T23:52:05.631170315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:05.632321 containerd[1503]: time="2026-04-17T23:52:05.631285684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:05.632321 containerd[1503]: time="2026-04-17T23:52:05.631310105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.639445 containerd[1503]: time="2026-04-17T23:52:05.631445905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.696853 systemd-networkd[1433]: cali95df4c8eff8: Link UP Apr 17 23:52:05.699478 systemd-networkd[1433]: cali95df4c8eff8: Gained carrier Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:04.811 [ERROR][4040] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:04.861 [INFO][4040] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0 calico-apiserver-6545b6d484- calico-system ce9100b6-6848-4690-a4ca-697384ee1ee9 935 0 2026-04-17 23:51:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6545b6d484 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com calico-apiserver-6545b6d484-khkvj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali95df4c8eff8 [] [] }} ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:04.861 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.228 [INFO][4096] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" HandleID="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.265 [INFO][4096] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" HandleID="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003746c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"calico-apiserver-6545b6d484-khkvj", "timestamp":"2026-04-17 23:52:05.228800699 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005602c0)} Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.265 [INFO][4096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.447 [INFO][4096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.448 [INFO][4096] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.452 [INFO][4096] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.490 [INFO][4096] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.533 [INFO][4096] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.556 [INFO][4096] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.566 [INFO][4096] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.566 [INFO][4096] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.569 [INFO][4096] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60 Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.587 [INFO][4096] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.607 [INFO][4096] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.4/26] block=192.168.91.0/26 handle="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.607 [INFO][4096] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.4/26] handle="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.610 [INFO][4096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:05.785179 containerd[1503]: 2026-04-17 23:52:05.610 [INFO][4096] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.4/26] IPv6=[] ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" HandleID="k8s-pod-network.8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.665 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"ce9100b6-6848-4690-a4ca-697384ee1ee9", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6545b6d484-khkvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali95df4c8eff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.665 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.4/32] ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.665 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95df4c8eff8 ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.711 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" 
Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.724 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"ce9100b6-6848-4690-a4ca-697384ee1ee9", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60", Pod:"calico-apiserver-6545b6d484-khkvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali95df4c8eff8", 
MAC:"ba:73:c8:7e:70:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.790693 containerd[1503]: 2026-04-17 23:52:05.755 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-khkvj" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:05.801478 systemd[1]: Started cri-containerd-530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141.scope - libcontainer container 530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141. Apr 17 23:52:05.806949 systemd[1]: Started cri-containerd-60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f.scope - libcontainer container 60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f. Apr 17 23:52:05.843052 systemd-networkd[1433]: cali8df6116539b: Link UP Apr 17 23:52:05.855095 systemd-networkd[1433]: cali8df6116539b: Gained carrier Apr 17 23:52:05.859399 systemd[1]: Started cri-containerd-76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15.scope - libcontainer container 76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15. 
Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:04.862 [ERROR][3997] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.001 [INFO][3997] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0 coredns-7d764666f9- kube-system b54c39e6-5cc3-458a-a4e5-3a986945460e 931 0 2026-04-17 23:51:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com coredns-7d764666f9-wmjzl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8df6116539b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.001 [INFO][3997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.303 [INFO][4113] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" HandleID="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" 
Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.330 [INFO][4113] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" HandleID="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"coredns-7d764666f9-wmjzl", "timestamp":"2026-04-17 23:52:05.303237173 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00056adc0)} Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.330 [INFO][4113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.613 [INFO][4113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.613 [INFO][4113] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.627 [INFO][4113] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.656 [INFO][4113] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.702 [INFO][4113] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.708 [INFO][4113] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.726 [INFO][4113] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.727 [INFO][4113] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.733 [INFO][4113] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.749 [INFO][4113] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.790 [INFO][4113] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.5/26] block=192.168.91.0/26 handle="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.790 [INFO][4113] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.5/26] handle="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.791 [INFO][4113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:05.899167 containerd[1503]: 2026-04-17 23:52:05.791 [INFO][4113] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.5/26] IPv6=[] ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" HandleID="k8s-pod-network.6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.902834 containerd[1503]: 2026-04-17 23:52:05.811 [INFO][3997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b54c39e6-5cc3-458a-a4e5-3a986945460e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-wmjzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8df6116539b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.902834 containerd[1503]: 2026-04-17 23:52:05.813 [INFO][3997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.5/32] ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.902834 containerd[1503]: 2026-04-17 23:52:05.814 [INFO][3997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8df6116539b 
ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.902834 containerd[1503]: 2026-04-17 23:52:05.863 [INFO][3997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.902834 containerd[1503]: 2026-04-17 23:52:05.864 [INFO][3997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b54c39e6-5cc3-458a-a4e5-3a986945460e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", 
ContainerID:"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae", Pod:"coredns-7d764666f9-wmjzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8df6116539b", MAC:"c2:da:67:40:94:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:05.903988 containerd[1503]: 2026-04-17 23:52:05.890 [INFO][3997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae" Namespace="kube-system" Pod="coredns-7d764666f9-wmjzl" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:05.936363 containerd[1503]: time="2026-04-17T23:52:05.933980003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:05.936363 containerd[1503]: time="2026-04-17T23:52:05.935179154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:05.936363 containerd[1503]: time="2026-04-17T23:52:05.935251722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.936363 containerd[1503]: time="2026-04-17T23:52:05.935425092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:05.967696 systemd-networkd[1433]: cali023644f3bfc: Link UP Apr 17 23:52:05.969587 systemd-networkd[1433]: cali023644f3bfc: Gained carrier Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:04.923 [ERROR][4019] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.057 [INFO][4019] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0 calico-apiserver-6545b6d484- calico-system 5c0d7228-c271-42fa-829b-e0c514b077bb 929 0 2026-04-17 23:51:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6545b6d484 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com calico-apiserver-6545b6d484-99zm2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali023644f3bfc [] [] }} ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.057 
[INFO][4019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.356 [INFO][4125] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" HandleID="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.380 [INFO][4125] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" HandleID="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e7240), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"calico-apiserver-6545b6d484-99zm2", "timestamp":"2026-04-17 23:52:05.356416297 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.380 [INFO][4125] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.794 [INFO][4125] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.794 [INFO][4125] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.810 [INFO][4125] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.854 [INFO][4125] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.875 [INFO][4125] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.879 [INFO][4125] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.887 [INFO][4125] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.887 [INFO][4125] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.896 [INFO][4125] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826 Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.917 [INFO][4125] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.937 [INFO][4125] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.6/26] block=192.168.91.0/26 handle="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.938 [INFO][4125] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.6/26] handle="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.938 [INFO][4125] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:06.038954 containerd[1503]: 2026-04-17 23:52:05.938 [INFO][4125] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.6/26] IPv6=[] ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" HandleID="k8s-pod-network.68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:05.951 [INFO][4019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"5c0d7228-c271-42fa-829b-e0c514b077bb", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6545b6d484-99zm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali023644f3bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:05.951 [INFO][4019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.6/32] ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:05.951 [INFO][4019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali023644f3bfc ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:05.976 [INFO][4019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" 
Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:05.984 [INFO][4019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"5c0d7228-c271-42fa-829b-e0c514b077bb", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826", Pod:"calico-apiserver-6545b6d484-99zm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali023644f3bfc", 
MAC:"ce:50:6e:6f:14:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:06.041712 containerd[1503]: 2026-04-17 23:52:06.012 [INFO][4019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826" Namespace="calico-system" Pod="calico-apiserver-6545b6d484-99zm2" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:06.060424 systemd[1]: Started cri-containerd-8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60.scope - libcontainer container 8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60. Apr 17 23:52:06.085603 containerd[1503]: time="2026-04-17T23:52:06.084938922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:06.085603 containerd[1503]: time="2026-04-17T23:52:06.085030379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:06.085603 containerd[1503]: time="2026-04-17T23:52:06.085049477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.085603 containerd[1503]: time="2026-04-17T23:52:06.085197920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.130255 containerd[1503]: time="2026-04-17T23:52:06.129700423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:06.130255 containerd[1503]: time="2026-04-17T23:52:06.130048879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:06.130255 containerd[1503]: time="2026-04-17T23:52:06.130170322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.139381 systemd[1]: Started cri-containerd-6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae.scope - libcontainer container 6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae. Apr 17 23:52:06.148051 systemd-networkd[1433]: cali6a668c8560e: Link UP Apr 17 23:52:06.151308 systemd-networkd[1433]: cali6a668c8560e: Gained carrier Apr 17 23:52:06.157171 containerd[1503]: time="2026-04-17T23:52:06.153391343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:04.871 [ERROR][4020] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:04.949 [INFO][4020] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0 whisker-6fdcf6b9f6- calico-system 1254b0aa-66e4-4dd1-bb91-02f716eb390c 934 0 2026-04-17 23:51:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fdcf6b9f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com whisker-6fdcf6b9f6-dqfx4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6a668c8560e [] [] }} ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" 
WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:04.949 [INFO][4020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.384 [INFO][4111] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.413 [INFO][4111] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060b960), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"whisker-6fdcf6b9f6-dqfx4", "timestamp":"2026-04-17 23:52:05.383967323 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039c000)} Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.414 [INFO][4111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.946 [INFO][4111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.946 [INFO][4111] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.961 [INFO][4111] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:05.984 [INFO][4111] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.010 [INFO][4111] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.038 [INFO][4111] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.055 [INFO][4111] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.057 [INFO][4111] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.072 [INFO][4111] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.091 [INFO][4111] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 
handle="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.124 [INFO][4111] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.7/26] block=192.168.91.0/26 handle="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.125 [INFO][4111] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.7/26] handle="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.125 [INFO][4111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:06.201013 containerd[1503]: 2026-04-17 23:52:06.125 [INFO][4111] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.7/26] IPv6=[] ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.138 [INFO][4020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0", GenerateName:"whisker-6fdcf6b9f6-", Namespace:"calico-system", SelfLink:"", UID:"1254b0aa-66e4-4dd1-bb91-02f716eb390c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 
44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdcf6b9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6fdcf6b9f6-dqfx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6a668c8560e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.144 [INFO][4020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.7/32] ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.144 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a668c8560e ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.150 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" 
Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.157 [INFO][4020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0", GenerateName:"whisker-6fdcf6b9f6-", Namespace:"calico-system", SelfLink:"", UID:"1254b0aa-66e4-4dd1-bb91-02f716eb390c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdcf6b9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec", Pod:"whisker-6fdcf6b9f6-dqfx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6a668c8560e", MAC:"f6:21:b1:a1:21:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:06.202035 containerd[1503]: 2026-04-17 23:52:06.195 [INFO][4020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Namespace="calico-system" Pod="whisker-6fdcf6b9f6-dqfx4" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:06.225392 systemd[1]: Started cri-containerd-68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826.scope - libcontainer container 68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826. Apr 17 23:52:06.287937 containerd[1503]: time="2026-04-17T23:52:06.287742529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:06.287937 containerd[1503]: time="2026-04-17T23:52:06.287852706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:06.287937 containerd[1503]: time="2026-04-17T23:52:06.287876857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.290049 containerd[1503]: time="2026-04-17T23:52:06.288674402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:06.342819 systemd[1]: Started cri-containerd-98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec.scope - libcontainer container 98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec. 
Apr 17 23:52:06.383410 containerd[1503]: time="2026-04-17T23:52:06.382229172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wmjzl,Uid:b54c39e6-5cc3-458a-a4e5-3a986945460e,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae\"" Apr 17 23:52:06.394560 containerd[1503]: time="2026-04-17T23:52:06.394467524Z" level=info msg="CreateContainer within sandbox \"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:52:06.431854 containerd[1503]: time="2026-04-17T23:52:06.431626956Z" level=info msg="CreateContainer within sandbox \"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6731e720909a96adb6a67d9ddd4e5031dde255da6bc52d93afe7df19de497fe\"" Apr 17 23:52:06.435128 containerd[1503]: time="2026-04-17T23:52:06.434525836Z" level=info msg="StartContainer for \"a6731e720909a96adb6a67d9ddd4e5031dde255da6bc52d93afe7df19de497fe\"" Apr 17 23:52:06.517499 systemd[1]: run-containerd-runc-k8s.io-530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141-runc.SMapjB.mount: Deactivated successfully. 
Apr 17 23:52:06.550344 containerd[1503]: time="2026-04-17T23:52:06.550204632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcb7j,Uid:fa2cb226-7c96-408f-a84d-5eaad54bb710,Namespace:calico-system,Attempt:1,} returns sandbox id \"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f\"" Apr 17 23:52:06.562487 containerd[1503]: time="2026-04-17T23:52:06.562385575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:52:06.600249 containerd[1503]: time="2026-04-17T23:52:06.599317008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b7569c47-gw5gc,Uid:95dcdccd-c202-419d-8c14-1309ffe304e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141\"" Apr 17 23:52:06.644063 containerd[1503]: time="2026-04-17T23:52:06.642641355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6lcgw,Uid:813387bb-b4f1-4fed-b573-d52cb41c7a62,Namespace:calico-system,Attempt:1,} returns sandbox id \"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15\"" Apr 17 23:52:06.694157 containerd[1503]: time="2026-04-17T23:52:06.692166214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-99zm2,Uid:5c0d7228-c271-42fa-829b-e0c514b077bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826\"" Apr 17 23:52:06.702760 systemd[1]: Started cri-containerd-a6731e720909a96adb6a67d9ddd4e5031dde255da6bc52d93afe7df19de497fe.scope - libcontainer container a6731e720909a96adb6a67d9ddd4e5031dde255da6bc52d93afe7df19de497fe. 
Apr 17 23:52:06.776297 containerd[1503]: time="2026-04-17T23:52:06.776243417Z" level=info msg="StartContainer for \"a6731e720909a96adb6a67d9ddd4e5031dde255da6bc52d93afe7df19de497fe\" returns successfully" Apr 17 23:52:06.844195 systemd-networkd[1433]: calif6cb244a7c4: Gained IPv6LL Apr 17 23:52:06.907396 systemd-networkd[1433]: cali8df6116539b: Gained IPv6LL Apr 17 23:52:06.940791 containerd[1503]: time="2026-04-17T23:52:06.940648424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6545b6d484-khkvj,Uid:ce9100b6-6848-4690-a4ca-697384ee1ee9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60\"" Apr 17 23:52:06.964911 containerd[1503]: time="2026-04-17T23:52:06.963457803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdcf6b9f6-dqfx4,Uid:1254b0aa-66e4-4dd1-bb91-02f716eb390c,Namespace:calico-system,Attempt:1,} returns sandbox id \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\"" Apr 17 23:52:07.101596 systemd-networkd[1433]: calibd1ec549dc0: Gained IPv6LL Apr 17 23:52:07.163432 systemd-networkd[1433]: cali6829befad52: Gained IPv6LL Apr 17 23:52:07.292256 systemd-networkd[1433]: cali95df4c8eff8: Gained IPv6LL Apr 17 23:52:07.492280 kubelet[2665]: I0417 23:52:07.490008 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wmjzl" podStartSLOduration=48.489978981 podStartE2EDuration="48.489978981s" podCreationTimestamp="2026-04-17 23:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:52:07.465489975 +0000 UTC m=+53.962576415" watchObservedRunningTime="2026-04-17 23:52:07.489978981 +0000 UTC m=+53.987065425" Apr 17 23:52:07.548447 systemd-networkd[1433]: cali023644f3bfc: Gained IPv6LL Apr 17 23:52:07.611382 systemd-networkd[1433]: cali6a668c8560e: Gained IPv6LL Apr 17 
23:52:07.686664 kernel: calico-node[4550]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:52:08.887377 systemd-networkd[1433]: vxlan.calico: Link UP Apr 17 23:52:08.887427 systemd-networkd[1433]: vxlan.calico: Gained carrier Apr 17 23:52:09.186991 containerd[1503]: time="2026-04-17T23:52:09.179416242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:52:09.232559 containerd[1503]: time="2026-04-17T23:52:09.232491266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.670020473s" Apr 17 23:52:09.232559 containerd[1503]: time="2026-04-17T23:52:09.232563379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:52:09.249149 containerd[1503]: time="2026-04-17T23:52:09.244497746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:09.249149 containerd[1503]: time="2026-04-17T23:52:09.245865977Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:09.270356 containerd[1503]: time="2026-04-17T23:52:09.270284397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:09.285245 containerd[1503]: time="2026-04-17T23:52:09.285175968Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:52:09.355456 containerd[1503]: time="2026-04-17T23:52:09.355392943Z" level=info msg="CreateContainer within sandbox \"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:52:09.456254 containerd[1503]: time="2026-04-17T23:52:09.454941507Z" level=info msg="CreateContainer within sandbox \"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"43755ae5a6edec3375df470a1c08eb9c3117f08e926a8ea4c910e1f26c9ab79a\"" Apr 17 23:52:09.457252 containerd[1503]: time="2026-04-17T23:52:09.457216972Z" level=info msg="StartContainer for \"43755ae5a6edec3375df470a1c08eb9c3117f08e926a8ea4c910e1f26c9ab79a\"" Apr 17 23:52:09.698034 systemd[1]: Started cri-containerd-43755ae5a6edec3375df470a1c08eb9c3117f08e926a8ea4c910e1f26c9ab79a.scope - libcontainer container 43755ae5a6edec3375df470a1c08eb9c3117f08e926a8ea4c910e1f26c9ab79a. 
Apr 17 23:52:09.822947 containerd[1503]: time="2026-04-17T23:52:09.822348826Z" level=info msg="StartContainer for \"43755ae5a6edec3375df470a1c08eb9c3117f08e926a8ea4c910e1f26c9ab79a\" returns successfully" Apr 17 23:52:10.687453 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Apr 17 23:52:12.966381 containerd[1503]: time="2026-04-17T23:52:12.966257507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:12.967715 containerd[1503]: time="2026-04-17T23:52:12.967631410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:52:12.969084 containerd[1503]: time="2026-04-17T23:52:12.968983287Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:12.972808 containerd[1503]: time="2026-04-17T23:52:12.972739233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:12.974300 containerd[1503]: time="2026-04-17T23:52:12.974068875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.688810234s" Apr 17 23:52:12.974300 containerd[1503]: time="2026-04-17T23:52:12.974120772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 
17 23:52:12.980754 containerd[1503]: time="2026-04-17T23:52:12.980707057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:52:13.009689 containerd[1503]: time="2026-04-17T23:52:13.009388272Z" level=info msg="CreateContainer within sandbox \"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:52:13.027817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207879304.mount: Deactivated successfully. Apr 17 23:52:13.034037 containerd[1503]: time="2026-04-17T23:52:13.031414859Z" level=info msg="CreateContainer within sandbox \"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fdc5f2607a03ecbcd2b4908a5eddc73ee8bdacfe1ab8428d77e314b6bec4fab5\"" Apr 17 23:52:13.034037 containerd[1503]: time="2026-04-17T23:52:13.033663303Z" level=info msg="StartContainer for \"fdc5f2607a03ecbcd2b4908a5eddc73ee8bdacfe1ab8428d77e314b6bec4fab5\"" Apr 17 23:52:13.077411 systemd[1]: Started cri-containerd-fdc5f2607a03ecbcd2b4908a5eddc73ee8bdacfe1ab8428d77e314b6bec4fab5.scope - libcontainer container fdc5f2607a03ecbcd2b4908a5eddc73ee8bdacfe1ab8428d77e314b6bec4fab5. 
Apr 17 23:52:13.149131 containerd[1503]: time="2026-04-17T23:52:13.149055566Z" level=info msg="StartContainer for \"fdc5f2607a03ecbcd2b4908a5eddc73ee8bdacfe1ab8428d77e314b6bec4fab5\" returns successfully" Apr 17 23:52:13.777992 containerd[1503]: time="2026-04-17T23:52:13.777692988Z" level=info msg="StopPodSandbox for \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\"" Apr 17 23:52:13.817834 kubelet[2665]: I0417 23:52:13.817337 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9b7569c47-gw5gc" podStartSLOduration=30.450860485 podStartE2EDuration="36.817302208s" podCreationTimestamp="2026-04-17 23:51:37 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.613074054 +0000 UTC m=+53.110160486" lastFinishedPulling="2026-04-17 23:52:12.979515772 +0000 UTC m=+59.476602209" observedRunningTime="2026-04-17 23:52:13.815212294 +0000 UTC m=+60.312298745" watchObservedRunningTime="2026-04-17 23:52:13.817302208 +0000 UTC m=+60.314388659" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.132 [WARNING][4879] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0", GenerateName:"calico-kube-controllers-9b7569c47-", Namespace:"calico-system", SelfLink:"", UID:"95dcdccd-c202-419d-8c14-1309ffe304e6", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b7569c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141", Pod:"calico-kube-controllers-9b7569c47-gw5gc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6829befad52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.135 [INFO][4879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.135 [INFO][4879] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" iface="eth0" netns="" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.135 [INFO][4879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.135 [INFO][4879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.336 [INFO][4896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.340 [INFO][4896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.341 [INFO][4896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.355 [WARNING][4896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.355 [INFO][4896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.358 [INFO][4896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:14.366576 containerd[1503]: 2026-04-17 23:52:14.362 [INFO][4879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.372503 containerd[1503]: time="2026-04-17T23:52:14.367086266Z" level=info msg="TearDown network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" successfully" Apr 17 23:52:14.372503 containerd[1503]: time="2026-04-17T23:52:14.367637855Z" level=info msg="StopPodSandbox for \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" returns successfully" Apr 17 23:52:14.414931 containerd[1503]: time="2026-04-17T23:52:14.414568162Z" level=info msg="RemovePodSandbox for \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\"" Apr 17 23:52:14.419828 containerd[1503]: time="2026-04-17T23:52:14.419758595Z" level=info msg="Forcibly stopping sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\"" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.505 [WARNING][4911] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0", GenerateName:"calico-kube-controllers-9b7569c47-", Namespace:"calico-system", SelfLink:"", UID:"95dcdccd-c202-419d-8c14-1309ffe304e6", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b7569c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"530aa391ca0931dc45ca1ba87985dc1c8b91de0a53c1b1db383f6d5bcaeb1141", Pod:"calico-kube-controllers-9b7569c47-gw5gc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6829befad52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.505 [INFO][4911] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.505 [INFO][4911] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" iface="eth0" netns="" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.505 [INFO][4911] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.505 [INFO][4911] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.556 [INFO][4918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.556 [INFO][4918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.556 [INFO][4918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.569 [WARNING][4918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.569 [INFO][4918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" HandleID="k8s-pod-network.87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--kube--controllers--9b7569c47--gw5gc-eth0" Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.573 [INFO][4918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:14.582490 containerd[1503]: 2026-04-17 23:52:14.577 [INFO][4911] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23" Apr 17 23:52:14.583724 containerd[1503]: time="2026-04-17T23:52:14.582618267Z" level=info msg="TearDown network for sandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" successfully" Apr 17 23:52:14.601939 containerd[1503]: time="2026-04-17T23:52:14.600962681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:14.609696 containerd[1503]: time="2026-04-17T23:52:14.608339792Z" level=info msg="RemovePodSandbox \"87fdeb28539ba4809a02dc55cd092225b26d0d59c74c8c2d5abe0350e6253b23\" returns successfully" Apr 17 23:52:14.614842 containerd[1503]: time="2026-04-17T23:52:14.612770058Z" level=info msg="StopPodSandbox for \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\"" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.722 [WARNING][4933] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"ce9100b6-6848-4690-a4ca-697384ee1ee9", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60", Pod:"calico-apiserver-6545b6d484-khkvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali95df4c8eff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.722 [INFO][4933] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.722 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" iface="eth0" netns="" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.722 [INFO][4933] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.722 [INFO][4933] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.857 [INFO][4944] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.858 [INFO][4944] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.858 [INFO][4944] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.884 [WARNING][4944] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.884 [INFO][4944] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.891 [INFO][4944] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:14.897117 containerd[1503]: 2026-04-17 23:52:14.893 [INFO][4933] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:14.898640 containerd[1503]: time="2026-04-17T23:52:14.898420777Z" level=info msg="TearDown network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" successfully" Apr 17 23:52:14.898640 containerd[1503]: time="2026-04-17T23:52:14.898515419Z" level=info msg="StopPodSandbox for \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" returns successfully" Apr 17 23:52:14.899225 containerd[1503]: time="2026-04-17T23:52:14.899082630Z" level=info msg="RemovePodSandbox for \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\"" Apr 17 23:52:14.900024 containerd[1503]: time="2026-04-17T23:52:14.899987183Z" level=info msg="Forcibly stopping sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\"" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:14.999 [WARNING][4958] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"ce9100b6-6848-4690-a4ca-697384ee1ee9", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60", Pod:"calico-apiserver-6545b6d484-khkvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali95df4c8eff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.001 [INFO][4958] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.001 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" iface="eth0" netns="" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.001 [INFO][4958] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.001 [INFO][4958] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.095 [INFO][4970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.096 [INFO][4970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.096 [INFO][4970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.116 [WARNING][4970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.116 [INFO][4970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" HandleID="k8s-pod-network.323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--khkvj-eth0" Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.122 [INFO][4970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:15.132464 containerd[1503]: 2026-04-17 23:52:15.126 [INFO][4958] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68" Apr 17 23:52:15.132464 containerd[1503]: time="2026-04-17T23:52:15.131620090Z" level=info msg="TearDown network for sandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" successfully" Apr 17 23:52:15.179070 containerd[1503]: time="2026-04-17T23:52:15.178854603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:15.179070 containerd[1503]: time="2026-04-17T23:52:15.179002898Z" level=info msg="RemovePodSandbox \"323c4e4d8f8777e42a266e36cc4db6ae9bb3c1a78fbbc9103dbce8d136810c68\" returns successfully" Apr 17 23:52:15.181674 containerd[1503]: time="2026-04-17T23:52:15.181223573Z" level=info msg="StopPodSandbox for \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\"" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.264 [WARNING][4992] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0", GenerateName:"whisker-6fdcf6b9f6-", Namespace:"calico-system", SelfLink:"", UID:"1254b0aa-66e4-4dd1-bb91-02f716eb390c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdcf6b9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec", Pod:"whisker-6fdcf6b9f6-dqfx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali6a668c8560e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.266 [INFO][4992] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.266 [INFO][4992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" iface="eth0" netns="" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.266 [INFO][4992] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.266 [INFO][4992] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.314 [INFO][5000] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.314 [INFO][5000] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.314 [INFO][5000] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.337 [WARNING][5000] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.337 [INFO][5000] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.342 [INFO][5000] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:15.366615 containerd[1503]: 2026-04-17 23:52:15.353 [INFO][4992] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.366615 containerd[1503]: time="2026-04-17T23:52:15.366695284Z" level=info msg="TearDown network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" successfully" Apr 17 23:52:15.366615 containerd[1503]: time="2026-04-17T23:52:15.366735562Z" level=info msg="StopPodSandbox for \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" returns successfully" Apr 17 23:52:15.368897 containerd[1503]: time="2026-04-17T23:52:15.367625900Z" level=info msg="RemovePodSandbox for \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\"" Apr 17 23:52:15.368897 containerd[1503]: time="2026-04-17T23:52:15.367674479Z" level=info msg="Forcibly stopping sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\"" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.480 [WARNING][5014] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0", GenerateName:"whisker-6fdcf6b9f6-", Namespace:"calico-system", SelfLink:"", UID:"1254b0aa-66e4-4dd1-bb91-02f716eb390c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdcf6b9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec", Pod:"whisker-6fdcf6b9f6-dqfx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6a668c8560e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.481 [INFO][5014] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.481 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" iface="eth0" netns="" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.482 [INFO][5014] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.482 [INFO][5014] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.565 [INFO][5021] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.565 [INFO][5021] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.565 [INFO][5021] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.583 [WARNING][5021] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.583 [INFO][5021] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" HandleID="k8s-pod-network.6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0" Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.589 [INFO][5021] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:15.606172 containerd[1503]: 2026-04-17 23:52:15.597 [INFO][5014] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb" Apr 17 23:52:15.608445 containerd[1503]: time="2026-04-17T23:52:15.606225694Z" level=info msg="TearDown network for sandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" successfully" Apr 17 23:52:15.639565 containerd[1503]: time="2026-04-17T23:52:15.639490196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:15.639866 containerd[1503]: time="2026-04-17T23:52:15.639707906Z" level=info msg="RemovePodSandbox \"6548fc13fecb692b6af85d5ecde45dba4a1ce1d78eeedae6cf4399b611ae4ecb\" returns successfully" Apr 17 23:52:15.643087 containerd[1503]: time="2026-04-17T23:52:15.643008162Z" level=info msg="StopPodSandbox for \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\"" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.821 [WARNING][5035] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"813387bb-b4f1-4fed-b573-d52cb41c7a62", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15", Pod:"goldmane-9f7667bb8-6lcgw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calibd1ec549dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.822 [INFO][5035] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.822 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" iface="eth0" netns="" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.822 [INFO][5035] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.822 [INFO][5035] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.929 [INFO][5042] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.930 [INFO][5042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.930 [INFO][5042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.948 [WARNING][5042] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.948 [INFO][5042] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.951 [INFO][5042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:15.959849 containerd[1503]: 2026-04-17 23:52:15.955 [INFO][5035] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:15.961209 containerd[1503]: time="2026-04-17T23:52:15.961165329Z" level=info msg="TearDown network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" successfully" Apr 17 23:52:15.961648 containerd[1503]: time="2026-04-17T23:52:15.961589863Z" level=info msg="StopPodSandbox for \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" returns successfully" Apr 17 23:52:15.962761 containerd[1503]: time="2026-04-17T23:52:15.962442698Z" level=info msg="RemovePodSandbox for \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\"" Apr 17 23:52:15.962761 containerd[1503]: time="2026-04-17T23:52:15.962487280Z" level=info msg="Forcibly stopping sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\"" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.062 [WARNING][5061] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"813387bb-b4f1-4fed-b573-d52cb41c7a62", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15", Pod:"goldmane-9f7667bb8-6lcgw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibd1ec549dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.062 [INFO][5061] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.062 [INFO][5061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" iface="eth0" netns="" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.063 [INFO][5061] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.063 [INFO][5061] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.130 [INFO][5068] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.130 [INFO][5068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.131 [INFO][5068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.141 [WARNING][5068] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.141 [INFO][5068] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" HandleID="k8s-pod-network.12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Workload="srv--mc367.gb1.brightbox.com-k8s-goldmane--9f7667bb8--6lcgw-eth0" Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.144 [INFO][5068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:16.150947 containerd[1503]: 2026-04-17 23:52:16.146 [INFO][5061] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621" Apr 17 23:52:16.152356 containerd[1503]: time="2026-04-17T23:52:16.151862917Z" level=info msg="TearDown network for sandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" successfully" Apr 17 23:52:16.157839 containerd[1503]: time="2026-04-17T23:52:16.157797173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:16.158247 containerd[1503]: time="2026-04-17T23:52:16.158020053Z" level=info msg="RemovePodSandbox \"12acafc3246287e3da6ace92e2b6525220ae7a5c2e71b2df1e879deefa2c9621\" returns successfully" Apr 17 23:52:16.174031 containerd[1503]: time="2026-04-17T23:52:16.173973621Z" level=info msg="StopPodSandbox for \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\"" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.278 [WARNING][5083] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa2cb226-7c96-408f-a84d-5eaad54bb710", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f", Pod:"csi-node-driver-wcb7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6cb244a7c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.279 [INFO][5083] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.279 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" iface="eth0" netns="" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.279 [INFO][5083] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.279 [INFO][5083] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.327 [INFO][5090] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.328 [INFO][5090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.328 [INFO][5090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.339 [WARNING][5090] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.339 [INFO][5090] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.341 [INFO][5090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:16.349542 containerd[1503]: 2026-04-17 23:52:16.344 [INFO][5083] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.349542 containerd[1503]: time="2026-04-17T23:52:16.348980627Z" level=info msg="TearDown network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" successfully" Apr 17 23:52:16.349542 containerd[1503]: time="2026-04-17T23:52:16.349188659Z" level=info msg="StopPodSandbox for \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" returns successfully" Apr 17 23:52:16.372916 containerd[1503]: time="2026-04-17T23:52:16.372834493Z" level=info msg="RemovePodSandbox for \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\"" Apr 17 23:52:16.372916 containerd[1503]: time="2026-04-17T23:52:16.372898695Z" level=info msg="Forcibly stopping sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\"" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.487 [WARNING][5103] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa2cb226-7c96-408f-a84d-5eaad54bb710", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f", Pod:"csi-node-driver-wcb7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6cb244a7c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.489 [INFO][5103] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.489 [INFO][5103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" iface="eth0" netns="" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.489 [INFO][5103] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.489 [INFO][5103] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.557 [INFO][5111] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.558 [INFO][5111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.558 [INFO][5111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.579 [WARNING][5111] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.579 [INFO][5111] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" HandleID="k8s-pod-network.354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Workload="srv--mc367.gb1.brightbox.com-k8s-csi--node--driver--wcb7j-eth0" Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.582 [INFO][5111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:16.592962 containerd[1503]: 2026-04-17 23:52:16.588 [INFO][5103] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11" Apr 17 23:52:16.592962 containerd[1503]: time="2026-04-17T23:52:16.592958560Z" level=info msg="TearDown network for sandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" successfully" Apr 17 23:52:16.650689 containerd[1503]: time="2026-04-17T23:52:16.650530219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:16.652122 containerd[1503]: time="2026-04-17T23:52:16.651955857Z" level=info msg="RemovePodSandbox \"354d373ea6fee4babff6a2b0500ad736869972b55ad90b81f1fe8a27e9531a11\" returns successfully" Apr 17 23:52:16.680177 containerd[1503]: time="2026-04-17T23:52:16.680065365Z" level=info msg="StopPodSandbox for \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\"" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.779 [WARNING][5126] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"5c0d7228-c271-42fa-829b-e0c514b077bb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826", Pod:"calico-apiserver-6545b6d484-99zm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali023644f3bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.779 [INFO][5126] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.779 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" iface="eth0" netns="" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.779 [INFO][5126] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.779 [INFO][5126] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.860 [INFO][5133] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.861 [INFO][5133] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.861 [INFO][5133] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.876 [WARNING][5133] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.876 [INFO][5133] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.881 [INFO][5133] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:16.890971 containerd[1503]: 2026-04-17 23:52:16.887 [INFO][5126] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:16.893119 containerd[1503]: time="2026-04-17T23:52:16.892980567Z" level=info msg="TearDown network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" successfully" Apr 17 23:52:16.893596 containerd[1503]: time="2026-04-17T23:52:16.893213729Z" level=info msg="StopPodSandbox for \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" returns successfully" Apr 17 23:52:16.945365 containerd[1503]: time="2026-04-17T23:52:16.944000414Z" level=info msg="RemovePodSandbox for \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\"" Apr 17 23:52:16.945365 containerd[1503]: time="2026-04-17T23:52:16.944066613Z" level=info msg="Forcibly stopping sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\"" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.056 [WARNING][5147] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0", GenerateName:"calico-apiserver-6545b6d484-", Namespace:"calico-system", SelfLink:"", UID:"5c0d7228-c271-42fa-829b-e0c514b077bb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6545b6d484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826", Pod:"calico-apiserver-6545b6d484-99zm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali023644f3bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.057 [INFO][5147] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.057 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" iface="eth0" netns="" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.057 [INFO][5147] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.057 [INFO][5147] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.106 [INFO][5154] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.107 [INFO][5154] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.107 [INFO][5154] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.120 [WARNING][5154] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.120 [INFO][5154] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" HandleID="k8s-pod-network.ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Workload="srv--mc367.gb1.brightbox.com-k8s-calico--apiserver--6545b6d484--99zm2-eth0" Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.124 [INFO][5154] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:17.132258 containerd[1503]: 2026-04-17 23:52:17.128 [INFO][5147] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd" Apr 17 23:52:17.132258 containerd[1503]: time="2026-04-17T23:52:17.131822709Z" level=info msg="TearDown network for sandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" successfully" Apr 17 23:52:17.147722 containerd[1503]: time="2026-04-17T23:52:17.146047803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:52:17.147951 containerd[1503]: time="2026-04-17T23:52:17.147906982Z" level=info msg="RemovePodSandbox \"ccc242cfdf78963e7260475b9392d890cd4156617a412f5bed9b1b7780a561bd\" returns successfully" Apr 17 23:52:17.164344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720880802.mount: Deactivated successfully. 
Apr 17 23:52:17.175465 containerd[1503]: time="2026-04-17T23:52:17.175403279Z" level=info msg="StopPodSandbox for \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\"" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.332 [WARNING][5173] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b54c39e6-5cc3-458a-a4e5-3a986945460e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae", Pod:"coredns-7d764666f9-wmjzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8df6116539b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.334 [INFO][5173] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.334 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" iface="eth0" netns="" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.334 [INFO][5173] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.334 [INFO][5173] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.405 [INFO][5180] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.405 [INFO][5180] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.405 [INFO][5180] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.422 [WARNING][5180] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.422 [INFO][5180] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.425 [INFO][5180] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:17.436500 containerd[1503]: 2026-04-17 23:52:17.430 [INFO][5173] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.439165 containerd[1503]: time="2026-04-17T23:52:17.436570321Z" level=info msg="TearDown network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" successfully" Apr 17 23:52:17.439165 containerd[1503]: time="2026-04-17T23:52:17.436625438Z" level=info msg="StopPodSandbox for \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" returns successfully" Apr 17 23:52:17.463817 containerd[1503]: time="2026-04-17T23:52:17.463758776Z" level=info msg="RemovePodSandbox for \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\"" Apr 17 23:52:17.464617 containerd[1503]: time="2026-04-17T23:52:17.464387093Z" level=info msg="Forcibly stopping sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\"" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.555 [WARNING][5194] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b54c39e6-5cc3-458a-a4e5-3a986945460e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"6b2f29af789a534f1ae6918c6808ce0e1d747432a4310e23068f5f5f285373ae", Pod:"coredns-7d764666f9-wmjzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8df6116539b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.556 [INFO][5194] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.556 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" iface="eth0" netns="" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.556 [INFO][5194] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.556 [INFO][5194] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.617 [INFO][5201] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.617 [INFO][5201] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.619 [INFO][5201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.633 [WARNING][5201] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.633 [INFO][5201] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" HandleID="k8s-pod-network.de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--wmjzl-eth0" Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.680 [INFO][5201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:17.690358 containerd[1503]: 2026-04-17 23:52:17.685 [INFO][5194] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034" Apr 17 23:52:17.690358 containerd[1503]: time="2026-04-17T23:52:17.689305034Z" level=info msg="TearDown network for sandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" successfully" Apr 17 23:52:17.724612 containerd[1503]: time="2026-04-17T23:52:17.724527934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:17.724883 containerd[1503]: time="2026-04-17T23:52:17.724634181Z" level=info msg="RemovePodSandbox \"de2724a438beb030d4d4aec6fba103f52722bba86e081489a67a49eaf461e034\" returns successfully" Apr 17 23:52:18.148262 containerd[1503]: time="2026-04-17T23:52:18.148149039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:18.150997 containerd[1503]: time="2026-04-17T23:52:18.150919404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:52:18.153399 containerd[1503]: time="2026-04-17T23:52:18.153321992Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:18.170967 containerd[1503]: time="2026-04-17T23:52:18.170874128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:18.173498 containerd[1503]: time="2026-04-17T23:52:18.173451257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.19269186s" Apr 17 23:52:18.173628 containerd[1503]: time="2026-04-17T23:52:18.173534108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:52:18.193971 containerd[1503]: time="2026-04-17T23:52:18.193482780Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:52:18.275514 containerd[1503]: time="2026-04-17T23:52:18.275452185Z" level=info msg="CreateContainer within sandbox \"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:52:18.296160 containerd[1503]: time="2026-04-17T23:52:18.296086009Z" level=info msg="CreateContainer within sandbox \"76b6843780b609c32c1efe02ecb2e2e8d7743249b85d2f4e674bd1e0173e4a15\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14\"" Apr 17 23:52:18.297777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189639155.mount: Deactivated successfully. Apr 17 23:52:18.305497 containerd[1503]: time="2026-04-17T23:52:18.305202824Z" level=info msg="StartContainer for \"2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14\"" Apr 17 23:52:18.409459 systemd[1]: Started cri-containerd-2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14.scope - libcontainer container 2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14. Apr 17 23:52:18.523808 containerd[1503]: time="2026-04-17T23:52:18.523747600Z" level=info msg="StartContainer for \"2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14\" returns successfully" Apr 17 23:52:18.771919 containerd[1503]: time="2026-04-17T23:52:18.771290687Z" level=info msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\"" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.844 [INFO][5256] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.845 [INFO][5256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" iface="eth0" netns="/var/run/netns/cni-21eb088c-70c2-a3b8-0d0a-6a7c892325d7" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.846 [INFO][5256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" iface="eth0" netns="/var/run/netns/cni-21eb088c-70c2-a3b8-0d0a-6a7c892325d7" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.851 [INFO][5256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" iface="eth0" netns="/var/run/netns/cni-21eb088c-70c2-a3b8-0d0a-6a7c892325d7" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.851 [INFO][5256] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.851 [INFO][5256] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.913 [INFO][5264] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.913 [INFO][5264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.913 [INFO][5264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.924 [WARNING][5264] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.924 [INFO][5264] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.926 [INFO][5264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:18.931861 containerd[1503]: 2026-04-17 23:52:18.929 [INFO][5256] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Apr 17 23:52:18.936401 containerd[1503]: time="2026-04-17T23:52:18.932555721Z" level=info msg="TearDown network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" successfully" Apr 17 23:52:18.936401 containerd[1503]: time="2026-04-17T23:52:18.932596951Z" level=info msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" returns successfully" Apr 17 23:52:18.943866 systemd[1]: run-netns-cni\x2d21eb088c\x2d70c2\x2da3b8\x2d0d0a\x2d6a7c892325d7.mount: Deactivated successfully. 
Apr 17 23:52:18.964820 containerd[1503]: time="2026-04-17T23:52:18.964752319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-49g4d,Uid:3f508cf3-d08e-41ad-9a03-91094ecb3e48,Namespace:kube-system,Attempt:1,}" Apr 17 23:52:19.386970 systemd-networkd[1433]: cali48ff71350e1: Link UP Apr 17 23:52:19.388491 systemd-networkd[1433]: cali48ff71350e1: Gained carrier Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.109 [INFO][5270] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0 coredns-7d764666f9- kube-system 3f508cf3-d08e-41ad-9a03-91094ecb3e48 1037 0 2026-04-17 23:51:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com coredns-7d764666f9-49g4d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48ff71350e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.112 [INFO][5270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.266 [INFO][5301] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" 
HandleID="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.296 [INFO][5301] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" HandleID="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e440), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"coredns-7d764666f9-49g4d", "timestamp":"2026-04-17 23:52:19.266733713 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001146e0)} Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.296 [INFO][5301] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.296 [INFO][5301] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.297 [INFO][5301] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.317 [INFO][5301] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.334 [INFO][5301] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.347 [INFO][5301] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.350 [INFO][5301] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.353 [INFO][5301] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.353 [INFO][5301] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.356 [INFO][5301] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835 Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.362 [INFO][5301] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.373 [INFO][5301] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.8/26] block=192.168.91.0/26 handle="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.374 [INFO][5301] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.8/26] handle="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.374 [INFO][5301] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:19.455295 containerd[1503]: 2026-04-17 23:52:19.374 [INFO][5301] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.8/26] IPv6=[] ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" HandleID="k8s-pod-network.a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.457777 containerd[1503]: 2026-04-17 23:52:19.377 [INFO][5270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3f508cf3-d08e-41ad-9a03-91094ecb3e48", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-49g4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48ff71350e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:19.457777 containerd[1503]: 2026-04-17 23:52:19.377 [INFO][5270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.8/32] ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.457777 containerd[1503]: 2026-04-17 23:52:19.377 [INFO][5270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48ff71350e1 
ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.457777 containerd[1503]: 2026-04-17 23:52:19.413 [INFO][5270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.457777 containerd[1503]: 2026-04-17 23:52:19.419 [INFO][5270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3f508cf3-d08e-41ad-9a03-91094ecb3e48", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", 
ContainerID:"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835", Pod:"coredns-7d764666f9-49g4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48ff71350e1", MAC:"e2:9b:6d:f8:dc:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:19.458762 containerd[1503]: 2026-04-17 23:52:19.446 [INFO][5270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835" Namespace="kube-system" Pod="coredns-7d764666f9-49g4d" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0" Apr 17 23:52:19.489199 kubelet[2665]: I0417 23:52:19.473599 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-6lcgw" podStartSLOduration=31.91978429 podStartE2EDuration="43.44585475s" podCreationTimestamp="2026-04-17 23:51:36 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.658833894 +0000 UTC m=+53.155920331" lastFinishedPulling="2026-04-17 23:52:18.184904343 
+0000 UTC m=+64.681990791" observedRunningTime="2026-04-17 23:52:19.149708468 +0000 UTC m=+65.646794920" watchObservedRunningTime="2026-04-17 23:52:19.44585475 +0000 UTC m=+65.942941188" Apr 17 23:52:19.553431 containerd[1503]: time="2026-04-17T23:52:19.548819347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:19.553431 containerd[1503]: time="2026-04-17T23:52:19.552115265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:19.553431 containerd[1503]: time="2026-04-17T23:52:19.552229663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:19.553431 containerd[1503]: time="2026-04-17T23:52:19.552443752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:19.622393 systemd[1]: Started cri-containerd-a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835.scope - libcontainer container a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835. Apr 17 23:52:19.737828 containerd[1503]: time="2026-04-17T23:52:19.737690869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-49g4d,Uid:3f508cf3-d08e-41ad-9a03-91094ecb3e48,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835\"" Apr 17 23:52:19.761220 containerd[1503]: time="2026-04-17T23:52:19.760907615Z" level=info msg="CreateContainer within sandbox \"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:52:19.791630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257443732.mount: Deactivated successfully. 
Apr 17 23:52:19.800916 containerd[1503]: time="2026-04-17T23:52:19.800667203Z" level=info msg="CreateContainer within sandbox \"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38bea281eddc9374ee98bf1b7861d1870eb2faf0351a28e87d562c69f76aa268\"" Apr 17 23:52:19.803172 containerd[1503]: time="2026-04-17T23:52:19.803114496Z" level=info msg="StartContainer for \"38bea281eddc9374ee98bf1b7861d1870eb2faf0351a28e87d562c69f76aa268\"" Apr 17 23:52:19.869460 systemd[1]: Started cri-containerd-38bea281eddc9374ee98bf1b7861d1870eb2faf0351a28e87d562c69f76aa268.scope - libcontainer container 38bea281eddc9374ee98bf1b7861d1870eb2faf0351a28e87d562c69f76aa268. Apr 17 23:52:19.920243 containerd[1503]: time="2026-04-17T23:52:19.920059740Z" level=info msg="StartContainer for \"38bea281eddc9374ee98bf1b7861d1870eb2faf0351a28e87d562c69f76aa268\" returns successfully" Apr 17 23:52:20.176554 kubelet[2665]: I0417 23:52:20.176478 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-49g4d" podStartSLOduration=61.176459759 podStartE2EDuration="1m1.176459759s" podCreationTimestamp="2026-04-17 23:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:52:20.175602416 +0000 UTC m=+66.672688872" watchObservedRunningTime="2026-04-17 23:52:20.176459759 +0000 UTC m=+66.673546205" Apr 17 23:52:20.298069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917442178.mount: Deactivated successfully. 
Apr 17 23:52:21.180233 systemd-networkd[1433]: cali48ff71350e1: Gained IPv6LL
Apr 17 23:52:24.072978 containerd[1503]: time="2026-04-17T23:52:24.072894886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:24.074264 containerd[1503]: time="2026-04-17T23:52:24.074198751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 17 23:52:24.080174 containerd[1503]: time="2026-04-17T23:52:24.076797941Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:24.082883 containerd[1503]: time="2026-04-17T23:52:24.082817129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:24.084677 containerd[1503]: time="2026-04-17T23:52:24.084254666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.890711901s"
Apr 17 23:52:24.084677 containerd[1503]: time="2026-04-17T23:52:24.084307029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 17 23:52:24.100629 containerd[1503]: time="2026-04-17T23:52:24.100325356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 17 23:52:24.132500 containerd[1503]: time="2026-04-17T23:52:24.132440599Z" level=info msg="CreateContainer within sandbox \"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 23:52:24.163226 containerd[1503]: time="2026-04-17T23:52:24.162781888Z" level=info msg="CreateContainer within sandbox \"68f4eab88771fb48007ce5faa47214f2b915dd604cef84d71c348b298c83f826\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f92989777fe68e5a8f3cedf0a8ebd43cd72d9a86d623aacdb54312877eb3349\""
Apr 17 23:52:24.164632 containerd[1503]: time="2026-04-17T23:52:24.164391331Z" level=info msg="StartContainer for \"3f92989777fe68e5a8f3cedf0a8ebd43cd72d9a86d623aacdb54312877eb3349\""
Apr 17 23:52:24.324474 systemd[1]: Started cri-containerd-3f92989777fe68e5a8f3cedf0a8ebd43cd72d9a86d623aacdb54312877eb3349.scope - libcontainer container 3f92989777fe68e5a8f3cedf0a8ebd43cd72d9a86d623aacdb54312877eb3349.
Apr 17 23:52:24.418646 containerd[1503]: time="2026-04-17T23:52:24.418118834Z" level=info msg="StartContainer for \"3f92989777fe68e5a8f3cedf0a8ebd43cd72d9a86d623aacdb54312877eb3349\" returns successfully"
Apr 17 23:52:24.481286 containerd[1503]: time="2026-04-17T23:52:24.481206875Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:24.484598 containerd[1503]: time="2026-04-17T23:52:24.484109024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 17 23:52:24.504220 containerd[1503]: time="2026-04-17T23:52:24.504058466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 403.671513ms"
Apr 17 23:52:24.504511 containerd[1503]: time="2026-04-17T23:52:24.504481915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 17 23:52:24.507380 containerd[1503]: time="2026-04-17T23:52:24.507300885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 17 23:52:24.520966 containerd[1503]: time="2026-04-17T23:52:24.520896739Z" level=info msg="CreateContainer within sandbox \"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 23:52:24.554315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150282569.mount: Deactivated successfully.
Apr 17 23:52:24.585167 containerd[1503]: time="2026-04-17T23:52:24.584763482Z" level=info msg="CreateContainer within sandbox \"8c06ba500cc54c6dede3dac60a2d1abfdeb5e0121695c5e0d47e94649ea9ba60\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0f3d54e54a1674b6a79092b96e97f2e8ee8796244d6566219d67b3fb531203b8\""
Apr 17 23:52:24.586772 containerd[1503]: time="2026-04-17T23:52:24.586738471Z" level=info msg="StartContainer for \"0f3d54e54a1674b6a79092b96e97f2e8ee8796244d6566219d67b3fb531203b8\""
Apr 17 23:52:24.652692 systemd[1]: Started cri-containerd-0f3d54e54a1674b6a79092b96e97f2e8ee8796244d6566219d67b3fb531203b8.scope - libcontainer container 0f3d54e54a1674b6a79092b96e97f2e8ee8796244d6566219d67b3fb531203b8.
Apr 17 23:52:24.738504 containerd[1503]: time="2026-04-17T23:52:24.738445026Z" level=info msg="StartContainer for \"0f3d54e54a1674b6a79092b96e97f2e8ee8796244d6566219d67b3fb531203b8\" returns successfully"
Apr 17 23:52:25.262497 kubelet[2665]: I0417 23:52:25.262393 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6545b6d484-khkvj" podStartSLOduration=31.703728248 podStartE2EDuration="49.262371879s" podCreationTimestamp="2026-04-17 23:51:36 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.948441648 +0000 UTC m=+53.445528088" lastFinishedPulling="2026-04-17 23:52:24.507085274 +0000 UTC m=+71.004171719" observedRunningTime="2026-04-17 23:52:25.256694692 +0000 UTC m=+71.753781145" watchObservedRunningTime="2026-04-17 23:52:25.262371879 +0000 UTC m=+71.759458331"
Apr 17 23:52:25.302493 kubelet[2665]: I0417 23:52:25.301370 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6545b6d484-99zm2" podStartSLOduration=31.90721853 podStartE2EDuration="49.30134767s" podCreationTimestamp="2026-04-17 23:51:36 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.705697788 +0000 UTC m=+53.202784226" lastFinishedPulling="2026-04-17 23:52:24.099826935 +0000 UTC m=+70.596913366" observedRunningTime="2026-04-17 23:52:25.299907939 +0000 UTC m=+71.796994400" watchObservedRunningTime="2026-04-17 23:52:25.30134767 +0000 UTC m=+71.798434115"
Apr 17 23:52:26.201841 kubelet[2665]: I0417 23:52:26.200626 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:52:26.201841 kubelet[2665]: I0417 23:52:26.200739 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:52:26.430578 containerd[1503]: time="2026-04-17T23:52:26.430504229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:26.437665 containerd[1503]: time="2026-04-17T23:52:26.434976395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 17 23:52:26.437665 containerd[1503]: time="2026-04-17T23:52:26.435588978Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:26.461538 containerd[1503]: time="2026-04-17T23:52:26.460695174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:26.468452 containerd[1503]: time="2026-04-17T23:52:26.468397350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.961046633s"
Apr 17 23:52:26.468950 containerd[1503]: time="2026-04-17T23:52:26.468727525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 17 23:52:26.473968 containerd[1503]: time="2026-04-17T23:52:26.473913948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 17 23:52:26.489098 containerd[1503]: time="2026-04-17T23:52:26.488855867Z" level=info msg="CreateContainer within sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 17 23:52:26.536758 containerd[1503]: time="2026-04-17T23:52:26.536543914Z" level=info msg="CreateContainer within sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\""
Apr 17 23:52:26.540253 containerd[1503]: time="2026-04-17T23:52:26.538732873Z" level=info msg="StartContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\""
Apr 17 23:52:26.630792 systemd[1]: run-containerd-runc-k8s.io-d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46-runc.o1bS52.mount: Deactivated successfully.
Apr 17 23:52:26.647489 systemd[1]: Started cri-containerd-d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46.scope - libcontainer container d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46.
Apr 17 23:52:26.785200 containerd[1503]: time="2026-04-17T23:52:26.783622062Z" level=info msg="StartContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" returns successfully"
Apr 17 23:52:28.043012 systemd[1]: Started sshd@9-10.244.23.222:22-4.175.71.9:45248.service - OpenSSH per-connection server daemon (4.175.71.9:45248).
Apr 17 23:52:28.444191 sshd[5596]: Accepted publickey for core from 4.175.71.9 port 45248 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:52:28.453299 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:52:28.480300 systemd-logind[1486]: New session 12 of user core.
Apr 17 23:52:28.484611 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:52:29.005813 containerd[1503]: time="2026-04-17T23:52:29.005742136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:29.007490 containerd[1503]: time="2026-04-17T23:52:29.006389844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 17 23:52:29.010441 containerd[1503]: time="2026-04-17T23:52:29.009470644Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:29.016611 containerd[1503]: time="2026-04-17T23:52:29.015652124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:29.020377 containerd[1503]: time="2026-04-17T23:52:29.017992929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.544023867s"
Apr 17 23:52:29.022092 containerd[1503]: time="2026-04-17T23:52:29.021776669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 17 23:52:29.070757 containerd[1503]: time="2026-04-17T23:52:29.070705082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 17 23:52:29.532651 containerd[1503]: time="2026-04-17T23:52:29.532579824Z" level=info msg="CreateContainer within sandbox \"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 17 23:52:29.579066 containerd[1503]: time="2026-04-17T23:52:29.578995440Z" level=info msg="CreateContainer within sandbox \"60337956406376c413f240ef5375a1927b525c0c7a79fd701cd0313e2c87f78f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"35b7bb0cc72b28e3503f50dad070746276269dd6b7c6b1a2f4d50d929523b544\""
Apr 17 23:52:29.585890 containerd[1503]: time="2026-04-17T23:52:29.584440781Z" level=info msg="StartContainer for \"35b7bb0cc72b28e3503f50dad070746276269dd6b7c6b1a2f4d50d929523b544\""
Apr 17 23:52:29.586953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191262037.mount: Deactivated successfully.
Apr 17 23:52:29.664285 sshd[5596]: pam_unix(sshd:session): session closed for user core
Apr 17 23:52:29.680038 systemd[1]: sshd@9-10.244.23.222:22-4.175.71.9:45248.service: Deactivated successfully.
Apr 17 23:52:29.684091 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:52:29.687244 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:52:29.691005 systemd-logind[1486]: Removed session 12.
Apr 17 23:52:29.749502 systemd[1]: Started cri-containerd-35b7bb0cc72b28e3503f50dad070746276269dd6b7c6b1a2f4d50d929523b544.scope - libcontainer container 35b7bb0cc72b28e3503f50dad070746276269dd6b7c6b1a2f4d50d929523b544.
Apr 17 23:52:29.840812 containerd[1503]: time="2026-04-17T23:52:29.840660441Z" level=info msg="StartContainer for \"35b7bb0cc72b28e3503f50dad070746276269dd6b7c6b1a2f4d50d929523b544\" returns successfully"
Apr 17 23:52:30.283589 kubelet[2665]: I0417 23:52:30.283409 2665 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 17 23:52:30.283589 kubelet[2665]: I0417 23:52:30.283480 2665 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 17 23:52:30.455053 kubelet[2665]: I0417 23:52:30.451872 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-wcb7j" podStartSLOduration=30.912692174 podStartE2EDuration="53.43064581s" podCreationTimestamp="2026-04-17 23:51:37 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.560810536 +0000 UTC m=+53.057896967" lastFinishedPulling="2026-04-17 23:52:29.078764151 +0000 UTC m=+75.575850603" observedRunningTime="2026-04-17 23:52:30.424768612 +0000 UTC m=+76.921855060" watchObservedRunningTime="2026-04-17 23:52:30.43064581 +0000 UTC m=+76.927732260"
Apr 17 23:52:31.318793 kubelet[2665]: I0417 23:52:31.317747 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:52:32.187737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392545581.mount: Deactivated successfully.
Apr 17 23:52:32.219817 containerd[1503]: time="2026-04-17T23:52:32.219616669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:32.224863 containerd[1503]: time="2026-04-17T23:52:32.224743837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 17 23:52:32.225861 containerd[1503]: time="2026-04-17T23:52:32.225713766Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:32.232206 containerd[1503]: time="2026-04-17T23:52:32.231859173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:52:32.234253 containerd[1503]: time="2026-04-17T23:52:32.234105748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.161477811s"
Apr 17 23:52:32.234356 containerd[1503]: time="2026-04-17T23:52:32.234235402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 17 23:52:32.249020 containerd[1503]: time="2026-04-17T23:52:32.248961788Z" level=info msg="CreateContainer within sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 17 23:52:32.276753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350681013.mount: Deactivated successfully.
Apr 17 23:52:32.284057 containerd[1503]: time="2026-04-17T23:52:32.283969243Z" level=info msg="CreateContainer within sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\""
Apr 17 23:52:32.285451 containerd[1503]: time="2026-04-17T23:52:32.285399764Z" level=info msg="StartContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\""
Apr 17 23:52:32.359553 systemd[1]: Started cri-containerd-1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed.scope - libcontainer container 1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed.
Apr 17 23:52:32.440304 containerd[1503]: time="2026-04-17T23:52:32.440033312Z" level=info msg="StartContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" returns successfully"
Apr 17 23:52:32.646199 containerd[1503]: time="2026-04-17T23:52:32.645281597Z" level=info msg="StopContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" with timeout 30 (s)"
Apr 17 23:52:32.647543 containerd[1503]: time="2026-04-17T23:52:32.647511153Z" level=info msg="Stop container \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" with signal terminated"
Apr 17 23:52:32.661353 containerd[1503]: time="2026-04-17T23:52:32.661305606Z" level=info msg="StopContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" with timeout 30 (s)"
Apr 17 23:52:32.661862 containerd[1503]: time="2026-04-17T23:52:32.661826313Z" level=info msg="Stop container \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" with signal terminated"
Apr 17 23:52:32.689032 systemd[1]: cri-containerd-1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed.scope: Deactivated successfully.
Apr 17 23:52:32.725313 systemd[1]: cri-containerd-d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46.scope: Deactivated successfully.
Apr 17 23:52:32.792113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46-rootfs.mount: Deactivated successfully.
Apr 17 23:52:32.797578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed-rootfs.mount: Deactivated successfully.
Apr 17 23:52:32.866279 containerd[1503]: time="2026-04-17T23:52:32.838495586Z" level=info msg="shim disconnected" id=d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46 namespace=k8s.io
Apr 17 23:52:32.866961 containerd[1503]: time="2026-04-17T23:52:32.839065265Z" level=info msg="shim disconnected" id=1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed namespace=k8s.io
Apr 17 23:52:32.874822 containerd[1503]: time="2026-04-17T23:52:32.874493501Z" level=warning msg="cleaning up after shim disconnected" id=1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed namespace=k8s.io
Apr 17 23:52:32.874822 containerd[1503]: time="2026-04-17T23:52:32.874526449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:52:32.875976 containerd[1503]: time="2026-04-17T23:52:32.875903720Z" level=warning msg="cleaning up after shim disconnected" id=d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46 namespace=k8s.io
Apr 17 23:52:32.875976 containerd[1503]: time="2026-04-17T23:52:32.875969860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:52:32.926516 containerd[1503]: time="2026-04-17T23:52:32.926303303Z" level=info msg="StopContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" returns successfully"
Apr 17 23:52:32.927866 containerd[1503]: time="2026-04-17T23:52:32.927053171Z" level=info msg="StopContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" returns successfully"
Apr 17 23:52:32.933984 containerd[1503]: time="2026-04-17T23:52:32.933915545Z" level=info msg="StopPodSandbox for \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\""
Apr 17 23:52:32.938723 containerd[1503]: time="2026-04-17T23:52:32.938619146Z" level=info msg="Container to stop \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:52:32.938723 containerd[1503]: time="2026-04-17T23:52:32.938668451Z" level=info msg="Container to stop \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:52:32.947617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec-shm.mount: Deactivated successfully.
Apr 17 23:52:32.956275 systemd[1]: cri-containerd-98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec.scope: Deactivated successfully.
Apr 17 23:52:33.000120 containerd[1503]: time="2026-04-17T23:52:32.999926094Z" level=info msg="shim disconnected" id=98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec namespace=k8s.io
Apr 17 23:52:33.000120 containerd[1503]: time="2026-04-17T23:52:33.000022880Z" level=warning msg="cleaning up after shim disconnected" id=98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec namespace=k8s.io
Apr 17 23:52:33.000120 containerd[1503]: time="2026-04-17T23:52:33.000039893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:52:33.003366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec-rootfs.mount: Deactivated successfully.
Apr 17 23:52:33.332642 kubelet[2665]: I0417 23:52:33.331184 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6fdcf6b9f6-dqfx4" podStartSLOduration=24.065075701 podStartE2EDuration="49.331127019s" podCreationTimestamp="2026-04-17 23:51:44 +0000 UTC" firstStartedPulling="2026-04-17 23:52:06.969363075 +0000 UTC m=+53.466449512" lastFinishedPulling="2026-04-17 23:52:32.235414392 +0000 UTC m=+78.732500830" observedRunningTime="2026-04-17 23:52:32.492927611 +0000 UTC m=+78.990014071" watchObservedRunningTime="2026-04-17 23:52:33.331127019 +0000 UTC m=+79.828213477"
Apr 17 23:52:33.337809 systemd-networkd[1433]: cali6a668c8560e: Link DOWN
Apr 17 23:52:33.337823 systemd-networkd[1433]: cali6a668c8560e: Lost carrier
Apr 17 23:52:33.520307 kubelet[2665]: I0417 23:52:33.516130 2665 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.327 [INFO][5824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.334 [INFO][5824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" iface="eth0" netns="/var/run/netns/cni-f5a35093-1cf8-232e-c41e-e673d934bfb2"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.335 [INFO][5824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" iface="eth0" netns="/var/run/netns/cni-f5a35093-1cf8-232e-c41e-e673d934bfb2"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.367 [INFO][5824] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" after=32.77658ms iface="eth0" netns="/var/run/netns/cni-f5a35093-1cf8-232e-c41e-e673d934bfb2"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.368 [INFO][5824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.368 [INFO][5824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.607 [INFO][5862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.610 [INFO][5862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.610 [INFO][5862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.682 [INFO][5862] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.682 [INFO][5862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.684 [INFO][5862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:52:33.692177 containerd[1503]: 2026-04-17 23:52:33.687 [INFO][5824] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:52:33.699972 containerd[1503]: time="2026-04-17T23:52:33.692905694Z" level=info msg="TearDown network for sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" successfully"
Apr 17 23:52:33.699972 containerd[1503]: time="2026-04-17T23:52:33.693776894Z" level=info msg="StopPodSandbox for \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" returns successfully"
Apr 17 23:52:33.703622 systemd[1]: run-netns-cni\x2df5a35093\x2d1cf8\x2d232e\x2dc41e\x2de673d934bfb2.mount: Deactivated successfully.
Apr 17 23:52:33.966351 systemd[1]: Created slice kubepods-besteffort-pod642fb620_adc1_404a_b43b_a30e9a0fb6f6.slice - libcontainer container kubepods-besteffort-pod642fb620_adc1_404a_b43b_a30e9a0fb6f6.slice.
Apr 17 23:52:33.986190 kubelet[2665]: I0417 23:52:33.986122 2665 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-nginx-config\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-nginx-config\") pod \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") "
Apr 17 23:52:33.986527 kubelet[2665]: I0417 23:52:33.986490 2665 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-ca-bundle\") pod \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") "
Apr 17 23:52:33.986757 kubelet[2665]: I0417 23:52:33.986731 2665 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1254b0aa-66e4-4dd1-bb91-02f716eb390c-kube-api-access-crkr7\" (UniqueName: \"kubernetes.io/projected/1254b0aa-66e4-4dd1-bb91-02f716eb390c-kube-api-access-crkr7\") pod \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") "
Apr 17 23:52:33.987186 kubelet[2665]: I0417 23:52:33.987161 2665 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-backend-key-pair\") pod \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\" (UID: \"1254b0aa-66e4-4dd1-bb91-02f716eb390c\") "
Apr 17 23:52:33.999165 kubelet[2665]: I0417 23:52:33.997253 2665 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-ca-bundle" pod "1254b0aa-66e4-4dd1-bb91-02f716eb390c" (UID: "1254b0aa-66e4-4dd1-bb91-02f716eb390c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:52:33.999308 kubelet[2665]: I0417 23:52:33.997260 2665 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-nginx-config" pod "1254b0aa-66e4-4dd1-bb91-02f716eb390c" (UID: "1254b0aa-66e4-4dd1-bb91-02f716eb390c"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:52:34.070166 kubelet[2665]: I0417 23:52:34.067939 2665 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1254b0aa-66e4-4dd1-bb91-02f716eb390c-kube-api-access-crkr7" pod "1254b0aa-66e4-4dd1-bb91-02f716eb390c" (UID: "1254b0aa-66e4-4dd1-bb91-02f716eb390c"). InnerVolumeSpecName "kube-api-access-crkr7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:52:34.073710 kubelet[2665]: I0417 23:52:34.073674 2665 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-backend-key-pair" pod "1254b0aa-66e4-4dd1-bb91-02f716eb390c" (UID: "1254b0aa-66e4-4dd1-bb91-02f716eb390c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:52:34.074493 systemd[1]: var-lib-kubelet-pods-1254b0aa\x2d66e4\x2d4dd1\x2dbb91\x2d02f716eb390c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrkr7.mount: Deactivated successfully.
Apr 17 23:52:34.074721 systemd[1]: var-lib-kubelet-pods-1254b0aa\x2d66e4\x2d4dd1\x2dbb91\x2d02f716eb390c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 17 23:52:34.088273 kubelet[2665]: I0417 23:52:34.088230 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/642fb620-adc1-404a-b43b-a30e9a0fb6f6-whisker-backend-key-pair\") pod \"whisker-74484697b9-7qdqg\" (UID: \"642fb620-adc1-404a-b43b-a30e9a0fb6f6\") " pod="calico-system/whisker-74484697b9-7qdqg"
Apr 17 23:52:34.094643 kubelet[2665]: I0417 23:52:34.094608 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd484\" (UniqueName: \"kubernetes.io/projected/642fb620-adc1-404a-b43b-a30e9a0fb6f6-kube-api-access-pd484\") pod \"whisker-74484697b9-7qdqg\" (UID: \"642fb620-adc1-404a-b43b-a30e9a0fb6f6\") " pod="calico-system/whisker-74484697b9-7qdqg"
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094662 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/642fb620-adc1-404a-b43b-a30e9a0fb6f6-nginx-config\") pod \"whisker-74484697b9-7qdqg\" (UID: \"642fb620-adc1-404a-b43b-a30e9a0fb6f6\") " pod="calico-system/whisker-74484697b9-7qdqg"
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094693 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/642fb620-adc1-404a-b43b-a30e9a0fb6f6-whisker-ca-bundle\") pod \"whisker-74484697b9-7qdqg\" (UID: \"642fb620-adc1-404a-b43b-a30e9a0fb6f6\") " pod="calico-system/whisker-74484697b9-7qdqg"
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094739 2665 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-nginx-config\") on node \"srv-mc367.gb1.brightbox.com\" DevicePath \"\""
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094759 2665 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crkr7\" (UniqueName: \"kubernetes.io/projected/1254b0aa-66e4-4dd1-bb91-02f716eb390c-kube-api-access-crkr7\") on node \"srv-mc367.gb1.brightbox.com\" DevicePath \"\""
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094784 2665 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-ca-bundle\") on node \"srv-mc367.gb1.brightbox.com\" DevicePath \"\""
Apr 17 23:52:34.095109 kubelet[2665]: I0417 23:52:34.094854 2665 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1254b0aa-66e4-4dd1-bb91-02f716eb390c-whisker-backend-key-pair\") on node \"srv-mc367.gb1.brightbox.com\" DevicePath \"\""
Apr 17 23:52:34.293009 containerd[1503]: time="2026-04-17T23:52:34.292816601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74484697b9-7qdqg,Uid:642fb620-adc1-404a-b43b-a30e9a0fb6f6,Namespace:calico-system,Attempt:0,}"
Apr 17 23:52:34.539481 systemd[1]: Removed slice kubepods-besteffort-pod1254b0aa_66e4_4dd1_bb91_02f716eb390c.slice - libcontainer container kubepods-besteffort-pod1254b0aa_66e4_4dd1_bb91_02f716eb390c.slice.
Apr 17 23:52:34.636984 systemd-networkd[1433]: cali66f23b1d090: Link UP Apr 17 23:52:34.637353 systemd-networkd[1433]: cali66f23b1d090: Gained carrier Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.479 [INFO][5891] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0 whisker-74484697b9- calico-system 642fb620-adc1-404a-b43b-a30e9a0fb6f6 1190 0 2026-04-17 23:52:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74484697b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-mc367.gb1.brightbox.com whisker-74484697b9-7qdqg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali66f23b1d090 [] [] }} ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.479 [INFO][5891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.544 [INFO][5903] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" HandleID="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.556 [INFO][5903] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" HandleID="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef510), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mc367.gb1.brightbox.com", "pod":"whisker-74484697b9-7qdqg", "timestamp":"2026-04-17 23:52:34.544212677 +0000 UTC"}, Hostname:"srv-mc367.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004fcf20)} Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.556 [INFO][5903] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.556 [INFO][5903] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.556 [INFO][5903] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mc367.gb1.brightbox.com' Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.560 [INFO][5903] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.579 [INFO][5903] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.589 [INFO][5903] ipam/ipam.go 526: Trying affinity for 192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.592 [INFO][5903] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.596 [INFO][5903] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.596 [INFO][5903] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.599 [INFO][5903] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.607 [INFO][5903] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.624 [INFO][5903] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.9/26] block=192.168.91.0/26 handle="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.624 [INFO][5903] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.9/26] handle="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" host="srv-mc367.gb1.brightbox.com" Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.624 [INFO][5903] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:34.692188 containerd[1503]: 2026-04-17 23:52:34.625 [INFO][5903] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.9/26] IPv6=[] ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" HandleID="k8s-pod-network.43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.630 [INFO][5891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0", GenerateName:"whisker-74484697b9-", Namespace:"calico-system", SelfLink:"", UID:"642fb620-adc1-404a-b43b-a30e9a0fb6f6", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74484697b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"", Pod:"whisker-74484697b9-7qdqg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66f23b1d090", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.630 [INFO][5891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.9/32] ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.630 [INFO][5891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66f23b1d090 ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.645 [INFO][5891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.650 [INFO][5891] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0", GenerateName:"whisker-74484697b9-", Namespace:"calico-system", SelfLink:"", UID:"642fb620-adc1-404a-b43b-a30e9a0fb6f6", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74484697b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c", Pod:"whisker-74484697b9-7qdqg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66f23b1d090", MAC:"0a:e0:65:11:55:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:34.698395 containerd[1503]: 2026-04-17 23:52:34.674 [INFO][5891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c" Namespace="calico-system" Pod="whisker-74484697b9-7qdqg" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--74484697b9--7qdqg-eth0" Apr 17 23:52:34.703108 systemd[1]: Started sshd@10-10.244.23.222:22-4.175.71.9:45262.service - OpenSSH per-connection server daemon (4.175.71.9:45262). Apr 17 23:52:34.879184 containerd[1503]: time="2026-04-17T23:52:34.878996861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:34.880085 containerd[1503]: time="2026-04-17T23:52:34.880000869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:34.880214 containerd[1503]: time="2026-04-17T23:52:34.880082920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:34.880680 containerd[1503]: time="2026-04-17T23:52:34.880312966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:34.964464 systemd[1]: Started cri-containerd-43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c.scope - libcontainer container 43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c. Apr 17 23:52:34.974031 sshd[5913]: Accepted publickey for core from 4.175.71.9 port 45262 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:34.980885 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:34.998238 systemd-logind[1486]: New session 13 of user core. Apr 17 23:52:35.005524 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 17 23:52:35.114165 containerd[1503]: time="2026-04-17T23:52:35.113939580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74484697b9-7qdqg,Uid:642fb620-adc1-404a-b43b-a30e9a0fb6f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c\"" Apr 17 23:52:35.134090 containerd[1503]: time="2026-04-17T23:52:35.133786502Z" level=info msg="CreateContainer within sandbox \"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:52:35.188525 containerd[1503]: time="2026-04-17T23:52:35.188471249Z" level=info msg="CreateContainer within sandbox \"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ea690778bfa73f41050ee78629264c0bb484c4651db90d80220921f28346cd84\"" Apr 17 23:52:35.192206 containerd[1503]: time="2026-04-17T23:52:35.191973408Z" level=info msg="StartContainer for \"ea690778bfa73f41050ee78629264c0bb484c4651db90d80220921f28346cd84\"" Apr 17 23:52:35.277659 systemd[1]: Started cri-containerd-ea690778bfa73f41050ee78629264c0bb484c4651db90d80220921f28346cd84.scope - libcontainer container ea690778bfa73f41050ee78629264c0bb484c4651db90d80220921f28346cd84. 
Apr 17 23:52:35.392803 containerd[1503]: time="2026-04-17T23:52:35.392661355Z" level=info msg="StartContainer for \"ea690778bfa73f41050ee78629264c0bb484c4651db90d80220921f28346cd84\" returns successfully" Apr 17 23:52:35.409905 containerd[1503]: time="2026-04-17T23:52:35.408925263Z" level=info msg="CreateContainer within sandbox \"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:52:35.466062 containerd[1503]: time="2026-04-17T23:52:35.464074209Z" level=info msg="CreateContainer within sandbox \"43b1e045009cd4020d339ff8d794178cc3f55ba706603a9b4359691d6483084c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3a9e639f262e984e11c39f0c54685dc5bff884b968d58cf5fe5079312c1a9cfd\"" Apr 17 23:52:35.468466 containerd[1503]: time="2026-04-17T23:52:35.467207405Z" level=info msg="StartContainer for \"3a9e639f262e984e11c39f0c54685dc5bff884b968d58cf5fe5079312c1a9cfd\"" Apr 17 23:52:35.625833 systemd[1]: Started cri-containerd-3a9e639f262e984e11c39f0c54685dc5bff884b968d58cf5fe5079312c1a9cfd.scope - libcontainer container 3a9e639f262e984e11c39f0c54685dc5bff884b968d58cf5fe5079312c1a9cfd. Apr 17 23:52:35.768987 kubelet[2665]: I0417 23:52:35.768919 2665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1254b0aa-66e4-4dd1-bb91-02f716eb390c" path="/var/lib/kubelet/pods/1254b0aa-66e4-4dd1-bb91-02f716eb390c/volumes" Apr 17 23:52:35.854716 containerd[1503]: time="2026-04-17T23:52:35.854583631Z" level=info msg="StartContainer for \"3a9e639f262e984e11c39f0c54685dc5bff884b968d58cf5fe5079312c1a9cfd\" returns successfully" Apr 17 23:52:36.033265 sshd[5913]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:36.049083 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:52:36.049528 systemd[1]: sshd@10-10.244.23.222:22-4.175.71.9:45262.service: Deactivated successfully. 
Apr 17 23:52:36.052991 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:52:36.057087 systemd-logind[1486]: Removed session 13. Apr 17 23:52:36.626094 kubelet[2665]: I0417 23:52:36.625974 2665 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-74484697b9-7qdqg" podStartSLOduration=3.625952522 podStartE2EDuration="3.625952522s" podCreationTimestamp="2026-04-17 23:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:52:36.623864939 +0000 UTC m=+83.120951389" watchObservedRunningTime="2026-04-17 23:52:36.625952522 +0000 UTC m=+83.123038966" Apr 17 23:52:36.669404 systemd-networkd[1433]: cali66f23b1d090: Gained IPv6LL Apr 17 23:52:41.071511 systemd[1]: Started sshd@11-10.244.23.222:22-4.175.71.9:40152.service - OpenSSH per-connection server daemon (4.175.71.9:40152). Apr 17 23:52:41.306695 sshd[6098]: Accepted publickey for core from 4.175.71.9 port 40152 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:41.311011 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:41.320220 systemd-logind[1486]: New session 14 of user core. Apr 17 23:52:41.326456 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:52:41.905551 sshd[6098]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:41.909789 systemd[1]: sshd@11-10.244.23.222:22-4.175.71.9:40152.service: Deactivated successfully. Apr 17 23:52:41.912530 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:52:41.917217 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:52:41.919621 systemd-logind[1486]: Removed session 14. Apr 17 23:52:46.943563 systemd[1]: Started sshd@12-10.244.23.222:22-4.175.71.9:51434.service - OpenSSH per-connection server daemon (4.175.71.9:51434). 
Apr 17 23:52:47.103211 sshd[6137]: Accepted publickey for core from 4.175.71.9 port 51434 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:47.107585 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:47.117874 systemd-logind[1486]: New session 15 of user core. Apr 17 23:52:47.123341 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:52:47.504649 sshd[6137]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:47.511827 systemd[1]: sshd@12-10.244.23.222:22-4.175.71.9:51434.service: Deactivated successfully. Apr 17 23:52:47.516682 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:52:47.518409 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:52:47.520762 systemd-logind[1486]: Removed session 15. Apr 17 23:52:52.542830 systemd[1]: Started sshd@13-10.244.23.222:22-4.175.71.9:51436.service - OpenSSH per-connection server daemon (4.175.71.9:51436). Apr 17 23:52:52.802882 sshd[6227]: Accepted publickey for core from 4.175.71.9 port 51436 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:52.808354 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:52.817975 systemd-logind[1486]: New session 16 of user core. Apr 17 23:52:52.827471 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:52:53.497532 sshd[6227]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:53.502026 systemd[1]: sshd@13-10.244.23.222:22-4.175.71.9:51436.service: Deactivated successfully. Apr 17 23:52:53.506285 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:52:53.508469 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:52:53.510770 systemd-logind[1486]: Removed session 16. 
Apr 17 23:52:53.527496 systemd[1]: Started sshd@14-10.244.23.222:22-4.175.71.9:51446.service - OpenSSH per-connection server daemon (4.175.71.9:51446). Apr 17 23:52:53.679729 sshd[6241]: Accepted publickey for core from 4.175.71.9 port 51446 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:53.681057 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:53.692039 systemd-logind[1486]: New session 17 of user core. Apr 17 23:52:53.696397 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:52:54.027852 sshd[6241]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:54.051818 systemd[1]: sshd@14-10.244.23.222:22-4.175.71.9:51446.service: Deactivated successfully. Apr 17 23:52:54.056492 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:52:54.060230 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:52:54.066585 systemd[1]: Started sshd@15-10.244.23.222:22-4.175.71.9:51456.service - OpenSSH per-connection server daemon (4.175.71.9:51456). Apr 17 23:52:54.071727 systemd-logind[1486]: Removed session 17. Apr 17 23:52:54.204254 sshd[6252]: Accepted publickey for core from 4.175.71.9 port 51456 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:54.206523 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:54.215283 systemd-logind[1486]: New session 18 of user core. Apr 17 23:52:54.222414 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:52:54.432823 sshd[6252]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:54.438620 systemd[1]: sshd@15-10.244.23.222:22-4.175.71.9:51456.service: Deactivated successfully. Apr 17 23:52:54.441973 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:52:54.443888 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. 
Apr 17 23:52:54.445935 systemd-logind[1486]: Removed session 18. Apr 17 23:52:59.468652 systemd[1]: Started sshd@16-10.244.23.222:22-4.175.71.9:60508.service - OpenSSH per-connection server daemon (4.175.71.9:60508). Apr 17 23:52:59.620178 sshd[6271]: Accepted publickey for core from 4.175.71.9 port 60508 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:52:59.622172 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:59.630516 systemd-logind[1486]: New session 19 of user core. Apr 17 23:52:59.636347 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:52:59.913033 sshd[6271]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:59.919207 systemd[1]: sshd@16-10.244.23.222:22-4.175.71.9:60508.service: Deactivated successfully. Apr 17 23:52:59.922675 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:52:59.925184 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:52:59.927119 systemd-logind[1486]: Removed session 19. Apr 17 23:53:01.126091 kubelet[2665]: I0417 23:53:01.125681 2665 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:53:04.946524 systemd[1]: Started sshd@17-10.244.23.222:22-4.175.71.9:60520.service - OpenSSH per-connection server daemon (4.175.71.9:60520). Apr 17 23:53:05.098183 sshd[6288]: Accepted publickey for core from 4.175.71.9 port 60520 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:05.100121 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:05.110568 systemd-logind[1486]: New session 20 of user core. Apr 17 23:53:05.116393 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:53:05.518782 sshd[6288]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:05.526409 systemd[1]: sshd@17-10.244.23.222:22-4.175.71.9:60520.service: Deactivated successfully. 
Apr 17 23:53:05.531878 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:53:05.534881 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:53:05.570783 systemd[1]: Started sshd@18-10.244.23.222:22-4.175.71.9:51784.service - OpenSSH per-connection server daemon (4.175.71.9:51784). Apr 17 23:53:05.579095 systemd-logind[1486]: Removed session 20. Apr 17 23:53:05.793488 sshd[6324]: Accepted publickey for core from 4.175.71.9 port 51784 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:05.796124 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:05.803295 systemd-logind[1486]: New session 21 of user core. Apr 17 23:53:05.809407 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:53:06.425068 sshd[6324]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:06.431454 systemd[1]: sshd@18-10.244.23.222:22-4.175.71.9:51784.service: Deactivated successfully. Apr 17 23:53:06.434556 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:53:06.436424 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:53:06.447824 systemd-logind[1486]: Removed session 21. Apr 17 23:53:06.450066 systemd[1]: Started sshd@19-10.244.23.222:22-4.175.71.9:51788.service - OpenSSH per-connection server daemon (4.175.71.9:51788). Apr 17 23:53:06.619037 sshd[6335]: Accepted publickey for core from 4.175.71.9 port 51788 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:06.621944 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:06.629291 systemd-logind[1486]: New session 22 of user core. Apr 17 23:53:06.635416 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 17 23:53:07.742952 sshd[6335]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:07.755127 systemd[1]: sshd@19-10.244.23.222:22-4.175.71.9:51788.service: Deactivated successfully. Apr 17 23:53:07.762763 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:53:07.782939 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:53:07.792430 systemd[1]: Started sshd@20-10.244.23.222:22-4.175.71.9:51794.service - OpenSSH per-connection server daemon (4.175.71.9:51794). Apr 17 23:53:07.799755 systemd-logind[1486]: Removed session 22. Apr 17 23:53:07.979080 sshd[6358]: Accepted publickey for core from 4.175.71.9 port 51794 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:07.981216 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:07.988225 systemd-logind[1486]: New session 23 of user core. Apr 17 23:53:07.996374 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:53:09.138062 sshd[6358]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:09.163536 systemd[1]: sshd@20-10.244.23.222:22-4.175.71.9:51794.service: Deactivated successfully. Apr 17 23:53:09.167730 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:53:09.171300 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:53:09.181019 systemd[1]: Started sshd@21-10.244.23.222:22-4.175.71.9:51804.service - OpenSSH per-connection server daemon (4.175.71.9:51804). Apr 17 23:53:09.182301 systemd-logind[1486]: Removed session 23. Apr 17 23:53:09.359636 sshd[6375]: Accepted publickey for core from 4.175.71.9 port 51804 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:09.365410 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:09.375213 systemd-logind[1486]: New session 24 of user core. 
Apr 17 23:53:09.384376 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 23:53:09.696674 sshd[6375]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:09.703248 systemd[1]: sshd@21-10.244.23.222:22-4.175.71.9:51804.service: Deactivated successfully. Apr 17 23:53:09.706721 systemd[1]: session-24.scope: Deactivated successfully. Apr 17 23:53:09.707821 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Apr 17 23:53:09.710379 systemd-logind[1486]: Removed session 24. Apr 17 23:53:14.733548 systemd[1]: Started sshd@22-10.244.23.222:22-4.175.71.9:51812.service - OpenSSH per-connection server daemon (4.175.71.9:51812). Apr 17 23:53:14.928258 sshd[6407]: Accepted publickey for core from 4.175.71.9 port 51812 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s Apr 17 23:53:14.931180 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:53:14.943208 systemd-logind[1486]: New session 25 of user core. Apr 17 23:53:14.953414 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 17 23:53:15.298860 sshd[6407]: pam_unix(sshd:session): session closed for user core Apr 17 23:53:15.303863 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Apr 17 23:53:15.305383 systemd[1]: sshd@22-10.244.23.222:22-4.175.71.9:51812.service: Deactivated successfully. Apr 17 23:53:15.308237 systemd[1]: session-25.scope: Deactivated successfully. Apr 17 23:53:15.309692 systemd-logind[1486]: Removed session 25. 
Apr 17 23:53:17.756776 kubelet[2665]: I0417 23:53:17.755876 2665 scope.go:122] "RemoveContainer" containerID="1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed"
Apr 17 23:53:17.923803 containerd[1503]: time="2026-04-17T23:53:17.915939654Z" level=info msg="RemoveContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\""
Apr 17 23:53:18.028024 containerd[1503]: time="2026-04-17T23:53:18.027789452Z" level=info msg="RemoveContainer for \"1deae1c544c1fd6071d28f07033acf4e984f06b2d77cc821a1b4c269477916ed\" returns successfully"
Apr 17 23:53:18.041794 kubelet[2665]: I0417 23:53:18.041725 2665 scope.go:122] "RemoveContainer" containerID="d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46"
Apr 17 23:53:18.044566 containerd[1503]: time="2026-04-17T23:53:18.044499312Z" level=info msg="RemoveContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\""
Apr 17 23:53:18.052248 containerd[1503]: time="2026-04-17T23:53:18.052185018Z" level=info msg="RemoveContainer for \"d7563adf41cee53cd8fede25933ff6fd7aa8a137cec3c4cf443138fb2241ad46\" returns successfully"
Apr 17 23:53:18.055316 containerd[1503]: time="2026-04-17T23:53:18.055012515Z" level=info msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\""
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.437 [WARNING][6455] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3f508cf3-d08e-41ad-9a03-91094ecb3e48", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835", Pod:"coredns-7d764666f9-49g4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48ff71350e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.441 [INFO][6455] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.441 [INFO][6455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" iface="eth0" netns=""
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.441 [INFO][6455] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.441 [INFO][6455] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.698 [INFO][6462] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.701 [INFO][6462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.702 [INFO][6462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.719 [WARNING][6462] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.719 [INFO][6462] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.722 [INFO][6462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:53:18.730404 containerd[1503]: 2026-04-17 23:53:18.727 [INFO][6455] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.739395 containerd[1503]: time="2026-04-17T23:53:18.736368016Z" level=info msg="TearDown network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" successfully"
Apr 17 23:53:18.739395 containerd[1503]: time="2026-04-17T23:53:18.736483765Z" level=info msg="StopPodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" returns successfully"
Apr 17 23:53:18.752036 containerd[1503]: time="2026-04-17T23:53:18.751923617Z" level=info msg="RemovePodSandbox for \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\""
Apr 17 23:53:18.755798 containerd[1503]: time="2026-04-17T23:53:18.755722127Z" level=info msg="Forcibly stopping sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\""
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.917 [WARNING][6477] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3f508cf3-d08e-41ad-9a03-91094ecb3e48", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mc367.gb1.brightbox.com", ContainerID:"a8c78e3ee65ccbec48403f038d20bebd271d968005c32233952b4d04b6d09835", Pod:"coredns-7d764666f9-49g4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48ff71350e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.917 [INFO][6477] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.917 [INFO][6477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" iface="eth0" netns=""
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.917 [INFO][6477] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.917 [INFO][6477] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.967 [INFO][6484] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.967 [INFO][6484] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.967 [INFO][6484] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.978 [WARNING][6484] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.978 [INFO][6484] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" HandleID="k8s-pod-network.daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98" Workload="srv--mc367.gb1.brightbox.com-k8s-coredns--7d764666f9--49g4d-eth0"
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.980 [INFO][6484] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:53:18.990402 containerd[1503]: 2026-04-17 23:53:18.985 [INFO][6477] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98"
Apr 17 23:53:18.990402 containerd[1503]: time="2026-04-17T23:53:18.989577574Z" level=info msg="TearDown network for sandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" successfully"
Apr 17 23:53:19.019941 containerd[1503]: time="2026-04-17T23:53:19.019836863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:53:19.020581 containerd[1503]: time="2026-04-17T23:53:19.020022559Z" level=info msg="RemovePodSandbox \"daad628be284f2233c55f382b043ff5ccdd56e3d46328c886695b3c5634c0f98\" returns successfully"
Apr 17 23:53:19.021542 containerd[1503]: time="2026-04-17T23:53:19.020970490Z" level=info msg="StopPodSandbox for \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\""
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.079 [WARNING][6499] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.079 [INFO][6499] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.079 [INFO][6499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" iface="eth0" netns=""
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.079 [INFO][6499] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.079 [INFO][6499] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.134 [INFO][6506] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.134 [INFO][6506] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.134 [INFO][6506] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.145 [WARNING][6506] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.146 [INFO][6506] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.150 [INFO][6506] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:53:19.157263 containerd[1503]: 2026-04-17 23:53:19.154 [INFO][6499] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.159909 containerd[1503]: time="2026-04-17T23:53:19.157981422Z" level=info msg="TearDown network for sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" successfully"
Apr 17 23:53:19.159909 containerd[1503]: time="2026-04-17T23:53:19.158047916Z" level=info msg="StopPodSandbox for \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" returns successfully"
Apr 17 23:53:19.160696 containerd[1503]: time="2026-04-17T23:53:19.160258860Z" level=info msg="RemovePodSandbox for \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\""
Apr 17 23:53:19.160696 containerd[1503]: time="2026-04-17T23:53:19.160357107Z" level=info msg="Forcibly stopping sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\""
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.236 [WARNING][6521] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" WorkloadEndpoint="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.237 [INFO][6521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.237 [INFO][6521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" iface="eth0" netns=""
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.237 [INFO][6521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.237 [INFO][6521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.279 [INFO][6528] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.280 [INFO][6528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.280 [INFO][6528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.291 [WARNING][6528] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.291 [INFO][6528] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" HandleID="k8s-pod-network.98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec" Workload="srv--mc367.gb1.brightbox.com-k8s-whisker--6fdcf6b9f6--dqfx4-eth0"
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.294 [INFO][6528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:53:19.300356 containerd[1503]: 2026-04-17 23:53:19.297 [INFO][6521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec"
Apr 17 23:53:19.300356 containerd[1503]: time="2026-04-17T23:53:19.300316792Z" level=info msg="TearDown network for sandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" successfully"
Apr 17 23:53:19.308202 containerd[1503]: time="2026-04-17T23:53:19.307856877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:53:19.308202 containerd[1503]: time="2026-04-17T23:53:19.307966330Z" level=info msg="RemovePodSandbox \"98d99af23a8e99cded46d951a8bd491dfe8bda2ceae0c9c511386090bc2ed7ec\" returns successfully"
Apr 17 23:53:20.199463 systemd[1]: run-containerd-runc-k8s.io-2b02d980a5af7bd902d16e45ee058edc209a6639e06b8d613f19f54c295bcd14-runc.nggUEP.mount: Deactivated successfully.
Apr 17 23:53:20.351317 systemd[1]: Started sshd@23-10.244.23.222:22-4.175.71.9:34960.service - OpenSSH per-connection server daemon (4.175.71.9:34960).
Apr 17 23:53:20.583509 sshd[6557]: Accepted publickey for core from 4.175.71.9 port 34960 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:53:20.587197 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:53:20.596798 systemd-logind[1486]: New session 26 of user core.
Apr 17 23:53:20.608473 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 17 23:53:21.439985 sshd[6557]: pam_unix(sshd:session): session closed for user core
Apr 17 23:53:21.448126 systemd[1]: sshd@23-10.244.23.222:22-4.175.71.9:34960.service: Deactivated successfully.
Apr 17 23:53:21.450841 systemd[1]: session-26.scope: Deactivated successfully.
Apr 17 23:53:21.452060 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit.
Apr 17 23:53:21.454271 systemd-logind[1486]: Removed session 26.
Apr 17 23:53:26.474510 systemd[1]: Started sshd@24-10.244.23.222:22-4.175.71.9:45642.service - OpenSSH per-connection server daemon (4.175.71.9:45642).
Apr 17 23:53:26.651969 sshd[6570]: Accepted publickey for core from 4.175.71.9 port 45642 ssh2: RSA SHA256:whbN8rz0V69lTASVYUI8hp7QVnV+OlGZN00Yaq8px5s
Apr 17 23:53:26.655202 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:53:26.663880 systemd-logind[1486]: New session 27 of user core.
Apr 17 23:53:26.670369 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 17 23:53:27.132154 sshd[6570]: pam_unix(sshd:session): session closed for user core
Apr 17 23:53:27.139168 systemd[1]: sshd@24-10.244.23.222:22-4.175.71.9:45642.service: Deactivated successfully.
Apr 17 23:53:27.144651 systemd[1]: session-27.scope: Deactivated successfully.
Apr 17 23:53:27.147441 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit.
Apr 17 23:53:27.149006 systemd-logind[1486]: Removed session 27.