Apr 28 02:47:28.035794 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 02:47:28.035830 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:47:28.035845 kernel: BIOS-provided physical RAM map:
Apr 28 02:47:28.035861 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 02:47:28.035872 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 02:47:28.035882 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 02:47:28.035894 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Apr 28 02:47:28.035905 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Apr 28 02:47:28.035916 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 02:47:28.035939 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 02:47:28.035949 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 02:47:28.035960 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 02:47:28.035975 kernel: NX (Execute Disable) protection: active
Apr 28 02:47:28.035986 kernel: APIC: Static calls initialized
Apr 28 02:47:28.035999 kernel: SMBIOS 2.8 present.
Apr 28 02:47:28.036011 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Apr 28 02:47:28.036022 kernel: Hypervisor detected: KVM
Apr 28 02:47:28.036038 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 02:47:28.036049 kernel: kvm-clock: using sched offset of 4668402596 cycles
Apr 28 02:47:28.036062 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 02:47:28.036073 kernel: tsc: Detected 2499.998 MHz processor
Apr 28 02:47:28.036085 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 02:47:28.036108 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 02:47:28.036120 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Apr 28 02:47:28.036132 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 02:47:28.036153 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 02:47:28.036174 kernel: Using GB pages for direct mapping
Apr 28 02:47:28.036186 kernel: ACPI: Early table checksum verification disabled
Apr 28 02:47:28.036198 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Apr 28 02:47:28.036210 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036222 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036234 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036246 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Apr 28 02:47:28.036258 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036269 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036286 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036299 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 02:47:28.036311 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Apr 28 02:47:28.036322 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Apr 28 02:47:28.036335 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Apr 28 02:47:28.036353 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Apr 28 02:47:28.036366 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Apr 28 02:47:28.036383 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Apr 28 02:47:28.036396 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Apr 28 02:47:28.036408 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 28 02:47:28.036420 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 28 02:47:28.036445 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Apr 28 02:47:28.036457 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Apr 28 02:47:28.036469 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Apr 28 02:47:28.036485 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Apr 28 02:47:28.036498 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Apr 28 02:47:28.036509 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Apr 28 02:47:28.036521 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Apr 28 02:47:28.036533 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Apr 28 02:47:28.036557 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Apr 28 02:47:28.036568 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Apr 28 02:47:28.036580 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Apr 28 02:47:28.036591 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Apr 28 02:47:28.036615 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Apr 28 02:47:28.036630 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Apr 28 02:47:28.036642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 28 02:47:28.036653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 28 02:47:28.036762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Apr 28 02:47:28.036777 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Apr 28 02:47:28.036802 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Apr 28 02:47:28.036815 kernel: Zone ranges:
Apr 28 02:47:28.036827 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 02:47:28.036840 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Apr 28 02:47:28.036858 kernel: Normal empty
Apr 28 02:47:28.036871 kernel: Movable zone start for each node
Apr 28 02:47:28.036884 kernel: Early memory node ranges
Apr 28 02:47:28.036896 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 02:47:28.036908 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Apr 28 02:47:28.036921 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Apr 28 02:47:28.036933 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 02:47:28.036946 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 02:47:28.036958 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Apr 28 02:47:28.036970 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 02:47:28.036988 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 02:47:28.037000 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 02:47:28.037013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 02:47:28.037025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 02:47:28.037037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 02:47:28.037050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 02:47:28.037062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 02:47:28.037074 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 02:47:28.037087 kernel: TSC deadline timer available
Apr 28 02:47:28.037105 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Apr 28 02:47:28.037117 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 02:47:28.037129 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 02:47:28.037142 kernel: Booting paravirtualized kernel on KVM
Apr 28 02:47:28.037170 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 02:47:28.037183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 28 02:47:28.037195 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Apr 28 02:47:28.037208 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Apr 28 02:47:28.037220 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 28 02:47:28.037238 kernel: kvm-guest: PV spinlocks enabled
Apr 28 02:47:28.037251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 02:47:28.037265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 02:47:28.037278 kernel: random: crng init done
Apr 28 02:47:28.037290 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 02:47:28.037303 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 28 02:47:28.037316 kernel: Fallback order for Node 0: 0
Apr 28 02:47:28.037328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Apr 28 02:47:28.037346 kernel: Policy zone: DMA32
Apr 28 02:47:28.037359 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 02:47:28.037371 kernel: software IO TLB: area num 16.
Apr 28 02:47:28.037384 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194764K reserved, 0K cma-reserved)
Apr 28 02:47:28.037397 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 28 02:47:28.037409 kernel: Kernel/User page tables isolation: enabled
Apr 28 02:47:28.037450 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 02:47:28.037464 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 02:47:28.037476 kernel: Dynamic Preempt: voluntary
Apr 28 02:47:28.037495 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 02:47:28.037509 kernel: rcu: RCU event tracing is enabled.
Apr 28 02:47:28.037522 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 28 02:47:28.037534 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 02:47:28.037547 kernel: Rude variant of Tasks RCU enabled.
Apr 28 02:47:28.037572 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 02:47:28.037590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 02:47:28.037603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 28 02:47:28.037639 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Apr 28 02:47:28.037653 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 02:47:28.037666 kernel: Console: colour VGA+ 80x25
Apr 28 02:47:28.037679 kernel: printk: console [tty0] enabled
Apr 28 02:47:28.037699 kernel: printk: console [ttyS0] enabled
Apr 28 02:47:28.037712 kernel: ACPI: Core revision 20230628
Apr 28 02:47:28.037725 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 02:47:28.037738 kernel: x2apic enabled
Apr 28 02:47:28.037760 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 02:47:28.037779 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 28 02:47:28.037792 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 28 02:47:28.037806 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 02:47:28.037819 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 02:47:28.037832 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 02:47:28.037845 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 02:47:28.037858 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 02:47:28.037871 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 02:47:28.037884 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 28 02:47:28.037897 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 28 02:47:28.037915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 28 02:47:28.037928 kernel: MDS: Mitigation: Clear CPU buffers
Apr 28 02:47:28.037941 kernel: MMIO Stale Data: Unknown: No mitigations
Apr 28 02:47:28.037954 kernel: SRBDS: Unknown: Dependent on hypervisor status
Apr 28 02:47:28.037967 kernel: active return thunk: its_return_thunk
Apr 28 02:47:28.037979 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 02:47:28.037993 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 02:47:28.038006 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 02:47:28.038019 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 02:47:28.038032 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 02:47:28.038045 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 28 02:47:28.038063 kernel: Freeing SMP alternatives memory: 32K
Apr 28 02:47:28.038076 kernel: pid_max: default: 32768 minimum: 301
Apr 28 02:47:28.038089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 02:47:28.038102 kernel: landlock: Up and running.
Apr 28 02:47:28.038115 kernel: SELinux: Initializing.
Apr 28 02:47:28.038128 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 28 02:47:28.038141 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 28 02:47:28.038166 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Apr 28 02:47:28.038179 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 28 02:47:28.038193 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 28 02:47:28.038212 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 28 02:47:28.038226 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Apr 28 02:47:28.038239 kernel: signal: max sigframe size: 1776
Apr 28 02:47:28.038252 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 02:47:28.038266 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 02:47:28.038279 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 02:47:28.038292 kernel: smp: Bringing up secondary CPUs ...
Apr 28 02:47:28.038305 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 02:47:28.038318 kernel: .... node #0, CPUs: #1
Apr 28 02:47:28.038336 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Apr 28 02:47:28.038349 kernel: smp: Brought up 1 node, 2 CPUs
Apr 28 02:47:28.038362 kernel: smpboot: Max logical packages: 16
Apr 28 02:47:28.038375 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 28 02:47:28.038389 kernel: devtmpfs: initialized
Apr 28 02:47:28.038402 kernel: x86/mm: Memory block size: 128MB
Apr 28 02:47:28.038415 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 02:47:28.038428 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 28 02:47:28.038441 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 02:47:28.038455 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 02:47:28.038473 kernel: audit: initializing netlink subsys (disabled)
Apr 28 02:47:28.038486 kernel: audit: type=2000 audit(1777344446.319:1): state=initialized audit_enabled=0 res=1
Apr 28 02:47:28.038500 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 02:47:28.038513 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 02:47:28.038526 kernel: cpuidle: using governor menu
Apr 28 02:47:28.038539 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 02:47:28.038552 kernel: dca service started, version 1.12.1
Apr 28 02:47:28.038565 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 02:47:28.038583 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 02:47:28.038597 kernel: PCI: Using configuration type 1 for base access
Apr 28 02:47:28.038629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 02:47:28.038645 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 02:47:28.038658 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 02:47:28.038672 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 02:47:28.038685 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 02:47:28.038698 kernel: ACPI: Added _OSI(Module Device)
Apr 28 02:47:28.038711 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 02:47:28.038730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 02:47:28.038744 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 02:47:28.038757 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 02:47:28.038770 kernel: ACPI: Interpreter enabled
Apr 28 02:47:28.038783 kernel: ACPI: PM: (supports S0 S5)
Apr 28 02:47:28.038808 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 02:47:28.038821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 02:47:28.038833 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 02:47:28.038846 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 02:47:28.038872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 02:47:28.039191 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 02:47:28.039387 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 28 02:47:28.039583 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 28 02:47:28.039604 kernel: PCI host bridge to bus 0000:00
Apr 28 02:47:28.039827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 02:47:28.039993 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 02:47:28.040180 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 02:47:28.040343 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 28 02:47:28.040504 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 02:47:28.043719 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Apr 28 02:47:28.043899 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 02:47:28.044129 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 02:47:28.044371 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Apr 28 02:47:28.044566 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Apr 28 02:47:28.046431 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Apr 28 02:47:28.046638 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Apr 28 02:47:28.046820 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 02:47:28.047046 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.047244 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Apr 28 02:47:28.047465 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.047679 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Apr 28 02:47:28.047888 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.048092 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Apr 28 02:47:28.048315 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.048497 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Apr 28 02:47:28.050108 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.051950 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Apr 28 02:47:28.052177 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.052366 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Apr 28 02:47:28.052602 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.053875 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Apr 28 02:47:28.054105 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 28 02:47:28.054306 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Apr 28 02:47:28.054542 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 28 02:47:28.055798 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 02:47:28.055997 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Apr 28 02:47:28.056219 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 28 02:47:28.056404 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Apr 28 02:47:28.057706 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 28 02:47:28.057905 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 02:47:28.058088 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Apr 28 02:47:28.058282 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Apr 28 02:47:28.058499 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 02:47:28.059721 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 02:47:28.059965 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 02:47:28.060172 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Apr 28 02:47:28.060353 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Apr 28 02:47:28.060576 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 02:47:28.061803 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 02:47:28.062031 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Apr 28 02:47:28.062264 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Apr 28 02:47:28.062458 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 28 02:47:28.062670 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 28 02:47:28.062850 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 28 02:47:28.063066 kernel: pci_bus 0000:02: extended config space not accessible
Apr 28 02:47:28.063291 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Apr 28 02:47:28.063484 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Apr 28 02:47:28.065747 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 28 02:47:28.065945 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 28 02:47:28.066175 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 28 02:47:28.066364 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Apr 28 02:47:28.066554 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 28 02:47:28.066790 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 28 02:47:28.066968 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 28 02:47:28.067230 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 28 02:47:28.067427 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 28 02:47:28.067607 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 28 02:47:28.068880 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 28 02:47:28.069060 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 28 02:47:28.069279 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 28 02:47:28.069468 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 28 02:47:28.069680 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 28 02:47:28.069894 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 28 02:47:28.070071 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 28 02:47:28.070273 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 28 02:47:28.070462 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 28 02:47:28.072641 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 28 02:47:28.072846 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 28 02:47:28.073034 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 28 02:47:28.073232 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 28 02:47:28.073430 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 28 02:47:28.073615 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 28 02:47:28.074838 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 28 02:47:28.075018 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 28 02:47:28.075038 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 02:47:28.075053 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 02:47:28.075066 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 02:47:28.075080 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 02:47:28.075094 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 02:47:28.075115 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 02:47:28.075129 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 02:47:28.075142 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 02:47:28.075167 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 02:47:28.075181 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 02:47:28.075194 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 02:47:28.075207 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 02:47:28.075220 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 02:47:28.075234 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 02:47:28.075253 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 02:47:28.075266 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 02:47:28.075280 kernel: iommu: Default domain type: Translated
Apr 28 02:47:28.075293 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 02:47:28.075306 kernel: PCI: Using ACPI for IRQ routing
Apr 28 02:47:28.075320 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 02:47:28.075333 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 02:47:28.075346 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Apr 28 02:47:28.075529 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 02:47:28.077751 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 02:47:28.077956 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 02:47:28.077977 kernel: vgaarb: loaded
Apr 28 02:47:28.077991 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 02:47:28.078013 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 02:47:28.078026 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 02:47:28.078040 kernel: pnp: PnP ACPI init
Apr 28 02:47:28.078264 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 02:47:28.078294 kernel: pnp: PnP ACPI: found 5 devices
Apr 28 02:47:28.078308 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 02:47:28.078322 kernel: NET: Registered PF_INET protocol family
Apr 28 02:47:28.078335 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 02:47:28.078349 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 28 02:47:28.078362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 02:47:28.078375 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 28 02:47:28.078388 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 28 02:47:28.078407 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 28 02:47:28.078420 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 28 02:47:28.078434 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 28 02:47:28.078466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 02:47:28.078479 kernel: NET: Registered PF_XDP protocol family
Apr 28 02:47:28.078686 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Apr 28 02:47:28.078876 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 28 02:47:28.079065 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 28 02:47:28.079287 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 28 02:47:28.079464 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 28 02:47:28.081669 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 28 02:47:28.081851 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 28 02:47:28.082030 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 28 02:47:28.082239 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 28 02:47:28.082426 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 28 02:47:28.082637 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 28 02:47:28.084830 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 28 02:47:28.085006 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 28 02:47:28.085200 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 28 02:47:28.085378 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 28 02:47:28.085565 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 28 02:47:28.085788 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 28 02:47:28.086012 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 28 02:47:28.086205 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 28 02:47:28.086384 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 28 02:47:28.086582 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 28 02:47:28.086794 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 28 02:47:28.086992 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 28 02:47:28.087180 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 28 02:47:28.087358 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 28 02:47:28.087538 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 28 02:47:28.089771 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 28 02:47:28.089968 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 28 02:47:28.090164 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 28 02:47:28.090357 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 28 02:47:28.090537 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 28 02:47:28.090800 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 28 02:47:28.091093 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 28 02:47:28.091318 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 28 02:47:28.091566 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 28 02:47:28.093810 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 28 02:47:28.094110 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 28 02:47:28.094319 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 28 02:47:28.094517 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 28 02:47:28.096735 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 28 02:47:28.096923 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 28 02:47:28.097101 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 28 02:47:28.097292 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 28 02:47:28.097469 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 28 02:47:28.097673 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 28 02:47:28.097859 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 28 02:47:28.098061 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 28 02:47:28.098256 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 28 02:47:28.098435 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 28 02:47:28.098625 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 28 02:47:28.098798 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 02:47:28.098982 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 02:47:28.099169 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 02:47:28.099335 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 28 02:47:28.099507 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 02:47:28.099707 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Apr 28 02:47:28.099901 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 28 02:47:28.100075 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Apr 28 02:47:28.100259 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 28 02:47:28.100456 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 28 02:47:28.100707 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Apr 28 02:47:28.100891 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 28 02:47:28.101060 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 28 02:47:28.101251 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Apr 28 02:47:28.101420 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 28 02:47:28.101592 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 28 02:47:28.101811 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Apr 28 02:47:28.101995 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 28 02:47:28.102206 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 28 02:47:28.102399 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Apr 28 02:47:28.102620 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 28 02:47:28.102867 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 28 02:47:28.103081 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Apr 28 02:47:28.103276 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 28 02:47:28.103454 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 28 02:47:28.103680 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Apr 28 02:47:28.103854 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 28 02:47:28.104046 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 28 02:47:28.104251 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Apr 28 02:47:28.104433 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 28 02:47:28.104607 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 28 02:47:28.104681 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 02:47:28.104697 kernel: PCI: CLS 0 bytes, default 64
Apr 28 02:47:28.104712 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 28 02:47:28.104726 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Apr 28 02:47:28.104740 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 28 02:47:28.104755 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Apr 28 02:47:28.104769 kernel: Initialise system trusted keyrings Apr 28 02:47:28.104783 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 28 02:47:28.104803 kernel: Key type asymmetric registered Apr 28 02:47:28.104817 kernel: Asymmetric key parser 'x509' registered Apr 28 02:47:28.104831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 02:47:28.104845 kernel: io scheduler mq-deadline registered Apr 28 02:47:28.104859 kernel: io scheduler kyber registered Apr 28 02:47:28.104873 kernel: io scheduler bfq registered Apr 28 02:47:28.105053 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 28 02:47:28.105248 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 28 02:47:28.105428 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.105674 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 28 02:47:28.105856 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 28 02:47:28.106036 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.106230 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 28 02:47:28.106410 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 28 02:47:28.106587 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.106802 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 28 02:47:28.106982 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Apr 28 02:47:28.107174 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.107355 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 28 02:47:28.107534 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 28 02:47:28.107758 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.107947 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 28 02:47:28.108127 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 28 02:47:28.108317 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.108495 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 28 02:47:28.108700 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 28 02:47:28.108879 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.109065 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 28 02:47:28.109256 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 28 02:47:28.109441 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 02:47:28.109463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 28 02:47:28.109478 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 02:47:28.109493 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 02:47:28.109507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 02:47:28.109528 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 02:47:28.109542 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Apr 28 02:47:28.109557 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 02:47:28.109570 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 02:47:28.109798 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 28 02:47:28.109821 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 02:47:28.109984 kernel: rtc_cmos 00:03: registered as rtc0 Apr 28 02:47:28.110162 kernel: rtc_cmos 00:03: setting system clock to 2026-04-28T02:47:27 UTC (1777344447) Apr 28 02:47:28.110340 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 28 02:47:28.110361 kernel: intel_pstate: CPU model not supported Apr 28 02:47:28.110375 kernel: NET: Registered PF_INET6 protocol family Apr 28 02:47:28.110389 kernel: Segment Routing with IPv6 Apr 28 02:47:28.110403 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 02:47:28.110417 kernel: NET: Registered PF_PACKET protocol family Apr 28 02:47:28.110430 kernel: Key type dns_resolver registered Apr 28 02:47:28.110444 kernel: IPI shorthand broadcast: enabled Apr 28 02:47:28.110458 kernel: sched_clock: Marking stable (1277024777, 238190866)->(1643917989, -128702346) Apr 28 02:47:28.110479 kernel: registered taskstats version 1 Apr 28 02:47:28.110493 kernel: Loading compiled-in X.509 certificates Apr 28 02:47:28.110507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 02:47:28.110520 kernel: Key type .fscrypt registered Apr 28 02:47:28.110533 kernel: Key type fscrypt-provisioning registered Apr 28 02:47:28.110547 kernel: ima: No TPM chip found, activating TPM-bypass! 
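The `rtc_cmos` entry above pairs an ISO timestamp with a Unix epoch value ("setting system clock to 2026-04-28T02:47:27 UTC (1777344447)"). As a quick standalone sanity check (this sketch is not part of the log), converting the epoch value back to UTC reproduces the ISO timestamp exactly:

```python
from datetime import datetime, timezone

# Epoch value printed by the kernel alongside the ISO timestamp.
epoch = 1777344447
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())  # 2026-04-28T02:47:27+00:00
```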
Apr 28 02:47:28.110561 kernel: ima: Allocated hash algorithm: sha1 Apr 28 02:47:28.110575 kernel: ima: No architecture policies found Apr 28 02:47:28.110589 kernel: clk: Disabling unused clocks Apr 28 02:47:28.110608 kernel: Freeing unused kernel image (initmem) memory: 42884K Apr 28 02:47:28.110648 kernel: Write protecting the kernel read-only data: 36864k Apr 28 02:47:28.110663 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 28 02:47:28.110676 kernel: Run /init as init process Apr 28 02:47:28.110690 kernel: with arguments: Apr 28 02:47:28.110704 kernel: /init Apr 28 02:47:28.110717 kernel: with environment: Apr 28 02:47:28.110730 kernel: HOME=/ Apr 28 02:47:28.110744 kernel: TERM=linux Apr 28 02:47:28.110767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 02:47:28.110785 systemd[1]: Detected virtualization kvm. Apr 28 02:47:28.110800 systemd[1]: Detected architecture x86-64. Apr 28 02:47:28.110815 systemd[1]: Running in initrd. Apr 28 02:47:28.110829 systemd[1]: No hostname configured, using default hostname. Apr 28 02:47:28.110844 systemd[1]: Hostname set to . Apr 28 02:47:28.110859 systemd[1]: Initializing machine ID from VM UUID. Apr 28 02:47:28.110879 systemd[1]: Queued start job for default target initrd.target. Apr 28 02:47:28.110895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 02:47:28.110910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 02:47:28.110925 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 28 02:47:28.110941 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 02:47:28.110956 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 02:47:28.110971 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 02:47:28.110993 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 02:47:28.111009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 02:47:28.111025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 02:47:28.111040 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 02:47:28.111055 systemd[1]: Reached target paths.target - Path Units. Apr 28 02:47:28.111075 systemd[1]: Reached target slices.target - Slice Units. Apr 28 02:47:28.111090 systemd[1]: Reached target swap.target - Swaps. Apr 28 02:47:28.111105 systemd[1]: Reached target timers.target - Timer Units. Apr 28 02:47:28.111125 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 02:47:28.111140 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 02:47:28.111167 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 02:47:28.111182 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 02:47:28.111197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 02:47:28.111212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 02:47:28.111227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 02:47:28.111242 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 28 02:47:28.111257 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 02:47:28.111278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 02:47:28.111293 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 02:47:28.111308 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 02:47:28.111323 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 02:47:28.111338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 02:47:28.111353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:47:28.111418 systemd-journald[203]: Collecting audit messages is disabled. Apr 28 02:47:28.111458 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 02:47:28.111474 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 02:47:28.111489 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 02:47:28.111511 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 02:47:28.111526 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 28 02:47:28.111541 kernel: Bridge firewalling registered Apr 28 02:47:28.111556 systemd-journald[203]: Journal started Apr 28 02:47:28.111587 systemd-journald[203]: Runtime Journal (/run/log/journal/5c69c1dac1ac4093b8ca580bedb588dd) is 4.7M, max 38.0M, 33.2M free. Apr 28 02:47:28.050883 systemd-modules-load[204]: Inserted module 'overlay' Apr 28 02:47:28.094223 systemd-modules-load[204]: Inserted module 'br_netfilter' Apr 28 02:47:28.165396 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 02:47:28.165406 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 28 02:47:28.166427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:47:28.171703 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 02:47:28.176815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:47:28.188806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 02:47:28.192804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 02:47:28.204839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 02:47:28.206760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 02:47:28.218688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 02:47:28.223727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 02:47:28.231818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 02:47:28.232852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:47:28.238838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 02:47:28.259740 dracut-cmdline[240]: dracut-dracut-053 Apr 28 02:47:28.265918 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 02:47:28.276514 systemd-resolved[238]: Positive Trust Anchors: Apr 28 02:47:28.276552 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 02:47:28.276596 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 02:47:28.281781 systemd-resolved[238]: Defaulting to hostname 'linux'. Apr 28 02:47:28.283474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 02:47:28.284772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 02:47:28.379722 kernel: SCSI subsystem initialized Apr 28 02:47:28.391662 kernel: Loading iSCSI transport class v2.0-870. Apr 28 02:47:28.405665 kernel: iscsi: registered transport (tcp) Apr 28 02:47:28.432991 kernel: iscsi: registered transport (qla4xxx) Apr 28 02:47:28.433037 kernel: QLogic iSCSI HBA Driver Apr 28 02:47:28.492020 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 02:47:28.497804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 02:47:28.539579 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 28 02:47:28.539669 kernel: device-mapper: uevent: version 1.0.3 Apr 28 02:47:28.542637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 02:47:28.590651 kernel: raid6: sse2x4 gen() 12420 MB/s Apr 28 02:47:28.608649 kernel: raid6: sse2x2 gen() 9001 MB/s Apr 28 02:47:28.627269 kernel: raid6: sse2x1 gen() 9899 MB/s Apr 28 02:47:28.627334 kernel: raid6: using algorithm sse2x4 gen() 12420 MB/s Apr 28 02:47:28.646388 kernel: raid6: .... xor() 7465 MB/s, rmw enabled Apr 28 02:47:28.646466 kernel: raid6: using ssse3x2 recovery algorithm Apr 28 02:47:28.673703 kernel: xor: automatically using best checksumming function avx Apr 28 02:47:28.870705 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 02:47:28.885416 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 02:47:28.893910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 02:47:28.913490 systemd-udevd[423]: Using default interface naming scheme 'v255'. Apr 28 02:47:28.920982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 02:47:28.931778 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 02:47:28.952227 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Apr 28 02:47:28.991537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 02:47:28.999812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 02:47:29.117018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 02:47:29.125878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 02:47:29.157195 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 02:47:29.159247 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 28 02:47:29.160541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 02:47:29.161505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 02:47:29.170390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 02:47:29.192006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 02:47:29.241819 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Apr 28 02:47:29.255012 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 02:47:29.260756 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 28 02:47:29.287221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 02:47:29.287279 kernel: GPT:17805311 != 125829119 Apr 28 02:47:29.287300 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 02:47:29.287318 kernel: GPT:17805311 != 125829119 Apr 28 02:47:29.287345 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 02:47:29.287364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:47:29.296073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 02:47:29.296262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 02:47:29.301371 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:47:29.302525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 02:47:29.313278 kernel: AVX version of gcm_enc/dec engaged. Apr 28 02:47:29.313310 kernel: AES CTR mode by8 optimization enabled Apr 28 02:47:29.302738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:47:29.311676 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:47:29.316067 kernel: libata version 3.00 loaded. 
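The virtio-blk and GPT warnings above are consistent with a disk image that was grown after creation. A standalone arithmetic sketch (my own working, not log output) shows why the kernel prints both "64.4 GB" and "60.0 GiB" for the same device, and what the `GPT:17805311 != 125829119` mismatch indicates:

```python
# "virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)"
sectors = 125829120
size_bytes = sectors * 512
print(size_bytes / 10**9)  # 64.42... decimal gigabytes -> printed as "64.4 GB"
print(size_bytes / 2**30)  # 60.0 binary gibibytes      -> printed as "60.0 GiB"

# "GPT:17805311 != 125829119": the backup GPT header records a last LBA of
# 17805311, but the device's true last LBA is 125829119. In other words the
# image was built for a smaller (~9.1 GB) disk and the block device was later
# enlarged, leaving the backup header no longer at the end of the disk --
# which is exactly what disk-uuid.service repairs later in this boot.
original_bytes = (17805311 + 1) * 512
print(original_bytes / 10**9)
```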
Apr 28 02:47:29.320990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 02:47:29.354587 kernel: ACPI: bus type USB registered Apr 28 02:47:29.354723 kernel: usbcore: registered new interface driver usbfs Apr 28 02:47:29.356631 kernel: usbcore: registered new interface driver hub Apr 28 02:47:29.365647 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 02:47:29.369661 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 02:47:29.369709 kernel: usbcore: registered new device driver usb Apr 28 02:47:29.371634 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 02:47:29.371928 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 02:47:29.403058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 28 02:47:29.495783 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Apr 28 02:47:29.495823 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469) Apr 28 02:47:29.495845 kernel: scsi host0: ahci Apr 28 02:47:29.496165 kernel: scsi host1: ahci Apr 28 02:47:29.496382 kernel: scsi host2: ahci Apr 28 02:47:29.496611 kernel: scsi host3: ahci Apr 28 02:47:29.496848 kernel: scsi host4: ahci Apr 28 02:47:29.497065 kernel: scsi host5: ahci Apr 28 02:47:29.497298 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Apr 28 02:47:29.497321 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Apr 28 02:47:29.497340 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Apr 28 02:47:29.497359 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Apr 28 02:47:29.497377 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Apr 28 02:47:29.497407 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Apr 28 02:47:29.496850 
systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 02:47:29.504997 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 02:47:29.512473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 02:47:29.518717 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 02:47:29.519548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 02:47:29.527825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 02:47:29.529615 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 02:47:29.540587 disk-uuid[565]: Primary Header is updated. Apr 28 02:47:29.540587 disk-uuid[565]: Secondary Entries is updated. Apr 28 02:47:29.540587 disk-uuid[565]: Secondary Header is updated. Apr 28 02:47:29.549640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:47:29.557128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:47:29.561887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 28 02:47:29.567634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:47:29.738080 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.738172 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.738893 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.743655 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.743705 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.745847 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 02:47:29.760820 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 28 02:47:29.761121 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Apr 28 02:47:29.764692 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 28 02:47:29.771581 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 28 02:47:29.771881 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Apr 28 02:47:29.772137 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Apr 28 02:47:29.785630 kernel: hub 1-0:1.0: USB hub found Apr 28 02:47:29.789629 kernel: hub 1-0:1.0: 4 ports detected Apr 28 02:47:29.792651 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 28 02:47:29.795632 kernel: hub 2-0:1.0: USB hub found Apr 28 02:47:29.797646 kernel: hub 2-0:1.0: 4 ports detected Apr 28 02:47:30.028681 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 28 02:47:30.170667 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 28 02:47:30.177125 kernel: usbcore: registered new interface driver usbhid Apr 28 02:47:30.177161 kernel: usbhid: USB HID core driver Apr 28 02:47:30.186938 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 28 02:47:30.186985 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Apr 28 02:47:30.565648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 02:47:30.568692 disk-uuid[567]: The operation has completed successfully. Apr 28 02:47:30.621064 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 02:47:30.621252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 02:47:30.648836 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 02:47:30.653241 sh[590]: Success Apr 28 02:47:30.669663 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Apr 28 02:47:30.738322 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 02:47:30.747745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 02:47:30.749750 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 28 02:47:30.781123 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 02:47:30.781183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:47:30.781205 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 02:47:30.785509 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 02:47:30.785578 kernel: BTRFS info (device dm-0): using free space tree Apr 28 02:47:30.796494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 02:47:30.797883 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 02:47:30.803854 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 02:47:30.805574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 02:47:30.824762 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:47:30.824809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 02:47:30.826724 kernel: BTRFS info (device vda6): using free space tree Apr 28 02:47:30.840649 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 02:47:30.854360 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 02:47:30.857229 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 02:47:30.866854 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 02:47:30.873796 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 02:47:30.947122 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 02:47:30.958860 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 28 02:47:30.999217 systemd-networkd[771]: lo: Link UP Apr 28 02:47:31.000262 systemd-networkd[771]: lo: Gained carrier Apr 28 02:47:31.003437 systemd-networkd[771]: Enumeration completed Apr 28 02:47:31.003559 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 02:47:31.004499 systemd[1]: Reached target network.target - Network. Apr 28 02:47:31.007244 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:47:31.007249 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 02:47:31.008725 systemd-networkd[771]: eth0: Link UP Apr 28 02:47:31.008731 systemd-networkd[771]: eth0: Gained carrier Apr 28 02:47:31.008749 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 02:47:31.037722 systemd-networkd[771]: eth0: DHCPv4 address 10.230.12.190/30, gateway 10.230.12.189 acquired from 10.230.12.189 Apr 28 02:47:31.044025 ignition[699]: Ignition 2.19.0 Apr 28 02:47:31.045059 ignition[699]: Stage: fetch-offline Apr 28 02:47:31.045177 ignition[699]: no configs at "/usr/lib/ignition/base.d" Apr 28 02:47:31.045203 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 28 02:47:31.045441 ignition[699]: parsed url from cmdline: "" Apr 28 02:47:31.045447 ignition[699]: no config URL provided Apr 28 02:47:31.045457 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 02:47:31.045484 ignition[699]: no config at "/usr/lib/ignition/user.ign" Apr 28 02:47:31.045498 ignition[699]: failed to fetch config: resource requires networking Apr 28 02:47:31.045822 ignition[699]: Ignition finished successfully Apr 28 02:47:31.052200 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 28 02:47:31.060844 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 28 02:47:31.094206 ignition[779]: Ignition 2.19.0
Apr 28 02:47:31.094226 ignition[779]: Stage: fetch
Apr 28 02:47:31.094518 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:31.094550 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:31.094735 ignition[779]: parsed url from cmdline: ""
Apr 28 02:47:31.094742 ignition[779]: no config URL provided
Apr 28 02:47:31.094752 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 02:47:31.094768 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Apr 28 02:47:31.095034 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Apr 28 02:47:31.095385 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 28 02:47:31.095456 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 28 02:47:31.110997 ignition[779]: GET result: OK
Apr 28 02:47:31.111886 ignition[779]: parsing config with SHA512: 539ec78aaa1ba37b6d29daecfac62559060b4a5f88b9f7dcdbe5ce858aac4fba666997600301cbb6ad872ca83495dad7897b4631c8963eb8a1715c5704403ada
Apr 28 02:47:31.119826 unknown[779]: fetched base config from "system"
Apr 28 02:47:31.119850 unknown[779]: fetched base config from "system"
Apr 28 02:47:31.120490 ignition[779]: fetch: fetch complete
Apr 28 02:47:31.119860 unknown[779]: fetched user config from "openstack"
Apr 28 02:47:31.120515 ignition[779]: fetch: fetch passed
Apr 28 02:47:31.122759 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 28 02:47:31.120612 ignition[779]: Ignition finished successfully
Apr 28 02:47:31.134867 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 02:47:31.155487 ignition[785]: Ignition 2.19.0
Apr 28 02:47:31.155509 ignition[785]: Stage: kargs
Apr 28 02:47:31.155842 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:31.155876 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:31.157702 ignition[785]: kargs: kargs passed
Apr 28 02:47:31.157793 ignition[785]: Ignition finished successfully
Apr 28 02:47:31.161071 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 02:47:31.168857 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 02:47:31.191083 ignition[791]: Ignition 2.19.0
Apr 28 02:47:31.191105 ignition[791]: Stage: disks
Apr 28 02:47:31.191366 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:31.194000 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 02:47:31.191395 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:31.195901 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 02:47:31.192562 ignition[791]: disks: disks passed
Apr 28 02:47:31.197794 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 02:47:31.192658 ignition[791]: Ignition finished successfully
Apr 28 02:47:31.199498 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 02:47:31.200876 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 02:47:31.202405 systemd[1]: Reached target basic.target - Basic System.
Apr 28 02:47:31.210820 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 02:47:31.231865 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 28 02:47:31.235354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 02:47:31.242734 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 02:47:31.361645 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 02:47:31.361128 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 02:47:31.363401 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 02:47:31.369733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 02:47:31.378783 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 02:47:31.381396 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 02:47:31.383842 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Apr 28 02:47:31.385699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 02:47:31.385741 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 02:47:31.389169 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 02:47:31.400364 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Apr 28 02:47:31.400409 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:47:31.400440 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:47:31.400477 kernel: BTRFS info (device vda6): using free space tree
Apr 28 02:47:31.404628 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 02:47:31.406839 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 02:47:31.410399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 02:47:31.504002 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 02:47:31.518902 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Apr 28 02:47:31.527778 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 02:47:31.533606 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 02:47:31.646659 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 02:47:31.654760 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 02:47:31.658819 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 02:47:31.669642 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:47:31.697749 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 02:47:31.704421 ignition[926]: INFO : Ignition 2.19.0
Apr 28 02:47:31.704421 ignition[926]: INFO : Stage: mount
Apr 28 02:47:31.706135 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:31.706135 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:31.708320 ignition[926]: INFO : mount: mount passed
Apr 28 02:47:31.708320 ignition[926]: INFO : Ignition finished successfully
Apr 28 02:47:31.709055 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 02:47:31.777689 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 02:47:32.360952 systemd-networkd[771]: eth0: Gained IPv6LL
Apr 28 02:47:33.867412 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:832f:24:19ff:fee6:cbe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:832f:24:19ff:fee6:cbe/64 assigned by NDisc.
Apr 28 02:47:33.867425 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Apr 28 02:47:38.575989 coreos-metadata[809]: Apr 28 02:47:38.575 WARN failed to locate config-drive, using the metadata service API instead
Apr 28 02:47:38.599701 coreos-metadata[809]: Apr 28 02:47:38.599 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 28 02:47:38.614185 coreos-metadata[809]: Apr 28 02:47:38.614 INFO Fetch successful
Apr 28 02:47:38.615077 coreos-metadata[809]: Apr 28 02:47:38.615 INFO wrote hostname srv-4dua5.gb1.brightbox.com to /sysroot/etc/hostname
Apr 28 02:47:38.616576 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Apr 28 02:47:38.616784 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Apr 28 02:47:38.631760 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 02:47:38.640447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 02:47:38.659639 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Apr 28 02:47:38.659696 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 02:47:38.661997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 02:47:38.662037 kernel: BTRFS info (device vda6): using free space tree
Apr 28 02:47:38.668640 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 02:47:38.670806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 02:47:38.702547 ignition[961]: INFO : Ignition 2.19.0
Apr 28 02:47:38.703716 ignition[961]: INFO : Stage: files
Apr 28 02:47:38.704398 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:38.704398 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:38.706209 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 02:47:38.707628 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 02:47:38.707628 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 02:47:38.711111 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 02:47:38.712160 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 02:47:38.713453 unknown[961]: wrote ssh authorized keys file for user: core
Apr 28 02:47:38.714476 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 02:47:38.715811 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 02:47:38.717125 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 02:47:38.878736 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 02:47:39.174950 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 02:47:39.176668 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 02:47:39.198452 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 28 02:47:39.575871 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 28 02:47:42.097851 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 02:47:42.097851 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 02:47:42.100667 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 02:47:42.110987 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 02:47:42.110987 ignition[961]: INFO : files: files passed
Apr 28 02:47:42.110987 ignition[961]: INFO : Ignition finished successfully
Apr 28 02:47:42.103197 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 02:47:42.113908 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 02:47:42.117829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 02:47:42.126996 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 02:47:42.127204 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 02:47:42.140101 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:47:42.140101 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:47:42.143575 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 02:47:42.145906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:47:42.147372 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 02:47:42.151821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 02:47:42.198037 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 02:47:42.198187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 02:47:42.200118 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 02:47:42.201854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 02:47:42.203453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 02:47:42.212930 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 02:47:42.231186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:47:42.237848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 02:47:42.261737 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:47:42.262800 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:47:42.264518 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 02:47:42.266206 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 02:47:42.266394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 02:47:42.268304 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 02:47:42.269297 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 02:47:42.270779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 02:47:42.272222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 02:47:42.273864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 02:47:42.275398 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 02:47:42.276990 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 02:47:42.278737 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 02:47:42.280208 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 02:47:42.281820 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 02:47:42.283174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 02:47:42.283394 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 02:47:42.285284 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:47:42.287023 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:47:42.288456 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 02:47:42.288666 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:47:42.290088 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 02:47:42.290251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 02:47:42.292227 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 02:47:42.292398 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 02:47:42.294391 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 02:47:42.294559 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 02:47:42.305936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 02:47:42.309860 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 02:47:42.310568 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 02:47:42.310814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:47:42.315054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 02:47:42.315236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 02:47:42.327990 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 02:47:42.329286 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 02:47:42.334374 ignition[1013]: INFO : Ignition 2.19.0
Apr 28 02:47:42.336636 ignition[1013]: INFO : Stage: umount
Apr 28 02:47:42.336636 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 02:47:42.336636 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 28 02:47:42.348804 ignition[1013]: INFO : umount: umount passed
Apr 28 02:47:42.348804 ignition[1013]: INFO : Ignition finished successfully
Apr 28 02:47:42.345018 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 02:47:42.345221 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 02:47:42.349182 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 02:47:42.350805 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 02:47:42.350879 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 02:47:42.352821 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 02:47:42.352907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 02:47:42.355447 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 28 02:47:42.355521 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 28 02:47:42.356280 systemd[1]: Stopped target network.target - Network.
Apr 28 02:47:42.357597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 02:47:42.357688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 02:47:42.359181 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 02:47:42.360534 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 02:47:42.363699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:47:42.364565 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 02:47:42.366062 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 02:47:42.367670 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 02:47:42.367767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 02:47:42.369155 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 02:47:42.369257 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 02:47:42.370506 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 02:47:42.370596 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 02:47:42.372054 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 02:47:42.372122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 02:47:42.374089 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 02:47:42.375879 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 02:47:42.377853 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 02:47:42.378011 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 02:47:42.380292 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 02:47:42.380416 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 02:47:42.380740 systemd-networkd[771]: eth0: DHCPv6 lease lost
Apr 28 02:47:42.385223 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 02:47:42.385411 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 02:47:42.389532 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 02:47:42.389836 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 02:47:42.393415 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 02:47:42.393735 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:47:42.401842 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 02:47:42.403095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 02:47:42.403199 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 02:47:42.407115 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 02:47:42.407207 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:47:42.408926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 02:47:42.408998 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:47:42.410331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 02:47:42.410400 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:47:42.412222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:47:42.432060 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 02:47:42.433177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:47:42.435074 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 02:47:42.435226 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 02:47:42.437432 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 02:47:42.437593 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:47:42.439063 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 02:47:42.439122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:47:42.440704 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 02:47:42.440794 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 02:47:42.443085 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 02:47:42.443153 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 02:47:42.444858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 02:47:42.444928 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 02:47:42.452873 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 02:47:42.454187 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 02:47:42.454262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:47:42.458136 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 28 02:47:42.458207 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 02:47:42.463139 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 02:47:42.463249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:47:42.464063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 02:47:42.464141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:47:42.465576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 02:47:42.465821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 02:47:42.469299 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 02:47:42.475872 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 02:47:42.489642 systemd[1]: Switching root.
Apr 28 02:47:42.530323 systemd-journald[203]: Journal stopped
Apr 28 02:47:44.026676 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Apr 28 02:47:44.026872 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 02:47:44.026925 kernel: SELinux: policy capability open_perms=1
Apr 28 02:47:44.026965 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 02:47:44.027002 kernel: SELinux: policy capability always_check_network=0
Apr 28 02:47:44.027030 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 02:47:44.027061 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 02:47:44.027087 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 02:47:44.027121 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 02:47:44.027142 kernel: audit: type=1403 audit(1777344462.679:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 02:47:44.027173 systemd[1]: Successfully loaded SELinux policy in 51.778ms.
Apr 28 02:47:44.027234 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.100ms.
Apr 28 02:47:44.027279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 02:47:44.027302 systemd[1]: Detected virtualization kvm.
Apr 28 02:47:44.027330 systemd[1]: Detected architecture x86-64.
Apr 28 02:47:44.027351 systemd[1]: Detected first boot.
Apr 28 02:47:44.027378 systemd[1]: Hostname set to .
Apr 28 02:47:44.027400 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 02:47:44.027421 zram_generator::config[1056]: No configuration found.
Apr 28 02:47:44.027462 systemd[1]: Populated /etc with preset unit settings.
Apr 28 02:47:44.027496 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 02:47:44.027524 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 02:47:44.027555 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 02:47:44.027588 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 02:47:44.029705 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 02:47:44.029742 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 02:47:44.029774 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 02:47:44.029806 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 02:47:44.029828 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 02:47:44.029863 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 02:47:44.029886 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 02:47:44.029906 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 02:47:44.029928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 02:47:44.029956 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 02:47:44.029978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 02:47:44.029998 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 02:47:44.030040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 02:47:44.030074 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 02:47:44.030110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 02:47:44.030146 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 02:47:44.030186 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 02:47:44.030210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 02:47:44.030230 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 02:47:44.030267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 02:47:44.030319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 02:47:44.030349 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 02:47:44.030383 systemd[1]: Reached target swap.target - Swaps.
Apr 28 02:47:44.030413 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 02:47:44.030443 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 02:47:44.030476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 02:47:44.030504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 02:47:44.030539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 02:47:44.030596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 02:47:44.030639 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 02:47:44.030665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 02:47:44.030704 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 02:47:44.030737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:44.030762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 02:47:44.030783 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 02:47:44.030822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 02:47:44.030852 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 02:47:44.030874 systemd[1]: Reached target machines.target - Containers.
Apr 28 02:47:44.030895 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 02:47:44.030916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:47:44.030936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 02:47:44.030957 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 02:47:44.030977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:47:44.031009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 02:47:44.031033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:47:44.031053 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 02:47:44.031101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:47:44.031123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 02:47:44.031156 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 02:47:44.031175 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 02:47:44.031194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 02:47:44.031237 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 02:47:44.031263 kernel: loop: module loaded
Apr 28 02:47:44.031289 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 02:47:44.031310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 02:47:44.031329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 02:47:44.031354 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 02:47:44.031375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 02:47:44.031401 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 28 02:47:44.031422 systemd[1]: Stopped verity-setup.service.
Apr 28 02:47:44.031441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:44.031473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 02:47:44.031493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 02:47:44.031517 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 02:47:44.031550 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 02:47:44.031584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 02:47:44.033725 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 02:47:44.033756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 02:47:44.033786 kernel: fuse: init (API version 7.39)
Apr 28 02:47:44.033808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 02:47:44.033828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 02:47:44.033849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:47:44.033870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:47:44.033905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:47:44.033958 systemd-journald[1142]: Collecting audit messages is disabled.
Apr 28 02:47:44.034022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:47:44.034044 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 02:47:44.034069 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 02:47:44.034100 systemd-journald[1142]: Journal started
Apr 28 02:47:44.034143 systemd-journald[1142]: Runtime Journal (/run/log/journal/5c69c1dac1ac4093b8ca580bedb588dd) is 4.7M, max 38.0M, 33.2M free.
Apr 28 02:47:43.586167 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 02:47:43.616165 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 02:47:43.616986 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 02:47:44.037752 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 02:47:44.039534 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:47:44.040856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:47:44.042461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 02:47:44.044132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 02:47:44.045684 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 02:47:44.062729 kernel: ACPI: bus type drm_connector registered
Apr 28 02:47:44.066699 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 02:47:44.066996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 02:47:44.070261 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 02:47:44.079734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 02:47:44.087738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 02:47:44.089778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 02:47:44.089833 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 02:47:44.094627 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 02:47:44.102336 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 02:47:44.110827 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 02:47:44.111811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:47:44.116754 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 02:47:44.131973 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 02:47:44.132887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 02:47:44.141774 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 02:47:44.142670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 02:47:44.144217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 02:47:44.152274 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 02:47:44.165006 systemd-journald[1142]: Time spent on flushing to /var/log/journal/5c69c1dac1ac4093b8ca580bedb588dd is 175.658ms for 1134 entries.
Apr 28 02:47:44.165006 systemd-journald[1142]: System Journal (/var/log/journal/5c69c1dac1ac4093b8ca580bedb588dd) is 8.0M, max 584.8M, 576.8M free.
Apr 28 02:47:44.378359 systemd-journald[1142]: Received client request to flush runtime journal.
Apr 28 02:47:44.378444 kernel: loop0: detected capacity change from 0 to 142488
Apr 28 02:47:44.378471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 02:47:44.169920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 02:47:44.176757 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 02:47:44.383720 kernel: loop1: detected capacity change from 0 to 8
Apr 28 02:47:44.178261 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 02:47:44.180274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 02:47:44.182088 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 02:47:44.183847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 02:47:44.198024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 02:47:44.207903 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 02:47:44.291370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 02:47:44.298026 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 02:47:44.342549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 02:47:44.361780 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Apr 28 02:47:44.361803 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Apr 28 02:47:44.388766 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 02:47:44.391412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 02:47:44.406820 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 02:47:44.417668 kernel: loop2: detected capacity change from 0 to 140768
Apr 28 02:47:44.475002 kernel: loop3: detected capacity change from 0 to 228704
Apr 28 02:47:44.486447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 02:47:44.500197 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 02:47:44.540792 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 02:47:44.553024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 02:47:44.558687 kernel: loop4: detected capacity change from 0 to 142488
Apr 28 02:47:44.560187 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 28 02:47:44.593646 kernel: loop5: detected capacity change from 0 to 8
Apr 28 02:47:44.607666 kernel: loop6: detected capacity change from 0 to 140768
Apr 28 02:47:44.624281 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Apr 28 02:47:44.624826 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Apr 28 02:47:44.633550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 02:47:44.654648 kernel: loop7: detected capacity change from 0 to 228704
Apr 28 02:47:44.686834 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Apr 28 02:47:44.687863 (sd-merge)[1216]: Merged extensions into '/usr'.
Apr 28 02:47:44.705891 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 02:47:44.705932 systemd[1]: Reloading...
Apr 28 02:47:44.876716 zram_generator::config[1247]: No configuration found.
Apr 28 02:47:45.075727 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 02:47:45.121255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:47:45.191434 systemd[1]: Reloading finished in 480 ms.
Apr 28 02:47:45.230572 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 02:47:45.232120 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 02:47:45.245867 systemd[1]: Starting ensure-sysext.service...
Apr 28 02:47:45.248258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 02:47:45.282062 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
Apr 28 02:47:45.282107 systemd[1]: Reloading...
Apr 28 02:47:45.285099 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 02:47:45.286471 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 02:47:45.289967 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 02:47:45.290522 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Apr 28 02:47:45.292794 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Apr 28 02:47:45.301119 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:47:45.301733 systemd-tmpfiles[1301]: Skipping /boot
Apr 28 02:47:45.328255 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 02:47:45.328273 systemd-tmpfiles[1301]: Skipping /boot
Apr 28 02:47:45.401646 zram_generator::config[1328]: No configuration found.
Apr 28 02:47:45.587028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:47:45.658551 systemd[1]: Reloading finished in 375 ms.
Apr 28 02:47:45.687943 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 02:47:45.692338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 02:47:45.709904 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 02:47:45.731869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 02:47:45.736926 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 02:47:45.747924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 02:47:45.753891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 02:47:45.762862 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 02:47:45.773722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.774011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:47:45.779979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:47:45.789958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 02:47:45.798026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 02:47:45.799055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:47:45.799241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.812964 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 02:47:45.814357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:47:45.814709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:47:45.823883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.824201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:47:45.831361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 02:47:45.833177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:47:45.833370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.835741 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 02:47:45.861202 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 02:47:45.870758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 02:47:45.878094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 02:47:45.878362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 02:47:45.880062 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 02:47:45.881514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 02:47:45.884764 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 02:47:45.887920 systemd[1]: Finished ensure-sysext.service.
Apr 28 02:47:45.899880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.900161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 02:47:45.906400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 02:47:45.908452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 02:47:45.908546 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 02:47:45.917900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 02:47:45.920745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 02:47:45.921430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 02:47:45.924392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 02:47:45.924684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 02:47:45.927336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 02:47:45.927392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 02:47:45.937324 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 02:47:45.938725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 02:47:45.940850 systemd-udevd[1392]: Using default interface naming scheme 'v255'.
Apr 28 02:47:45.958444 augenrules[1426]: No rules
Apr 28 02:47:45.961748 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 02:47:45.964507 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 02:47:45.997165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 02:47:46.008862 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 02:47:46.143126 systemd-resolved[1390]: Positive Trust Anchors:
Apr 28 02:47:46.143966 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 02:47:46.144014 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 02:47:46.156178 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 02:47:46.157380 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 02:47:46.161864 systemd-resolved[1390]: Using system hostname 'srv-4dua5.gb1.brightbox.com'.
Apr 28 02:47:46.170981 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 02:47:46.171939 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 02:47:46.192376 systemd-networkd[1442]: lo: Link UP
Apr 28 02:47:46.193225 systemd-networkd[1442]: lo: Gained carrier
Apr 28 02:47:46.195791 systemd-networkd[1442]: Enumeration completed
Apr 28 02:47:46.195935 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 02:47:46.196900 systemd[1]: Reached target network.target - Network.
Apr 28 02:47:46.207819 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 02:47:46.227479 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 28 02:47:46.268116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1443)
Apr 28 02:47:46.309218 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:47:46.309403 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 02:47:46.312594 systemd-networkd[1442]: eth0: Link UP
Apr 28 02:47:46.312759 systemd-networkd[1442]: eth0: Gained carrier
Apr 28 02:47:46.312899 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 02:47:46.324733 systemd-networkd[1442]: eth0: DHCPv4 address 10.230.12.190/30, gateway 10.230.12.189 acquired from 10.230.12.189
Apr 28 02:47:46.328193 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Apr 28 02:47:46.344232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 02:47:46.356869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 02:47:46.393219 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 02:47:46.409694 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 28 02:47:46.421953 kernel: ACPI: button: Power Button [PWRF]
Apr 28 02:47:46.443664 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 02:47:46.462695 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 02:47:46.472279 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 02:47:46.472591 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 02:47:46.484361 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 28 02:47:46.612994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 02:47:46.781519 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 02:47:46.814760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 02:47:46.822038 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 02:47:46.855709 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 02:47:46.889541 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 02:47:46.891569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 02:47:46.892490 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 02:47:46.893449 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 02:47:46.894525 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 02:47:46.895842 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 02:47:46.896811 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 02:47:46.897594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 02:47:46.898404 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 02:47:46.898463 systemd[1]: Reached target paths.target - Path Units.
Apr 28 02:47:46.899123 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 02:47:46.901681 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 02:47:46.905205 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 02:47:46.911311 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 02:47:46.914096 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 02:47:46.915678 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 02:47:46.916559 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 02:47:46.917297 systemd[1]: Reached target basic.target - Basic System.
Apr 28 02:47:46.918097 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 02:47:46.918146 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 02:47:46.927652 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 02:47:46.930868 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 28 02:47:46.933938 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 02:47:46.945851 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 02:47:46.950604 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 02:47:46.954861 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 02:47:46.955759 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 02:47:46.959866 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 02:47:46.966751 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 02:47:46.979050 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 02:47:46.985913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 02:47:46.995111 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 02:47:46.997916 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 02:47:46.998604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 02:47:47.002817 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 02:47:47.012796 jq[1484]: false
Apr 28 02:47:47.015803 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 02:47:47.032389 dbus-daemon[1483]: [system] SELinux support is enabled
Apr 28 02:47:47.036270 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 02:47:47.041836 dbus-daemon[1483]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1442 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 28 02:47:47.051398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 02:47:47.051793 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 02:47:47.061158 extend-filesystems[1485]: Found loop4
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found loop5
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found loop6
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found loop7
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda1
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda2
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda3
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found usr
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda4
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda6
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda7
Apr 28 02:47:47.064670 extend-filesystems[1485]: Found vda9
Apr 28 02:47:47.064670 extend-filesystems[1485]: Checking size of /dev/vda9
Apr 28 02:47:47.062289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 02:47:47.101201 jq[1495]: true
Apr 28 02:47:47.101531 extend-filesystems[1485]: Resized partition /dev/vda9
Apr 28 02:47:47.103144 update_engine[1494]: I20260428 02:47:47.085932 1494 main.cc:92] Flatcar Update Engine starting
Apr 28 02:47:47.103144 update_engine[1494]: I20260428 02:47:47.093354 1494 update_check_scheduler.cc:74] Next update check in 2m25s
Apr 28 02:47:47.076042 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 28 02:47:47.063672 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 02:47:47.103929 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024)
Apr 28 02:47:47.106981 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Apr 28 02:47:47.074078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 02:47:47.074192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 02:47:47.079771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 02:47:47.079833 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 02:47:47.088485 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 02:47:47.109172 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 02:47:47.122876 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 28 02:47:47.127907 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 02:47:47.166972 tar[1498]: linux-amd64/LICENSE
Apr 28 02:47:47.172869 tar[1498]: linux-amd64/helm
Apr 28 02:47:47.174804 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 02:47:47.184973 jq[1512]: true
Apr 28 02:47:47.207985 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 02:47:47.209305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 02:47:47.243870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1445)
Apr 28 02:47:47.317783 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 28 02:47:47.318345 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 02:47:47.319935 systemd-logind[1492]: New seat seat0.
Apr 28 02:47:47.321565 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 02:47:47.478357 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 02:47:47.479246 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 02:47:47.493014 systemd[1]: Starting sshkeys.service... Apr 28 02:47:47.521122 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 28 02:47:47.521335 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 28 02:47:47.524902 dbus-daemon[1483]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1514 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 28 02:47:47.527648 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 28 02:47:47.539873 systemd[1]: Starting polkit.service - Authorization Manager... Apr 28 02:47:47.558808 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 02:47:47.564328 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 28 02:47:47.579092 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 28 02:47:47.581299 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 02:47:47.581299 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 28 02:47:47.581299 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 28 02:47:47.586299 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Apr 28 02:47:47.587316 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 02:47:47.588704 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 28 02:47:47.637641 polkitd[1553]: Started polkitd version 121 Apr 28 02:47:47.663906 polkitd[1553]: Loading rules from directory /etc/polkit-1/rules.d Apr 28 02:47:47.664045 polkitd[1553]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 28 02:47:47.665732 polkitd[1553]: Finished loading, compiling and executing 2 rules Apr 28 02:47:47.671663 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 28 02:47:47.671977 systemd[1]: Started polkit.service - Authorization Manager. Apr 28 02:47:47.674813 polkitd[1553]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 28 02:47:47.722634 systemd-hostnamed[1514]: Hostname set to (static) Apr 28 02:47:47.750099 containerd[1523]: time="2026-04-28T02:47:47.749957192Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 02:47:47.833052 containerd[1523]: time="2026-04-28T02:47:47.832274037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.838904 containerd[1523]: time="2026-04-28T02:47:47.838851276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:47:47.839663 containerd[1523]: time="2026-04-28T02:47:47.839631648Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 02:47:47.839953 containerd[1523]: time="2026-04-28T02:47:47.839926434Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840329306Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840365290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840501890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840535274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840856531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840885024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840906667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.840923966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.841089587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 02:47:47.841972 containerd[1523]: time="2026-04-28T02:47:47.841508436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 28 02:47:47.845066 containerd[1523]: time="2026-04-28T02:47:47.845031665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 02:47:47.845519 containerd[1523]: time="2026-04-28T02:47:47.845491482Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 02:47:47.845817 containerd[1523]: time="2026-04-28T02:47:47.845788059Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 02:47:47.846380 containerd[1523]: time="2026-04-28T02:47:47.846353364Z" level=info msg="metadata content store policy set" policy=shared Apr 28 02:47:47.852983 containerd[1523]: time="2026-04-28T02:47:47.852947175Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 02:47:47.853178 containerd[1523]: time="2026-04-28T02:47:47.853149797Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 02:47:47.853682 containerd[1523]: time="2026-04-28T02:47:47.853653081Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 02:47:47.853790 containerd[1523]: time="2026-04-28T02:47:47.853766208Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 02:47:47.853913 containerd[1523]: time="2026-04-28T02:47:47.853888470Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 02:47:47.854500 containerd[1523]: time="2026-04-28T02:47:47.854207805Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 28 02:47:47.856564 containerd[1523]: time="2026-04-28T02:47:47.856509447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857198778Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857263372Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857307048Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857342086Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857388146Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857481424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857534119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857570080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857598508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857668044Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857697664Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857749565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857785193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.858975 containerd[1523]: time="2026-04-28T02:47:47.857810548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857839120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857871676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857903448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857930759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857957255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.857980138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858010324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858056501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858107411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858137665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858174790Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858252418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858283445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.859513 containerd[1523]: time="2026-04-28T02:47:47.858307649Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858393092Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858439593Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858468353Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858493578Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858516451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858545915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 02:47:47.860018 containerd[1523]: time="2026-04-28T02:47:47.858585491Z" level=info msg="NRI interface is disabled by configuration." Apr 28 02:47:47.868635 containerd[1523]: time="2026-04-28T02:47:47.865734981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 28 02:47:47.868734 containerd[1523]: time="2026-04-28T02:47:47.866216093Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 02:47:47.868734 containerd[1523]: time="2026-04-28T02:47:47.866320780Z" level=info msg="Connect containerd service" Apr 28 02:47:47.868734 containerd[1523]: time="2026-04-28T02:47:47.866397909Z" level=info msg="using legacy CRI server" Apr 28 02:47:47.868734 containerd[1523]: time="2026-04-28T02:47:47.866421993Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 02:47:47.868734 containerd[1523]: time="2026-04-28T02:47:47.866652217Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 02:47:47.871656 containerd[1523]: time="2026-04-28T02:47:47.871590599Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 02:47:47.872117 containerd[1523]: time="2026-04-28T02:47:47.872051607Z" level=info msg="Start subscribing containerd event" Apr 28 02:47:47.872178 containerd[1523]: time="2026-04-28T02:47:47.872139074Z" level=info msg="Start recovering state" Apr 28 02:47:47.872286 containerd[1523]: time="2026-04-28T02:47:47.872261277Z" level=info msg="Start event monitor" Apr 28 02:47:47.872340 containerd[1523]: time="2026-04-28T02:47:47.872293259Z" level=info msg="Start 
snapshots syncer" Apr 28 02:47:47.872340 containerd[1523]: time="2026-04-28T02:47:47.872321024Z" level=info msg="Start cni network conf syncer for default" Apr 28 02:47:47.872484 containerd[1523]: time="2026-04-28T02:47:47.872349024Z" level=info msg="Start streaming server" Apr 28 02:47:47.873169 containerd[1523]: time="2026-04-28T02:47:47.873140579Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 02:47:47.873888 containerd[1523]: time="2026-04-28T02:47:47.873861553Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 02:47:47.874228 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 02:47:47.876241 containerd[1523]: time="2026-04-28T02:47:47.876072324Z" level=info msg="containerd successfully booted in 0.131834s" Apr 28 02:47:48.009898 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 02:47:48.044962 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 02:47:48.058097 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 02:47:48.069116 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 02:47:48.069409 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 02:47:48.079452 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 02:47:48.095412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 02:47:48.105511 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 02:47:48.115896 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 02:47:48.117124 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 02:47:48.239205 tar[1498]: linux-amd64/README.md Apr 28 02:47:48.264838 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 28 02:47:48.297469 systemd-networkd[1442]: eth0: Gained IPv6LL Apr 28 02:47:48.298799 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Apr 28 02:47:48.302399 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 02:47:48.305743 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 02:47:48.314094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:47:48.331368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 02:47:48.359267 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 02:47:49.326731 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Apr 28 02:47:49.328885 systemd-networkd[1442]: eth0: Ignoring DHCPv6 address 2a02:1348:179:832f:24:19ff:fee6:cbe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:832f:24:19ff:fee6:cbe/64 assigned by NDisc. Apr 28 02:47:49.328897 systemd-networkd[1442]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 28 02:47:49.453754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:47:49.460489 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:47:50.196603 kubelet[1608]: E0428 02:47:50.196443 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:47:50.199734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:47:50.200071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:47:50.201723 systemd[1]: kubelet.service: Consumed 1.135s CPU time. Apr 28 02:47:51.122279 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Apr 28 02:47:51.818413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 02:47:51.844366 systemd[1]: Started sshd@0-10.230.12.190:22-4.175.71.9:41140.service - OpenSSH per-connection server daemon (4.175.71.9:41140). Apr 28 02:47:51.987604 sshd[1618]: Accepted publickey for core from 4.175.71.9 port 41140 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:47:51.991091 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:47:52.009374 systemd-logind[1492]: New session 1 of user core. Apr 28 02:47:52.012516 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 02:47:52.019119 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 02:47:52.060477 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 02:47:52.072770 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 28 02:47:52.089993 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 02:47:52.242268 systemd[1622]: Queued start job for default target default.target. Apr 28 02:47:52.252430 systemd[1622]: Created slice app.slice - User Application Slice. Apr 28 02:47:52.252656 systemd[1622]: Reached target paths.target - Paths. Apr 28 02:47:52.252825 systemd[1622]: Reached target timers.target - Timers. Apr 28 02:47:52.255176 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 02:47:52.279355 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 02:47:52.280506 systemd[1622]: Reached target sockets.target - Sockets. Apr 28 02:47:52.280718 systemd[1622]: Reached target basic.target - Basic System. Apr 28 02:47:52.280802 systemd[1622]: Reached target default.target - Main User Target. Apr 28 02:47:52.280885 systemd[1622]: Startup finished in 180ms. Apr 28 02:47:52.280897 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 02:47:52.295941 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 02:47:52.429190 systemd[1]: Started sshd@1-10.230.12.190:22-4.175.71.9:41152.service - OpenSSH per-connection server daemon (4.175.71.9:41152). Apr 28 02:47:52.569123 sshd[1633]: Accepted publickey for core from 4.175.71.9 port 41152 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:47:52.570051 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:47:52.577165 systemd-logind[1492]: New session 2 of user core. Apr 28 02:47:52.589011 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 02:47:52.696782 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 28 02:47:52.701442 systemd[1]: sshd@1-10.230.12.190:22-4.175.71.9:41152.service: Deactivated successfully. Apr 28 02:47:52.704323 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 28 02:47:52.706758 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Apr 28 02:47:52.708577 systemd-logind[1492]: Removed session 2. Apr 28 02:47:52.729405 systemd[1]: Started sshd@2-10.230.12.190:22-4.175.71.9:41168.service - OpenSSH per-connection server daemon (4.175.71.9:41168). Apr 28 02:47:52.871992 sshd[1640]: Accepted publickey for core from 4.175.71.9 port 41168 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:47:52.879071 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:47:52.887682 systemd-logind[1492]: New session 3 of user core. Apr 28 02:47:52.893897 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 02:47:52.999910 sshd[1640]: pam_unix(sshd:session): session closed for user core Apr 28 02:47:53.005108 systemd[1]: sshd@2-10.230.12.190:22-4.175.71.9:41168.service: Deactivated successfully. Apr 28 02:47:53.007337 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 02:47:53.008417 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Apr 28 02:47:53.010000 systemd-logind[1492]: Removed session 3. Apr 28 02:47:53.172846 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 28 02:47:53.181998 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 28 02:47:53.182827 systemd-logind[1492]: New session 4 of user core. Apr 28 02:47:53.191991 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 02:47:53.203923 systemd-logind[1492]: New session 5 of user core. Apr 28 02:47:53.210890 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 28 02:47:54.540222 coreos-metadata[1482]: Apr 28 02:47:54.540 WARN failed to locate config-drive, using the metadata service API instead Apr 28 02:47:54.566044 coreos-metadata[1482]: Apr 28 02:47:54.565 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 28 02:47:54.573206 coreos-metadata[1482]: Apr 28 02:47:54.573 INFO Fetch failed with 404: resource not found Apr 28 02:47:54.573206 coreos-metadata[1482]: Apr 28 02:47:54.573 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 28 02:47:54.573687 coreos-metadata[1482]: Apr 28 02:47:54.573 INFO Fetch successful Apr 28 02:47:54.573826 coreos-metadata[1482]: Apr 28 02:47:54.573 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 28 02:47:54.586872 coreos-metadata[1482]: Apr 28 02:47:54.586 INFO Fetch successful Apr 28 02:47:54.586872 coreos-metadata[1482]: Apr 28 02:47:54.586 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 28 02:47:54.601386 coreos-metadata[1482]: Apr 28 02:47:54.601 INFO Fetch successful Apr 28 02:47:54.601386 coreos-metadata[1482]: Apr 28 02:47:54.601 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 28 02:47:54.616935 coreos-metadata[1482]: Apr 28 02:47:54.616 INFO Fetch successful Apr 28 02:47:54.616935 coreos-metadata[1482]: Apr 28 02:47:54.616 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 28 02:47:54.634316 coreos-metadata[1482]: Apr 28 02:47:54.634 INFO Fetch successful Apr 28 02:47:54.666143 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 28 02:47:54.667714 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 28 02:47:54.743229 coreos-metadata[1554]: Apr 28 02:47:54.743 WARN failed to locate config-drive, using the metadata service API instead Apr 28 02:47:54.764675 coreos-metadata[1554]: Apr 28 02:47:54.764 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 28 02:47:54.790995 coreos-metadata[1554]: Apr 28 02:47:54.790 INFO Fetch successful Apr 28 02:47:54.791154 coreos-metadata[1554]: Apr 28 02:47:54.791 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 28 02:47:54.818781 coreos-metadata[1554]: Apr 28 02:47:54.818 INFO Fetch successful Apr 28 02:47:54.826561 unknown[1554]: wrote ssh authorized keys file for user: core Apr 28 02:47:54.850624 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys" Apr 28 02:47:54.851427 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 28 02:47:54.854497 systemd[1]: Finished sshkeys.service. Apr 28 02:47:54.857939 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 02:47:54.858115 systemd[1]: Startup finished in 1.458s (kernel) + 14.922s (initrd) + 12.228s (userspace) = 28.609s. Apr 28 02:48:00.376558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 02:48:00.387961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 02:48:00.565484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 02:48:00.582212 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 02:48:00.710889 kubelet[1693]: E0428 02:48:00.710677 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 02:48:00.715112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 02:48:00.715405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 02:48:03.042046 systemd[1]: Started sshd@3-10.230.12.190:22-4.175.71.9:54214.service - OpenSSH per-connection server daemon (4.175.71.9:54214). Apr 28 02:48:03.169103 sshd[1701]: Accepted publickey for core from 4.175.71.9 port 54214 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:48:03.170101 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:48:03.176456 systemd-logind[1492]: New session 6 of user core. Apr 28 02:48:03.184865 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 02:48:03.291741 sshd[1701]: pam_unix(sshd:session): session closed for user core Apr 28 02:48:03.296833 systemd[1]: sshd@3-10.230.12.190:22-4.175.71.9:54214.service: Deactivated successfully. Apr 28 02:48:03.299038 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 02:48:03.299874 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. Apr 28 02:48:03.301510 systemd-logind[1492]: Removed session 6. Apr 28 02:48:03.326005 systemd[1]: Started sshd@4-10.230.12.190:22-4.175.71.9:54224.service - OpenSSH per-connection server daemon (4.175.71.9:54224). 
Apr 28 02:48:03.458847 sshd[1708]: Accepted publickey for core from 4.175.71.9 port 54224 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU
Apr 28 02:48:03.461016 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:48:03.469108 systemd-logind[1492]: New session 7 of user core.
Apr 28 02:48:03.474923 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 02:48:03.572982 sshd[1708]: pam_unix(sshd:session): session closed for user core
Apr 28 02:48:03.578669 systemd[1]: sshd@4-10.230.12.190:22-4.175.71.9:54224.service: Deactivated successfully.
Apr 28 02:48:03.581191 systemd[1]: session-7.scope: Deactivated successfully.
Apr 28 02:48:03.582337 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Apr 28 02:48:03.583738 systemd-logind[1492]: Removed session 7.
Apr 28 02:48:03.609030 systemd[1]: Started sshd@5-10.230.12.190:22-4.175.71.9:54240.service - OpenSSH per-connection server daemon (4.175.71.9:54240).
Apr 28 02:48:03.733304 sshd[1715]: Accepted publickey for core from 4.175.71.9 port 54240 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU
Apr 28 02:48:03.735366 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:48:03.741652 systemd-logind[1492]: New session 8 of user core.
Apr 28 02:48:03.753982 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 28 02:48:03.856866 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 28 02:48:03.862038 systemd[1]: sshd@5-10.230.12.190:22-4.175.71.9:54240.service: Deactivated successfully.
Apr 28 02:48:03.864306 systemd[1]: session-8.scope: Deactivated successfully.
Apr 28 02:48:03.865386 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Apr 28 02:48:03.866761 systemd-logind[1492]: Removed session 8.
Apr 28 02:48:03.892051 systemd[1]: Started sshd@6-10.230.12.190:22-4.175.71.9:54252.service - OpenSSH per-connection server daemon (4.175.71.9:54252).
Apr 28 02:48:04.020083 sshd[1722]: Accepted publickey for core from 4.175.71.9 port 54252 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU
Apr 28 02:48:04.021011 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:48:04.027906 systemd-logind[1492]: New session 9 of user core.
Apr 28 02:48:04.037876 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 28 02:48:04.146238 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 02:48:04.146754 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 02:48:04.164555 sudo[1725]: pam_unix(sudo:session): session closed for user root
Apr 28 02:48:04.182206 sshd[1722]: pam_unix(sshd:session): session closed for user core
Apr 28 02:48:04.188254 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Apr 28 02:48:04.188931 systemd[1]: sshd@6-10.230.12.190:22-4.175.71.9:54252.service: Deactivated successfully.
Apr 28 02:48:04.191527 systemd[1]: session-9.scope: Deactivated successfully.
Apr 28 02:48:04.192744 systemd-logind[1492]: Removed session 9.
Apr 28 02:48:04.215055 systemd[1]: Started sshd@7-10.230.12.190:22-4.175.71.9:54254.service - OpenSSH per-connection server daemon (4.175.71.9:54254).
Apr 28 02:48:04.344710 sshd[1730]: Accepted publickey for core from 4.175.71.9 port 54254 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU
Apr 28 02:48:04.346832 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:48:04.353866 systemd-logind[1492]: New session 10 of user core.
Apr 28 02:48:04.362872 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 28 02:48:04.454160 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 02:48:04.454693 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 02:48:04.460917 sudo[1734]: pam_unix(sudo:session): session closed for user root
Apr 28 02:48:04.469513 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 28 02:48:04.470517 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 02:48:04.506982 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 28 02:48:04.509188 auditctl[1737]: No rules
Apr 28 02:48:04.510026 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 02:48:04.510335 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 28 02:48:04.514369 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 02:48:04.564199 augenrules[1755]: No rules
Apr 28 02:48:04.565887 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 02:48:04.567512 sudo[1733]: pam_unix(sudo:session): session closed for user root
Apr 28 02:48:04.585021 sshd[1730]: pam_unix(sshd:session): session closed for user core
Apr 28 02:48:04.589893 systemd[1]: sshd@7-10.230.12.190:22-4.175.71.9:54254.service: Deactivated successfully.
Apr 28 02:48:04.592309 systemd[1]: session-10.scope: Deactivated successfully.
Apr 28 02:48:04.593359 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Apr 28 02:48:04.594945 systemd-logind[1492]: Removed session 10.
Apr 28 02:48:04.624022 systemd[1]: Started sshd@8-10.230.12.190:22-4.175.71.9:54268.service - OpenSSH per-connection server daemon (4.175.71.9:54268).
Apr 28 02:48:04.747836 sshd[1763]: Accepted publickey for core from 4.175.71.9 port 54268 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU
Apr 28 02:48:04.750085 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 02:48:04.757071 systemd-logind[1492]: New session 11 of user core.
Apr 28 02:48:04.770862 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 28 02:48:04.861776 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 02:48:04.862329 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 02:48:05.357955 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 02:48:05.359115 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 02:48:05.826205 dockerd[1781]: time="2026-04-28T02:48:05.825972820Z" level=info msg="Starting up"
Apr 28 02:48:05.970894 dockerd[1781]: time="2026-04-28T02:48:05.970563345Z" level=info msg="Loading containers: start."
Apr 28 02:48:06.138692 kernel: Initializing XFRM netlink socket
Apr 28 02:48:06.178382 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Apr 28 02:48:06.244981 systemd-networkd[1442]: docker0: Link UP
Apr 28 02:48:06.278669 dockerd[1781]: time="2026-04-28T02:48:06.277631839Z" level=info msg="Loading containers: done."
Apr 28 02:48:06.295383 dockerd[1781]: time="2026-04-28T02:48:06.295322369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 02:48:06.295558 dockerd[1781]: time="2026-04-28T02:48:06.295449859Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 28 02:48:06.295677 dockerd[1781]: time="2026-04-28T02:48:06.295623496Z" level=info msg="Daemon has completed initialization"
Apr 28 02:48:06.335207 dockerd[1781]: time="2026-04-28T02:48:06.334210106Z" level=info msg="API listen on /run/docker.sock"
Apr 28 02:48:06.334569 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 02:48:06.465693 systemd-timesyncd[1418]: Contacted time server [2a01:7e00::f03c:94ff:fee2:c52b]:123 (2.flatcar.pool.ntp.org).
Apr 28 02:48:06.465795 systemd-timesyncd[1418]: Initial clock synchronization to Tue 2026-04-28 02:48:06.809975 UTC.
Apr 28 02:48:07.040542 containerd[1523]: time="2026-04-28T02:48:07.039868731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 28 02:48:07.896345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705784992.mount: Deactivated successfully.
Apr 28 02:48:10.284675 containerd[1523]: time="2026-04-28T02:48:10.284357805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:10.286167 containerd[1523]: time="2026-04-28T02:48:10.286109684Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193997"
Apr 28 02:48:10.287257 containerd[1523]: time="2026-04-28T02:48:10.286704079Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:10.291086 containerd[1523]: time="2026-04-28T02:48:10.291020282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:10.293250 containerd[1523]: time="2026-04-28T02:48:10.292812932Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 3.25283144s"
Apr 28 02:48:10.293250 containerd[1523]: time="2026-04-28T02:48:10.292897751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 28 02:48:10.295991 containerd[1523]: time="2026-04-28T02:48:10.295942819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 28 02:48:10.876961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 28 02:48:10.888995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:11.098964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:11.103637 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 02:48:11.173152 kubelet[1989]: E0428 02:48:11.172792 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 02:48:11.176173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 02:48:11.176489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 02:48:12.727076 containerd[1523]: time="2026-04-28T02:48:12.726135544Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171455"
Apr 28 02:48:12.727076 containerd[1523]: time="2026-04-28T02:48:12.726240259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:12.728946 containerd[1523]: time="2026-04-28T02:48:12.728088058Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:12.734183 containerd[1523]: time="2026-04-28T02:48:12.734134955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:12.736188 containerd[1523]: time="2026-04-28T02:48:12.736140455Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 2.440142285s"
Apr 28 02:48:12.736279 containerd[1523]: time="2026-04-28T02:48:12.736192176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 28 02:48:12.737591 containerd[1523]: time="2026-04-28T02:48:12.737555019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 28 02:48:14.637108 containerd[1523]: time="2026-04-28T02:48:14.636858217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:14.644086 containerd[1523]: time="2026-04-28T02:48:14.639447446Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289764"
Apr 28 02:48:14.644086 containerd[1523]: time="2026-04-28T02:48:14.640910751Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:14.646682 containerd[1523]: time="2026-04-28T02:48:14.646515505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:14.649723 containerd[1523]: time="2026-04-28T02:48:14.648441183Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.910842353s"
Apr 28 02:48:14.649723 containerd[1523]: time="2026-04-28T02:48:14.648558945Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 28 02:48:14.649723 containerd[1523]: time="2026-04-28T02:48:14.649483332Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 28 02:48:16.219244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890115292.mount: Deactivated successfully.
Apr 28 02:48:16.966404 containerd[1523]: time="2026-04-28T02:48:16.966274535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:16.968856 containerd[1523]: time="2026-04-28T02:48:16.968789380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010719"
Apr 28 02:48:16.969951 containerd[1523]: time="2026-04-28T02:48:16.969868655Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:16.973361 containerd[1523]: time="2026-04-28T02:48:16.973295766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:16.979494 containerd[1523]: time="2026-04-28T02:48:16.976710350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 2.327166335s"
Apr 28 02:48:16.979494 containerd[1523]: time="2026-04-28T02:48:16.976799183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 28 02:48:16.980205 containerd[1523]: time="2026-04-28T02:48:16.980151863Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 28 02:48:17.543621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851563912.mount: Deactivated successfully.
Apr 28 02:48:19.375721 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 28 02:48:20.684763 containerd[1523]: time="2026-04-28T02:48:20.684559648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:20.686937 containerd[1523]: time="2026-04-28T02:48:20.686837583Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Apr 28 02:48:20.687730 containerd[1523]: time="2026-04-28T02:48:20.687682369Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:20.693380 containerd[1523]: time="2026-04-28T02:48:20.693341398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:20.695703 containerd[1523]: time="2026-04-28T02:48:20.695260438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.714905103s"
Apr 28 02:48:20.695703 containerd[1523]: time="2026-04-28T02:48:20.695338768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 28 02:48:20.697508 containerd[1523]: time="2026-04-28T02:48:20.697475647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 28 02:48:21.243111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 28 02:48:21.252946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:21.261037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102899631.mount: Deactivated successfully.
Apr 28 02:48:21.266410 containerd[1523]: time="2026-04-28T02:48:21.266367498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:21.268053 containerd[1523]: time="2026-04-28T02:48:21.267986825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Apr 28 02:48:21.268316 containerd[1523]: time="2026-04-28T02:48:21.268237916Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:21.271346 containerd[1523]: time="2026-04-28T02:48:21.271301254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:21.274215 containerd[1523]: time="2026-04-28T02:48:21.273904878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 576.382859ms"
Apr 28 02:48:21.274215 containerd[1523]: time="2026-04-28T02:48:21.273960623Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 28 02:48:21.276640 containerd[1523]: time="2026-04-28T02:48:21.276090197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 28 02:48:21.610896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:21.621171 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 02:48:21.800712 kubelet[2081]: E0428 02:48:21.800306 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 02:48:21.803060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 02:48:21.803307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 02:48:22.238630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651332813.mount: Deactivated successfully.
Apr 28 02:48:24.774713 containerd[1523]: time="2026-04-28T02:48:24.773093981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:24.776326 containerd[1523]: time="2026-04-28T02:48:24.775742764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719434"
Apr 28 02:48:24.777070 containerd[1523]: time="2026-04-28T02:48:24.777005410Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:24.782131 containerd[1523]: time="2026-04-28T02:48:24.782097321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 02:48:24.784818 containerd[1523]: time="2026-04-28T02:48:24.784195157Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.508061808s"
Apr 28 02:48:24.784818 containerd[1523]: time="2026-04-28T02:48:24.784335427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 28 02:48:30.254545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:30.278759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:30.316226 systemd[1]: Reloading requested from client PID 2179 ('systemctl') (unit session-11.scope)...
Apr 28 02:48:30.316266 systemd[1]: Reloading...
Apr 28 02:48:30.519852 zram_generator::config[2214]: No configuration found.
Apr 28 02:48:30.675107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:48:30.790396 systemd[1]: Reloading finished in 473 ms.
Apr 28 02:48:30.858817 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 28 02:48:30.859546 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 28 02:48:30.860194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:30.866961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:31.019975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:31.035131 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 02:48:31.151243 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 02:48:31.151243 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 02:48:31.151243 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 02:48:31.152658 kubelet[2285]: I0428 02:48:31.152388 2285 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 02:48:31.833787 kubelet[2285]: I0428 02:48:31.833711 2285 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 28 02:48:31.833787 kubelet[2285]: I0428 02:48:31.833775 2285 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 02:48:31.834209 kubelet[2285]: I0428 02:48:31.834172 2285 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 02:48:31.875652 kubelet[2285]: E0428 02:48:31.875556 2285 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.12.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 02:48:31.880137 kubelet[2285]: I0428 02:48:31.879696 2285 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 02:48:31.888844 kubelet[2285]: E0428 02:48:31.888748 2285 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 02:48:31.888844 kubelet[2285]: I0428 02:48:31.888799 2285 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 28 02:48:31.901696 kubelet[2285]: I0428 02:48:31.901642 2285 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 28 02:48:31.905806 kubelet[2285]: I0428 02:48:31.905678 2285 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 02:48:31.907634 kubelet[2285]: I0428 02:48:31.905746 2285 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4dua5.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 02:48:31.907919 kubelet[2285]: I0428 02:48:31.907641 2285 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 02:48:31.907919 kubelet[2285]: I0428 02:48:31.907662 2285 container_manager_linux.go:303] "Creating device plugin manager"
Apr 28 02:48:31.907919 kubelet[2285]: I0428 02:48:31.907903 2285 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:48:31.914905 kubelet[2285]: I0428 02:48:31.914850 2285 kubelet.go:480] "Attempting to sync node with API server"
Apr 28 02:48:31.915034 kubelet[2285]: I0428 02:48:31.914915 2285 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 02:48:31.915034 kubelet[2285]: I0428 02:48:31.914981 2285 kubelet.go:386] "Adding apiserver pod source"
Apr 28 02:48:31.922132 kubelet[2285]: I0428 02:48:31.921697 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 02:48:31.927150 kubelet[2285]: E0428 02:48:31.927113 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.12.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4dua5.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 02:48:31.929169 kubelet[2285]: E0428 02:48:31.927840 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.12.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 02:48:31.929169 kubelet[2285]: I0428 02:48:31.928206 2285 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 02:48:31.929169 kubelet[2285]: I0428 02:48:31.929068 2285 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 02:48:31.931638 kubelet[2285]: W0428 02:48:31.930012 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 28 02:48:31.939606 kubelet[2285]: I0428 02:48:31.939543 2285 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 28 02:48:31.939723 kubelet[2285]: I0428 02:48:31.939663 2285 server.go:1289] "Started kubelet"
Apr 28 02:48:31.940138 kubelet[2285]: I0428 02:48:31.940013 2285 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 02:48:31.941540 kubelet[2285]: I0428 02:48:31.941481 2285 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 02:48:31.944362 kubelet[2285]: I0428 02:48:31.943815 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 02:48:31.944602 kubelet[2285]: I0428 02:48:31.944567 2285 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 02:48:31.946650 kubelet[2285]: E0428 02:48:31.944947 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.12.190:6443/api/v1/namespaces/default/events\": dial tcp 10.230.12.190:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-4dua5.gb1.brightbox.com.18aa656ab5a6250d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-4dua5.gb1.brightbox.com,UID:srv-4dua5.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-4dua5.gb1.brightbox.com,},FirstTimestamp:2026-04-28 02:48:31.939585293 +0000 UTC m=+0.897160402,LastTimestamp:2026-04-28 02:48:31.939585293 +0000 UTC m=+0.897160402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-4dua5.gb1.brightbox.com,}"
Apr 28 02:48:31.950800 kubelet[2285]: I0428 02:48:31.950591 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 02:48:31.951608 kubelet[2285]: I0428 02:48:31.951577 2285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 02:48:31.954808 kubelet[2285]: I0428 02:48:31.951917 2285 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 28 02:48:31.954926 kubelet[2285]: E0428 02:48:31.954696 2285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4dua5.gb1.brightbox.com\" not found"
Apr 28 02:48:31.957247 kubelet[2285]: I0428 02:48:31.956775 2285 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 28 02:48:31.957247 kubelet[2285]: I0428 02:48:31.956788 2285 factory.go:223] Registration of the systemd container factory successfully
Apr 28 02:48:31.957247 kubelet[2285]: E0428 02:48:31.956922 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4dua5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.190:6443: connect: connection refused" interval="200ms"
Apr 28 02:48:31.957247 kubelet[2285]: I0428 02:48:31.957159 2285 reconciler.go:26] "Reconciler: start to sync state"
Apr 28 02:48:31.958869 kubelet[2285]: E0428 02:48:31.957917 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.12.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 02:48:31.959186 kubelet[2285]: I0428 02:48:31.959160 2285 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 02:48:31.960195 kubelet[2285]: E0428 02:48:31.960156 2285 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 02:48:31.962086 kubelet[2285]: I0428 02:48:31.962060 2285 factory.go:223] Registration of the containerd container factory successfully
Apr 28 02:48:32.003173 kubelet[2285]: I0428 02:48:32.003069 2285 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 02:48:32.003444 kubelet[2285]: I0428 02:48:32.003285 2285 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 02:48:32.003867 kubelet[2285]: I0428 02:48:32.003559 2285 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 02:48:32.004351 kubelet[2285]: I0428 02:48:32.004105 2285 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 28 02:48:32.006698 kubelet[2285]: I0428 02:48:32.006672 2285 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 28 02:48:32.006872 kubelet[2285]: I0428 02:48:32.006851 2285 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 28 02:48:32.007421 kubelet[2285]: I0428 02:48:32.007008 2285 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 02:48:32.007421 kubelet[2285]: I0428 02:48:32.007042 2285 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 02:48:32.007421 kubelet[2285]: E0428 02:48:32.007103 2285 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:48:32.009226 kubelet[2285]: I0428 02:48:32.009204 2285 policy_none.go:49] "None policy: Start" Apr 28 02:48:32.009494 kubelet[2285]: I0428 02:48:32.009471 2285 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 02:48:32.009936 kubelet[2285]: E0428 02:48:32.009906 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.12.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 02:48:32.010470 kubelet[2285]: I0428 02:48:32.010191 2285 state_mem.go:35] "Initializing new in-memory state store" Apr 28 02:48:32.038382 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 28 02:48:32.052707 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 02:48:32.055976 kubelet[2285]: E0428 02:48:32.055945 2285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4dua5.gb1.brightbox.com\" not found" Apr 28 02:48:32.058293 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 28 02:48:32.068850 kubelet[2285]: E0428 02:48:32.068480 2285 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:48:32.068850 kubelet[2285]: I0428 02:48:32.068805 2285 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:48:32.069111 kubelet[2285]: I0428 02:48:32.069054 2285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:48:32.070385 kubelet[2285]: I0428 02:48:32.070332 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:48:32.072857 kubelet[2285]: E0428 02:48:32.072834 2285 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 02:48:32.073335 kubelet[2285]: E0428 02:48:32.073277 2285 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-4dua5.gb1.brightbox.com\" not found" Apr 28 02:48:32.127029 systemd[1]: Created slice kubepods-burstable-pod4606120d0bb5f35596b5dee7e555b60a.slice - libcontainer container kubepods-burstable-pod4606120d0bb5f35596b5dee7e555b60a.slice. Apr 28 02:48:32.153499 kubelet[2285]: E0428 02:48:32.152947 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.155733 systemd[1]: Created slice kubepods-burstable-pod0f3654654db757354f649e3f8f17c567.slice - libcontainer container kubepods-burstable-pod0f3654654db757354f649e3f8f17c567.slice. 
Apr 28 02:48:32.158071 kubelet[2285]: E0428 02:48:32.158000 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4dua5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.190:6443: connect: connection refused" interval="400ms" Apr 28 02:48:32.160796 kubelet[2285]: E0428 02:48:32.160756 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.165473 systemd[1]: Created slice kubepods-burstable-pod45753c1564fd770f88f9ee11244b8281.slice - libcontainer container kubepods-burstable-pod45753c1564fd770f88f9ee11244b8281.slice. Apr 28 02:48:32.168385 kubelet[2285]: E0428 02:48:32.168328 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.172788 kubelet[2285]: I0428 02:48:32.172339 2285 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.172991 kubelet[2285]: E0428 02:48:32.172960 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.12.190:6443/api/v1/nodes\": dial tcp 10.230.12.190:6443: connect: connection refused" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.249844 update_engine[1494]: I20260428 02:48:32.249694 1494 update_attempter.cc:509] Updating boot flags... 
Apr 28 02:48:32.258660 kubelet[2285]: I0428 02:48:32.258488 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-k8s-certs\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.258660 kubelet[2285]: I0428 02:48:32.258790 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.258660 kubelet[2285]: I0428 02:48:32.258849 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45753c1564fd770f88f9ee11244b8281-kubeconfig\") pod \"kube-scheduler-srv-4dua5.gb1.brightbox.com\" (UID: \"45753c1564fd770f88f9ee11244b8281\") " pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259217 kubelet[2285]: I0428 02:48:32.258886 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-k8s-certs\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259217 kubelet[2285]: I0428 02:48:32.258932 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259217 kubelet[2285]: I0428 02:48:32.259071 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-flexvolume-dir\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259217 kubelet[2285]: I0428 02:48:32.259104 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-kubeconfig\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259217 kubelet[2285]: I0428 02:48:32.259139 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-ca-certs\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.259447 kubelet[2285]: I0428 02:48:32.259172 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-ca-certs\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.306717 
kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2329) Apr 28 02:48:32.386499 kubelet[2285]: I0428 02:48:32.386300 2285 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.387468 kubelet[2285]: E0428 02:48:32.386820 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.12.190:6443/api/v1/nodes\": dial tcp 10.230.12.190:6443: connect: connection refused" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.458328 containerd[1523]: time="2026-04-28T02:48:32.458202277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4dua5.gb1.brightbox.com,Uid:4606120d0bb5f35596b5dee7e555b60a,Namespace:kube-system,Attempt:0,}" Apr 28 02:48:32.524519 containerd[1523]: time="2026-04-28T02:48:32.523139140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4dua5.gb1.brightbox.com,Uid:45753c1564fd770f88f9ee11244b8281,Namespace:kube-system,Attempt:0,}" Apr 28 02:48:32.524904 containerd[1523]: time="2026-04-28T02:48:32.524866752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4dua5.gb1.brightbox.com,Uid:0f3654654db757354f649e3f8f17c567,Namespace:kube-system,Attempt:0,}" Apr 28 02:48:32.559464 kubelet[2285]: E0428 02:48:32.559405 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4dua5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.190:6443: connect: connection refused" interval="800ms" Apr 28 02:48:32.790707 kubelet[2285]: I0428 02:48:32.790569 2285 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.791427 kubelet[2285]: E0428 02:48:32.791231 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.230.12.190:6443/api/v1/nodes\": dial tcp 10.230.12.190:6443: connect: connection refused" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:32.816024 kubelet[2285]: E0428 02:48:32.815955 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.12.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 02:48:33.154102 kubelet[2285]: E0428 02:48:33.154019 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.12.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 02:48:33.291810 kubelet[2285]: E0428 02:48:33.291686 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.12.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 02:48:33.360874 kubelet[2285]: E0428 02:48:33.360786 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4dua5.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.190:6443: connect: connection refused" interval="1.6s" Apr 28 02:48:33.384632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606547258.mount: Deactivated successfully. 
Apr 28 02:48:33.393961 containerd[1523]: time="2026-04-28T02:48:33.393899533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:48:33.396039 containerd[1523]: time="2026-04-28T02:48:33.395741616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:48:33.402874 containerd[1523]: time="2026-04-28T02:48:33.402821254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:48:33.404050 containerd[1523]: time="2026-04-28T02:48:33.404008397Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:48:33.406241 containerd[1523]: time="2026-04-28T02:48:33.406074892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:48:33.407310 containerd[1523]: time="2026-04-28T02:48:33.407162120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 28 02:48:33.408879 containerd[1523]: time="2026-04-28T02:48:33.408797059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 02:48:33.410148 containerd[1523]: time="2026-04-28T02:48:33.410010698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 02:48:33.413245 
containerd[1523]: time="2026-04-28T02:48:33.412892712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 953.876325ms" Apr 28 02:48:33.415643 containerd[1523]: time="2026-04-28T02:48:33.415478773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 890.366211ms" Apr 28 02:48:33.422397 containerd[1523]: time="2026-04-28T02:48:33.422321175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 899.059705ms" Apr 28 02:48:33.525851 kubelet[2285]: E0428 02:48:33.525768 2285 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.12.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4dua5.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 02:48:33.596271 kubelet[2285]: I0428 02:48:33.596232 2285 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:33.597199 kubelet[2285]: E0428 02:48:33.597166 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.12.190:6443/api/v1/nodes\": dial tcp 
10.230.12.190:6443: connect: connection refused" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:33.642927 containerd[1523]: time="2026-04-28T02:48:33.642736500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:48:33.643455 containerd[1523]: time="2026-04-28T02:48:33.642962619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:48:33.643455 containerd[1523]: time="2026-04-28T02:48:33.642994169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.643897 containerd[1523]: time="2026-04-28T02:48:33.643769540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.649882 containerd[1523]: time="2026-04-28T02:48:33.649759887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:48:33.650034 containerd[1523]: time="2026-04-28T02:48:33.649848746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:48:33.650433 containerd[1523]: time="2026-04-28T02:48:33.650143517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.650433 containerd[1523]: time="2026-04-28T02:48:33.650361075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.652684 containerd[1523]: time="2026-04-28T02:48:33.652528745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:48:33.659838 containerd[1523]: time="2026-04-28T02:48:33.658693467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:48:33.659838 containerd[1523]: time="2026-04-28T02:48:33.659511019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.659838 containerd[1523]: time="2026-04-28T02:48:33.659692725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:33.703922 systemd[1]: Started cri-containerd-7246e39bafe47038318716783ae992f8739674516a38e98c2fd0d4ca74ff3e9c.scope - libcontainer container 7246e39bafe47038318716783ae992f8739674516a38e98c2fd0d4ca74ff3e9c. Apr 28 02:48:33.720905 systemd[1]: Started cri-containerd-510e351e349ca7578139dc93c433604247ce1b9d7fd7d72a13b2736f112e9333.scope - libcontainer container 510e351e349ca7578139dc93c433604247ce1b9d7fd7d72a13b2736f112e9333. Apr 28 02:48:33.727106 systemd[1]: Started cri-containerd-82cd43ef7613a764180c63c5e2556c7ea8eb0e47fe150407bf5cdd75c3f1359a.scope - libcontainer container 82cd43ef7613a764180c63c5e2556c7ea8eb0e47fe150407bf5cdd75c3f1359a. 
Apr 28 02:48:33.816701 containerd[1523]: time="2026-04-28T02:48:33.816076812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4dua5.gb1.brightbox.com,Uid:4606120d0bb5f35596b5dee7e555b60a,Namespace:kube-system,Attempt:0,} returns sandbox id \"510e351e349ca7578139dc93c433604247ce1b9d7fd7d72a13b2736f112e9333\"" Apr 28 02:48:33.837589 containerd[1523]: time="2026-04-28T02:48:33.837251055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4dua5.gb1.brightbox.com,Uid:0f3654654db757354f649e3f8f17c567,Namespace:kube-system,Attempt:0,} returns sandbox id \"7246e39bafe47038318716783ae992f8739674516a38e98c2fd0d4ca74ff3e9c\"" Apr 28 02:48:33.857507 containerd[1523]: time="2026-04-28T02:48:33.857324359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4dua5.gb1.brightbox.com,Uid:45753c1564fd770f88f9ee11244b8281,Namespace:kube-system,Attempt:0,} returns sandbox id \"82cd43ef7613a764180c63c5e2556c7ea8eb0e47fe150407bf5cdd75c3f1359a\"" Apr 28 02:48:33.919335 kubelet[2285]: E0428 02:48:33.919143 2285 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.12.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.190:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 02:48:33.935627 containerd[1523]: time="2026-04-28T02:48:33.935408251Z" level=info msg="CreateContainer within sandbox \"510e351e349ca7578139dc93c433604247ce1b9d7fd7d72a13b2736f112e9333\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 02:48:33.970519 containerd[1523]: time="2026-04-28T02:48:33.970387676Z" level=info msg="CreateContainer within sandbox \"7246e39bafe47038318716783ae992f8739674516a38e98c2fd0d4ca74ff3e9c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 
02:48:33.973035 containerd[1523]: time="2026-04-28T02:48:33.972673892Z" level=info msg="CreateContainer within sandbox \"82cd43ef7613a764180c63c5e2556c7ea8eb0e47fe150407bf5cdd75c3f1359a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 02:48:33.984391 containerd[1523]: time="2026-04-28T02:48:33.984332580Z" level=info msg="CreateContainer within sandbox \"510e351e349ca7578139dc93c433604247ce1b9d7fd7d72a13b2736f112e9333\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c651c165e16ce0b67f351b280055263ff471e910b223293ca1a01637e49c5972\"" Apr 28 02:48:33.986091 containerd[1523]: time="2026-04-28T02:48:33.985794242Z" level=info msg="StartContainer for \"c651c165e16ce0b67f351b280055263ff471e910b223293ca1a01637e49c5972\"" Apr 28 02:48:33.990266 containerd[1523]: time="2026-04-28T02:48:33.990224625Z" level=info msg="CreateContainer within sandbox \"7246e39bafe47038318716783ae992f8739674516a38e98c2fd0d4ca74ff3e9c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da0807254206678b120fd7be1e7ca7db7fd583d53a8de6852812f7c0a81af5a3\"" Apr 28 02:48:33.991696 containerd[1523]: time="2026-04-28T02:48:33.991639211Z" level=info msg="StartContainer for \"da0807254206678b120fd7be1e7ca7db7fd583d53a8de6852812f7c0a81af5a3\"" Apr 28 02:48:34.042893 systemd[1]: Started cri-containerd-da0807254206678b120fd7be1e7ca7db7fd583d53a8de6852812f7c0a81af5a3.scope - libcontainer container da0807254206678b120fd7be1e7ca7db7fd583d53a8de6852812f7c0a81af5a3. Apr 28 02:48:34.052819 systemd[1]: Started cri-containerd-c651c165e16ce0b67f351b280055263ff471e910b223293ca1a01637e49c5972.scope - libcontainer container c651c165e16ce0b67f351b280055263ff471e910b223293ca1a01637e49c5972. 
Apr 28 02:48:34.084135 containerd[1523]: time="2026-04-28T02:48:34.084029597Z" level=info msg="CreateContainer within sandbox \"82cd43ef7613a764180c63c5e2556c7ea8eb0e47fe150407bf5cdd75c3f1359a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44316a590310efdbe9183c083738378623413b2501bf6a19faee2152762e2133\"" Apr 28 02:48:34.085435 containerd[1523]: time="2026-04-28T02:48:34.085398645Z" level=info msg="StartContainer for \"44316a590310efdbe9183c083738378623413b2501bf6a19faee2152762e2133\"" Apr 28 02:48:34.155549 systemd[1]: Started cri-containerd-44316a590310efdbe9183c083738378623413b2501bf6a19faee2152762e2133.scope - libcontainer container 44316a590310efdbe9183c083738378623413b2501bf6a19faee2152762e2133. Apr 28 02:48:34.162003 containerd[1523]: time="2026-04-28T02:48:34.161952003Z" level=info msg="StartContainer for \"c651c165e16ce0b67f351b280055263ff471e910b223293ca1a01637e49c5972\" returns successfully" Apr 28 02:48:34.166973 containerd[1523]: time="2026-04-28T02:48:34.166874420Z" level=info msg="StartContainer for \"da0807254206678b120fd7be1e7ca7db7fd583d53a8de6852812f7c0a81af5a3\" returns successfully" Apr 28 02:48:34.250767 containerd[1523]: time="2026-04-28T02:48:34.249897386Z" level=info msg="StartContainer for \"44316a590310efdbe9183c083738378623413b2501bf6a19faee2152762e2133\" returns successfully" Apr 28 02:48:35.042643 kubelet[2285]: E0428 02:48:35.040351 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:35.042643 kubelet[2285]: E0428 02:48:35.041868 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:35.049257 kubelet[2285]: E0428 02:48:35.049021 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:35.204964 kubelet[2285]: I0428 02:48:35.204898 2285 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.052353 kubelet[2285]: E0428 02:48:36.052299 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.055736 kubelet[2285]: E0428 02:48:36.054644 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.055736 kubelet[2285]: E0428 02:48:36.054963 2285 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.424484 kubelet[2285]: E0428 02:48:36.424381 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-4dua5.gb1.brightbox.com\" not found" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.503031 kubelet[2285]: I0428 02:48:36.502786 2285 kubelet_node_status.go:78] "Successfully registered node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.503031 kubelet[2285]: E0428 02:48:36.502844 2285 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-4dua5.gb1.brightbox.com\": node \"srv-4dua5.gb1.brightbox.com\" not found" Apr 28 02:48:36.558995 kubelet[2285]: I0428 02:48:36.558957 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.624735 kubelet[2285]: E0428 02:48:36.624355 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.624735 kubelet[2285]: I0428 02:48:36.624406 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.631329 kubelet[2285]: E0428 02:48:36.631284 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.631329 kubelet[2285]: I0428 02:48:36.631324 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.637181 kubelet[2285]: E0428 02:48:36.637127 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:36.929482 kubelet[2285]: I0428 02:48:36.929098 2285 apiserver.go:52] "Watching apiserver" Apr 28 02:48:36.957425 kubelet[2285]: I0428 02:48:36.957358 2285 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 28 02:48:37.050188 kubelet[2285]: I0428 02:48:37.049742 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:37.052978 kubelet[2285]: I0428 02:48:37.051219 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:37.052978 kubelet[2285]: I0428 02:48:37.051760 2285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" 
Apr 28 02:48:37.054921 kubelet[2285]: E0428 02:48:37.054487 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com"
Apr 28 02:48:37.054921 kubelet[2285]: E0428 02:48:37.054561 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com"
Apr 28 02:48:37.054921 kubelet[2285]: E0428 02:48:37.054775 2285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4dua5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com"
Apr 28 02:48:38.596356 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-11.scope)...
Apr 28 02:48:38.596401 systemd[1]: Reloading...
Apr 28 02:48:38.721817 zram_generator::config[2620]: No configuration found.
Apr 28 02:48:38.924938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 02:48:39.063413 systemd[1]: Reloading finished in 466 ms.
Apr 28 02:48:39.129452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:39.144506 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 02:48:39.145056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 02:48:39.145202 systemd[1]: kubelet.service: Consumed 1.426s CPU time, 131.6M memory peak, 0B memory swap peak.
Apr 28 02:48:39.154094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 02:48:39.412967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 02:48:39.416894 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 02:48:39.607475 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:48:39.607475 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 02:48:39.607475 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 02:48:39.608153 kubelet[2688]: I0428 02:48:39.607567 2688 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 02:48:39.626105 kubelet[2688]: I0428 02:48:39.624244 2688 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 02:48:39.626105 kubelet[2688]: I0428 02:48:39.624303 2688 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 02:48:39.626105 kubelet[2688]: I0428 02:48:39.625055 2688 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 02:48:39.629126 kubelet[2688]: I0428 02:48:39.629081 2688 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 02:48:39.643036 kubelet[2688]: I0428 02:48:39.642274 2688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 02:48:39.653536 kubelet[2688]: E0428 
02:48:39.652379 2688 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 02:48:39.653536 kubelet[2688]: I0428 02:48:39.652424 2688 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 02:48:39.664215 kubelet[2688]: I0428 02:48:39.664053 2688 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 02:48:39.665547 kubelet[2688]: I0428 02:48:39.664954 2688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 02:48:39.666296 kubelet[2688]: I0428 02:48:39.665080 2688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4dua5.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessT
han","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 02:48:39.666296 kubelet[2688]: I0428 02:48:39.665912 2688 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 02:48:39.666296 kubelet[2688]: I0428 02:48:39.665933 2688 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 02:48:39.666296 kubelet[2688]: I0428 02:48:39.666139 2688 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:48:39.668934 kubelet[2688]: I0428 02:48:39.666665 2688 kubelet.go:480] "Attempting to sync node with API server" Apr 28 02:48:39.668934 kubelet[2688]: I0428 02:48:39.667711 2688 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 02:48:39.668934 kubelet[2688]: I0428 02:48:39.667777 2688 kubelet.go:386] "Adding apiserver pod source" Apr 28 02:48:39.668934 kubelet[2688]: I0428 02:48:39.667809 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 02:48:39.675468 kubelet[2688]: I0428 02:48:39.675031 2688 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 02:48:39.678491 kubelet[2688]: I0428 02:48:39.677593 2688 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 02:48:39.694392 kubelet[2688]: I0428 02:48:39.693930 2688 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 02:48:39.694392 kubelet[2688]: I0428 02:48:39.694018 2688 server.go:1289] "Started kubelet" Apr 28 
02:48:39.703954 kubelet[2688]: I0428 02:48:39.702883 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 02:48:39.709649 kubelet[2688]: I0428 02:48:39.708693 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 02:48:39.715043 kubelet[2688]: I0428 02:48:39.704193 2688 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 02:48:39.736085 kubelet[2688]: I0428 02:48:39.736024 2688 server.go:317] "Adding debug handlers to kubelet server" Apr 28 02:48:39.744159 kubelet[2688]: I0428 02:48:39.714871 2688 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 02:48:39.744648 kubelet[2688]: E0428 02:48:39.715160 2688 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4dua5.gb1.brightbox.com\" not found" Apr 28 02:48:39.744894 kubelet[2688]: I0428 02:48:39.704435 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 02:48:39.747606 kubelet[2688]: I0428 02:48:39.747528 2688 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 02:48:39.758747 kubelet[2688]: I0428 02:48:39.714849 2688 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 02:48:39.760466 kubelet[2688]: I0428 02:48:39.760440 2688 factory.go:223] Registration of the systemd container factory successfully Apr 28 02:48:39.760845 kubelet[2688]: I0428 02:48:39.760798 2688 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 02:48:39.763152 kubelet[2688]: I0428 02:48:39.762269 2688 reconciler.go:26] "Reconciler: start to sync state" Apr 28 02:48:39.769102 kubelet[2688]: I0428 02:48:39.768074 2688 factory.go:223] Registration 
of the containerd container factory successfully Apr 28 02:48:39.784149 kubelet[2688]: I0428 02:48:39.783406 2688 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 02:48:39.785359 kubelet[2688]: I0428 02:48:39.785236 2688 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 02:48:39.785359 kubelet[2688]: I0428 02:48:39.785277 2688 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 02:48:39.785359 kubelet[2688]: I0428 02:48:39.785317 2688 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 02:48:39.785359 kubelet[2688]: I0428 02:48:39.785332 2688 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 02:48:39.785580 kubelet[2688]: E0428 02:48:39.785402 2688 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 02:48:39.885742 kubelet[2688]: E0428 02:48:39.885513 2688 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 02:48:39.908371 kubelet[2688]: I0428 02:48:39.908193 2688 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.908312 2688 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.908639 2688 state_mem.go:36] "Initialized new in-memory state store" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.908921 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.908941 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.908999 2688 policy_none.go:49] "None policy: Start" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.909032 2688 memory_manager.go:186] "Starting 
memorymanager" policy="None" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.909077 2688 state_mem.go:35] "Initializing new in-memory state store" Apr 28 02:48:39.909665 kubelet[2688]: I0428 02:48:39.909335 2688 state_mem.go:75] "Updated machine memory state" Apr 28 02:48:39.924770 kubelet[2688]: E0428 02:48:39.923454 2688 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 02:48:39.924770 kubelet[2688]: I0428 02:48:39.923910 2688 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 02:48:39.924770 kubelet[2688]: I0428 02:48:39.923970 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 02:48:39.928211 kubelet[2688]: I0428 02:48:39.926435 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 02:48:39.937998 kubelet[2688]: E0428 02:48:39.937966 2688 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 02:48:40.084677 kubelet[2688]: I0428 02:48:40.082883 2688 kubelet_node_status.go:75] "Attempting to register node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.087813 kubelet[2688]: I0428 02:48:40.087016 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.087813 kubelet[2688]: I0428 02:48:40.087418 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.092400 kubelet[2688]: I0428 02:48:40.092207 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.107259 kubelet[2688]: I0428 02:48:40.107208 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 28 02:48:40.111637 kubelet[2688]: I0428 02:48:40.110707 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 28 02:48:40.112631 kubelet[2688]: I0428 02:48:40.112579 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 28 02:48:40.113418 kubelet[2688]: I0428 02:48:40.113361 2688 kubelet_node_status.go:124] "Node was previously registered" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.113573 kubelet[2688]: I0428 02:48:40.113527 2688 kubelet_node_status.go:78] "Successfully registered node" node="srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166313 kubelet[2688]: I0428 02:48:40.165933 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166313 kubelet[2688]: I0428 02:48:40.166020 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-ca-certs\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166313 kubelet[2688]: I0428 02:48:40.166119 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-k8s-certs\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166313 kubelet[2688]: I0428 02:48:40.166156 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45753c1564fd770f88f9ee11244b8281-kubeconfig\") pod \"kube-scheduler-srv-4dua5.gb1.brightbox.com\" (UID: \"45753c1564fd770f88f9ee11244b8281\") " pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166313 kubelet[2688]: I0428 02:48:40.166184 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-k8s-certs\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166965 kubelet[2688]: I0428 
02:48:40.166250 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-flexvolume-dir\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166965 kubelet[2688]: I0428 02:48:40.166293 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-kubeconfig\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166965 kubelet[2688]: I0428 02:48:40.166324 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f3654654db757354f649e3f8f17c567-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4dua5.gb1.brightbox.com\" (UID: \"0f3654654db757354f649e3f8f17c567\") " pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.166965 kubelet[2688]: I0428 02:48:40.166380 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4606120d0bb5f35596b5dee7e555b60a-ca-certs\") pod \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" (UID: \"4606120d0bb5f35596b5dee7e555b60a\") " pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.691285 kubelet[2688]: I0428 02:48:40.691207 2688 apiserver.go:52] "Watching apiserver" Apr 28 02:48:40.745701 kubelet[2688]: I0428 02:48:40.745148 2688 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 28 
02:48:40.839669 kubelet[2688]: I0428 02:48:40.837246 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.858847 kubelet[2688]: I0428 02:48:40.857598 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 28 02:48:40.858847 kubelet[2688]: E0428 02:48:40.857763 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-4dua5.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" Apr 28 02:48:40.909324 kubelet[2688]: I0428 02:48:40.908135 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-4dua5.gb1.brightbox.com" podStartSLOduration=0.90794746 podStartE2EDuration="907.94746ms" podCreationTimestamp="2026-04-28 02:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:48:40.907320378 +0000 UTC m=+1.415234330" watchObservedRunningTime="2026-04-28 02:48:40.90794746 +0000 UTC m=+1.415861400" Apr 28 02:48:40.979501 kubelet[2688]: I0428 02:48:40.978517 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-4dua5.gb1.brightbox.com" podStartSLOduration=0.978478451 podStartE2EDuration="978.478451ms" podCreationTimestamp="2026-04-28 02:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:48:40.978256062 +0000 UTC m=+1.486170016" watchObservedRunningTime="2026-04-28 02:48:40.978478451 +0000 UTC m=+1.486392410" Apr 28 02:48:40.981004 kubelet[2688]: I0428 02:48:40.980767 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-srv-4dua5.gb1.brightbox.com" podStartSLOduration=0.980725233 podStartE2EDuration="980.725233ms" podCreationTimestamp="2026-04-28 02:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:48:40.929685636 +0000 UTC m=+1.437599596" watchObservedRunningTime="2026-04-28 02:48:40.980725233 +0000 UTC m=+1.488639188" Apr 28 02:48:44.254734 kubelet[2688]: I0428 02:48:44.254595 2688 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 02:48:44.256330 containerd[1523]: time="2026-04-28T02:48:44.255469108Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 02:48:44.258201 kubelet[2688]: I0428 02:48:44.257007 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 02:48:45.346481 systemd[1]: Created slice kubepods-besteffort-podd21f8e0c_c292_43cf_9294_dbc75189560f.slice - libcontainer container kubepods-besteffort-podd21f8e0c_c292_43cf_9294_dbc75189560f.slice. 
Apr 28 02:48:45.399271 kubelet[2688]: I0428 02:48:45.399164 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d21f8e0c-c292-43cf-9294-dbc75189560f-kube-proxy\") pod \"kube-proxy-6t4bc\" (UID: \"d21f8e0c-c292-43cf-9294-dbc75189560f\") " pod="kube-system/kube-proxy-6t4bc"
Apr 28 02:48:45.399271 kubelet[2688]: I0428 02:48:45.399250 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d21f8e0c-c292-43cf-9294-dbc75189560f-xtables-lock\") pod \"kube-proxy-6t4bc\" (UID: \"d21f8e0c-c292-43cf-9294-dbc75189560f\") " pod="kube-system/kube-proxy-6t4bc"
Apr 28 02:48:45.400145 kubelet[2688]: I0428 02:48:45.399296 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d21f8e0c-c292-43cf-9294-dbc75189560f-lib-modules\") pod \"kube-proxy-6t4bc\" (UID: \"d21f8e0c-c292-43cf-9294-dbc75189560f\") " pod="kube-system/kube-proxy-6t4bc"
Apr 28 02:48:45.400145 kubelet[2688]: I0428 02:48:45.399333 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdjq\" (UniqueName: \"kubernetes.io/projected/d21f8e0c-c292-43cf-9294-dbc75189560f-kube-api-access-gxdjq\") pod \"kube-proxy-6t4bc\" (UID: \"d21f8e0c-c292-43cf-9294-dbc75189560f\") " pod="kube-system/kube-proxy-6t4bc"
Apr 28 02:48:45.548847 systemd[1]: Created slice kubepods-besteffort-pod3adaf18e_cfab_4985_a176_5b7ce06221da.slice - libcontainer container kubepods-besteffort-pod3adaf18e_cfab_4985_a176_5b7ce06221da.slice.
Apr 28 02:48:45.601814 kubelet[2688]: I0428 02:48:45.601456 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfmx\" (UniqueName: \"kubernetes.io/projected/3adaf18e-cfab-4985-a176-5b7ce06221da-kube-api-access-psfmx\") pod \"tigera-operator-8458958b4d-mvscn\" (UID: \"3adaf18e-cfab-4985-a176-5b7ce06221da\") " pod="tigera-operator/tigera-operator-8458958b4d-mvscn" Apr 28 02:48:45.601814 kubelet[2688]: I0428 02:48:45.601527 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3adaf18e-cfab-4985-a176-5b7ce06221da-var-lib-calico\") pod \"tigera-operator-8458958b4d-mvscn\" (UID: \"3adaf18e-cfab-4985-a176-5b7ce06221da\") " pod="tigera-operator/tigera-operator-8458958b4d-mvscn" Apr 28 02:48:45.663245 containerd[1523]: time="2026-04-28T02:48:45.662415596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6t4bc,Uid:d21f8e0c-c292-43cf-9294-dbc75189560f,Namespace:kube-system,Attempt:0,}" Apr 28 02:48:45.727804 containerd[1523]: time="2026-04-28T02:48:45.726584521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:48:45.727804 containerd[1523]: time="2026-04-28T02:48:45.727370591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:48:45.727804 containerd[1523]: time="2026-04-28T02:48:45.727459120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:45.728355 containerd[1523]: time="2026-04-28T02:48:45.728171161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:45.768915 systemd[1]: Started cri-containerd-5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5.scope - libcontainer container 5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5. Apr 28 02:48:45.812595 containerd[1523]: time="2026-04-28T02:48:45.812418262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6t4bc,Uid:d21f8e0c-c292-43cf-9294-dbc75189560f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5\"" Apr 28 02:48:45.821805 containerd[1523]: time="2026-04-28T02:48:45.821333245Z" level=info msg="CreateContainer within sandbox \"5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 02:48:45.839551 containerd[1523]: time="2026-04-28T02:48:45.839462487Z" level=info msg="CreateContainer within sandbox \"5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05a9412486b079f296521596afaa506ba9989236455e8942a263112adfd2170c\"" Apr 28 02:48:45.841002 containerd[1523]: time="2026-04-28T02:48:45.840381392Z" level=info msg="StartContainer for \"05a9412486b079f296521596afaa506ba9989236455e8942a263112adfd2170c\"" Apr 28 02:48:45.856431 containerd[1523]: time="2026-04-28T02:48:45.855569084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-mvscn,Uid:3adaf18e-cfab-4985-a176-5b7ce06221da,Namespace:tigera-operator,Attempt:0,}" Apr 28 02:48:45.926300 systemd[1]: Started cri-containerd-05a9412486b079f296521596afaa506ba9989236455e8942a263112adfd2170c.scope - libcontainer container 05a9412486b079f296521596afaa506ba9989236455e8942a263112adfd2170c. Apr 28 02:48:45.941431 containerd[1523]: time="2026-04-28T02:48:45.941240638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:48:45.942174 containerd[1523]: time="2026-04-28T02:48:45.941349260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:48:45.942174 containerd[1523]: time="2026-04-28T02:48:45.941418485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:45.942174 containerd[1523]: time="2026-04-28T02:48:45.941607209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:48:45.979892 systemd[1]: Started cri-containerd-02f099e8a653f0a19386ffd7279b686ed439f2e1849465293c58013570fbea86.scope - libcontainer container 02f099e8a653f0a19386ffd7279b686ed439f2e1849465293c58013570fbea86. Apr 28 02:48:46.027107 containerd[1523]: time="2026-04-28T02:48:46.027051857Z" level=info msg="StartContainer for \"05a9412486b079f296521596afaa506ba9989236455e8942a263112adfd2170c\" returns successfully" Apr 28 02:48:46.076495 containerd[1523]: time="2026-04-28T02:48:46.076400754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-mvscn,Uid:3adaf18e-cfab-4985-a176-5b7ce06221da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"02f099e8a653f0a19386ffd7279b686ed439f2e1849465293c58013570fbea86\"" Apr 28 02:48:46.081082 containerd[1523]: time="2026-04-28T02:48:46.080345837Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 28 02:48:46.538724 systemd[1]: run-containerd-runc-k8s.io-5dcf4ddf9c5cbad3ee48ce45d78dec6c752c7b0f49a0d0a86f03693f056f5fc5-runc.n0km56.mount: Deactivated successfully. Apr 28 02:48:47.909431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284508534.mount: Deactivated successfully. 
Apr 28 02:48:47.985486 kubelet[2688]: I0428 02:48:47.984375 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6t4bc" podStartSLOduration=2.9843478279999998 podStartE2EDuration="2.984347828s" podCreationTimestamp="2026-04-28 02:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:48:46.877076254 +0000 UTC m=+7.384990208" watchObservedRunningTime="2026-04-28 02:48:47.984347828 +0000 UTC m=+8.492261784" Apr 28 02:48:49.897528 containerd[1523]: time="2026-04-28T02:48:49.895883521Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:48:49.898728 containerd[1523]: time="2026-04-28T02:48:49.898505615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543" Apr 28 02:48:49.902346 containerd[1523]: time="2026-04-28T02:48:49.902301349Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:48:49.907049 containerd[1523]: time="2026-04-28T02:48:49.905495722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:48:49.907049 containerd[1523]: time="2026-04-28T02:48:49.906907484Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 3.826507486s" Apr 28 02:48:49.907049 containerd[1523]: time="2026-04-28T02:48:49.907000444Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\"" Apr 28 02:48:49.914125 containerd[1523]: time="2026-04-28T02:48:49.914076281Z" level=info msg="CreateContainer within sandbox \"02f099e8a653f0a19386ffd7279b686ed439f2e1849465293c58013570fbea86\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 28 02:48:49.932264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674855870.mount: Deactivated successfully. Apr 28 02:48:49.935030 containerd[1523]: time="2026-04-28T02:48:49.934882211Z" level=info msg="CreateContainer within sandbox \"02f099e8a653f0a19386ffd7279b686ed439f2e1849465293c58013570fbea86\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5dec7331125dc15156b3dd1fcdb0c4efc4270d22c717db6708b78db55cb70a73\"" Apr 28 02:48:49.936111 containerd[1523]: time="2026-04-28T02:48:49.936017484Z" level=info msg="StartContainer for \"5dec7331125dc15156b3dd1fcdb0c4efc4270d22c717db6708b78db55cb70a73\"" Apr 28 02:48:49.995841 systemd[1]: Started cri-containerd-5dec7331125dc15156b3dd1fcdb0c4efc4270d22c717db6708b78db55cb70a73.scope - libcontainer container 5dec7331125dc15156b3dd1fcdb0c4efc4270d22c717db6708b78db55cb70a73. 
Apr 28 02:48:50.044209 containerd[1523]: time="2026-04-28T02:48:50.043883508Z" level=info msg="StartContainer for \"5dec7331125dc15156b3dd1fcdb0c4efc4270d22c717db6708b78db55cb70a73\" returns successfully" Apr 28 02:48:52.509012 kubelet[2688]: I0428 02:48:52.508652 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8458958b4d-mvscn" podStartSLOduration=3.675674812 podStartE2EDuration="7.50505193s" podCreationTimestamp="2026-04-28 02:48:45 +0000 UTC" firstStartedPulling="2026-04-28 02:48:46.079155489 +0000 UTC m=+6.587069431" lastFinishedPulling="2026-04-28 02:48:49.908532595 +0000 UTC m=+10.416446549" observedRunningTime="2026-04-28 02:48:50.895389883 +0000 UTC m=+11.403303855" watchObservedRunningTime="2026-04-28 02:48:52.50505193 +0000 UTC m=+13.012965879" Apr 28 02:48:58.317316 sudo[1766]: pam_unix(sudo:session): session closed for user root Apr 28 02:48:58.340693 sshd[1763]: pam_unix(sshd:session): session closed for user core Apr 28 02:48:58.354272 systemd[1]: sshd@8-10.230.12.190:22-4.175.71.9:54268.service: Deactivated successfully. Apr 28 02:48:58.361176 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 02:48:58.361504 systemd[1]: session-11.scope: Consumed 8.646s CPU time, 149.0M memory peak, 0B memory swap peak. Apr 28 02:48:58.362473 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit. Apr 28 02:48:58.364881 systemd-logind[1492]: Removed session 11. Apr 28 02:49:00.667098 systemd[1]: Created slice kubepods-besteffort-pod403d0684_93af_4489_b3b8_6c36544bdddf.slice - libcontainer container kubepods-besteffort-pod403d0684_93af_4489_b3b8_6c36544bdddf.slice. 
Apr 28 02:49:00.751551 kubelet[2688]: I0428 02:49:00.751459 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rw7\" (UniqueName: \"kubernetes.io/projected/403d0684-93af-4489-b3b8-6c36544bdddf-kube-api-access-d6rw7\") pod \"calico-typha-54d999b7d5-6tx9r\" (UID: \"403d0684-93af-4489-b3b8-6c36544bdddf\") " pod="calico-system/calico-typha-54d999b7d5-6tx9r" Apr 28 02:49:00.751551 kubelet[2688]: I0428 02:49:00.751559 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/403d0684-93af-4489-b3b8-6c36544bdddf-typha-certs\") pod \"calico-typha-54d999b7d5-6tx9r\" (UID: \"403d0684-93af-4489-b3b8-6c36544bdddf\") " pod="calico-system/calico-typha-54d999b7d5-6tx9r" Apr 28 02:49:00.753786 kubelet[2688]: I0428 02:49:00.751599 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/403d0684-93af-4489-b3b8-6c36544bdddf-tigera-ca-bundle\") pod \"calico-typha-54d999b7d5-6tx9r\" (UID: \"403d0684-93af-4489-b3b8-6c36544bdddf\") " pod="calico-system/calico-typha-54d999b7d5-6tx9r" Apr 28 02:49:00.857912 systemd[1]: Created slice kubepods-besteffort-pod6552ae9a_4092_41fb_87e0_be803f0b702b.slice - libcontainer container kubepods-besteffort-pod6552ae9a_4092_41fb_87e0_be803f0b702b.slice. 
Apr 28 02:49:00.953398 kubelet[2688]: I0428 02:49:00.953064 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-sys-fs\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.953398 kubelet[2688]: I0428 02:49:00.953126 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-xtables-lock\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.953398 kubelet[2688]: I0428 02:49:00.953221 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-nodeproc\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.953398 kubelet[2688]: I0428 02:49:00.953275 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-var-run-calico\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.953398 kubelet[2688]: I0428 02:49:00.953322 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-var-lib-calico\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.953922 kubelet[2688]: I0428 02:49:00.953348 2688 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-lib-modules\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.955880 kubelet[2688]: I0428 02:49:00.954422 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-bpffs\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.955880 kubelet[2688]: I0428 02:49:00.954568 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6552ae9a-4092-41fb-87e0-be803f0b702b-node-certs\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.955880 kubelet[2688]: I0428 02:49:00.954607 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-policysync\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.955880 kubelet[2688]: I0428 02:49:00.954671 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjm8g\" (UniqueName: \"kubernetes.io/projected/6552ae9a-4092-41fb-87e0-be803f0b702b-kube-api-access-mjm8g\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.955880 kubelet[2688]: I0428 02:49:00.954793 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-cni-bin-dir\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.956820 kubelet[2688]: I0428 02:49:00.954908 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-cni-net-dir\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.956820 kubelet[2688]: I0428 02:49:00.954970 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-flexvol-driver-host\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.956820 kubelet[2688]: I0428 02:49:00.955005 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6552ae9a-4092-41fb-87e0-be803f0b702b-tigera-ca-bundle\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.956820 kubelet[2688]: I0428 02:49:00.955723 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6552ae9a-4092-41fb-87e0-be803f0b702b-cni-log-dir\") pod \"calico-node-clbnm\" (UID: \"6552ae9a-4092-41fb-87e0-be803f0b702b\") " pod="calico-system/calico-node-clbnm" Apr 28 02:49:00.963512 kubelet[2688]: E0428 02:49:00.963002 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:00.978421 containerd[1523]: time="2026-04-28T02:49:00.977868875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d999b7d5-6tx9r,Uid:403d0684-93af-4489-b3b8-6c36544bdddf,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:01.059780 kubelet[2688]: I0428 02:49:01.058273 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85kl\" (UniqueName: \"kubernetes.io/projected/b51c0c3e-fb85-4791-a4da-124042c0f74d-kube-api-access-c85kl\") pod \"csi-node-driver-r758r\" (UID: \"b51c0c3e-fb85-4791-a4da-124042c0f74d\") " pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:01.059780 kubelet[2688]: I0428 02:49:01.058684 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b51c0c3e-fb85-4791-a4da-124042c0f74d-registration-dir\") pod \"csi-node-driver-r758r\" (UID: \"b51c0c3e-fb85-4791-a4da-124042c0f74d\") " pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:01.065682 kubelet[2688]: I0428 02:49:01.063875 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b51c0c3e-fb85-4791-a4da-124042c0f74d-kubelet-dir\") pod \"csi-node-driver-r758r\" (UID: \"b51c0c3e-fb85-4791-a4da-124042c0f74d\") " pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:01.065682 kubelet[2688]: I0428 02:49:01.063933 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b51c0c3e-fb85-4791-a4da-124042c0f74d-socket-dir\") pod \"csi-node-driver-r758r\" (UID: \"b51c0c3e-fb85-4791-a4da-124042c0f74d\") " pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:01.065682 
kubelet[2688]: I0428 02:49:01.064015 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b51c0c3e-fb85-4791-a4da-124042c0f74d-varrun\") pod \"csi-node-driver-r758r\" (UID: \"b51c0c3e-fb85-4791-a4da-124042c0f74d\") " pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:01.115888 kubelet[2688]: E0428 02:49:01.115810 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.116049 kubelet[2688]: W0428 02:49:01.115883 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.116115 kubelet[2688]: E0428 02:49:01.115972 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.126242 containerd[1523]: time="2026-04-28T02:49:01.126060979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:01.126242 containerd[1523]: time="2026-04-28T02:49:01.126212043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:01.126545 containerd[1523]: time="2026-04-28T02:49:01.126428314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:01.126954 containerd[1523]: time="2026-04-28T02:49:01.126884363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.165409 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.166572 kubelet[2688]: W0428 02:49:01.165463 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.165499 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.165936 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.166572 kubelet[2688]: W0428 02:49:01.165969 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.166026 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.166403 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.166572 kubelet[2688]: W0428 02:49:01.166418 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.166572 kubelet[2688]: E0428 02:49:01.166469 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.167150 kubelet[2688]: E0428 02:49:01.166813 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.167150 kubelet[2688]: W0428 02:49:01.166828 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.167150 kubelet[2688]: E0428 02:49:01.166846 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.167150 kubelet[2688]: E0428 02:49:01.167133 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.167359 kubelet[2688]: W0428 02:49:01.167164 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.167359 kubelet[2688]: E0428 02:49:01.167180 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.167519 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.168652 kubelet[2688]: W0428 02:49:01.167548 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.167576 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.167923 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.168652 kubelet[2688]: W0428 02:49:01.167938 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.167972 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.168291 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.168652 kubelet[2688]: W0428 02:49:01.168305 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.168652 kubelet[2688]: E0428 02:49:01.168319 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.169694 kubelet[2688]: E0428 02:49:01.168759 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.169694 kubelet[2688]: W0428 02:49:01.168774 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.169694 kubelet[2688]: E0428 02:49:01.168790 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.171378 kubelet[2688]: E0428 02:49:01.170483 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.171378 kubelet[2688]: W0428 02:49:01.170506 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.171378 kubelet[2688]: E0428 02:49:01.170523 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.171378 kubelet[2688]: E0428 02:49:01.170851 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.171378 kubelet[2688]: W0428 02:49:01.170877 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.171378 kubelet[2688]: E0428 02:49:01.170892 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.172590 kubelet[2688]: E0428 02:49:01.171453 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.172590 kubelet[2688]: W0428 02:49:01.171468 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.172590 kubelet[2688]: E0428 02:49:01.171484 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.173393 kubelet[2688]: E0428 02:49:01.173195 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.173393 kubelet[2688]: W0428 02:49:01.173215 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.173393 kubelet[2688]: E0428 02:49:01.173231 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.174715 kubelet[2688]: E0428 02:49:01.174676 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.174805 kubelet[2688]: W0428 02:49:01.174700 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.174805 kubelet[2688]: E0428 02:49:01.174763 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.177639 kubelet[2688]: E0428 02:49:01.175642 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.177639 kubelet[2688]: W0428 02:49:01.175670 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.177639 kubelet[2688]: E0428 02:49:01.175687 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.177812 containerd[1523]: time="2026-04-28T02:49:01.177390193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clbnm,Uid:6552ae9a-4092-41fb-87e0-be803f0b702b,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:01.178258 kubelet[2688]: E0428 02:49:01.178227 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.178258 kubelet[2688]: W0428 02:49:01.178250 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.178385 kubelet[2688]: E0428 02:49:01.178268 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.179289 kubelet[2688]: E0428 02:49:01.179264 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.179289 kubelet[2688]: W0428 02:49:01.179284 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.179486 kubelet[2688]: E0428 02:49:01.179299 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.180992 kubelet[2688]: E0428 02:49:01.180959 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.180992 kubelet[2688]: W0428 02:49:01.180989 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.181187 kubelet[2688]: E0428 02:49:01.181006 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.182742 kubelet[2688]: E0428 02:49:01.182714 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.182742 kubelet[2688]: W0428 02:49:01.182737 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.182903 kubelet[2688]: E0428 02:49:01.182754 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.183846 kubelet[2688]: E0428 02:49:01.183804 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.184326 kubelet[2688]: W0428 02:49:01.184125 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.184326 kubelet[2688]: E0428 02:49:01.184152 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.185730 kubelet[2688]: E0428 02:49:01.185707 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.185730 kubelet[2688]: W0428 02:49:01.185729 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.185893 kubelet[2688]: E0428 02:49:01.185746 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.187357 kubelet[2688]: E0428 02:49:01.187087 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.187357 kubelet[2688]: W0428 02:49:01.187127 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.187357 kubelet[2688]: E0428 02:49:01.187155 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.188099 kubelet[2688]: E0428 02:49:01.187798 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.188099 kubelet[2688]: W0428 02:49:01.187981 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.188099 kubelet[2688]: E0428 02:49:01.188001 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.189875 kubelet[2688]: E0428 02:49:01.189655 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.189875 kubelet[2688]: W0428 02:49:01.189675 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.189875 kubelet[2688]: E0428 02:49:01.189694 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:01.191387 kubelet[2688]: E0428 02:49:01.191159 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.191387 kubelet[2688]: W0428 02:49:01.191180 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.191387 kubelet[2688]: E0428 02:49:01.191196 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.218362 kubelet[2688]: E0428 02:49:01.218195 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:01.218903 kubelet[2688]: W0428 02:49:01.218798 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:01.218903 kubelet[2688]: E0428 02:49:01.218840 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:01.230863 systemd[1]: Started cri-containerd-a57ab89efda5cd8f3343715033e69c2576ec525d4b52f7fc807b8d53e8ab6106.scope - libcontainer container a57ab89efda5cd8f3343715033e69c2576ec525d4b52f7fc807b8d53e8ab6106. Apr 28 02:49:01.269691 containerd[1523]: time="2026-04-28T02:49:01.269148993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:01.270632 containerd[1523]: time="2026-04-28T02:49:01.269736913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:01.270632 containerd[1523]: time="2026-04-28T02:49:01.270024817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:01.270632 containerd[1523]: time="2026-04-28T02:49:01.270231047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:01.312026 systemd[1]: Started cri-containerd-19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8.scope - libcontainer container 19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8. Apr 28 02:49:01.343258 containerd[1523]: time="2026-04-28T02:49:01.342558523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d999b7d5-6tx9r,Uid:403d0684-93af-4489-b3b8-6c36544bdddf,Namespace:calico-system,Attempt:0,} returns sandbox id \"a57ab89efda5cd8f3343715033e69c2576ec525d4b52f7fc807b8d53e8ab6106\"" Apr 28 02:49:01.347177 containerd[1523]: time="2026-04-28T02:49:01.347099061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 28 02:49:01.368416 containerd[1523]: time="2026-04-28T02:49:01.368344110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clbnm,Uid:6552ae9a-4092-41fb-87e0-be803f0b702b,Namespace:calico-system,Attempt:0,} returns sandbox id \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\"" Apr 28 02:49:02.786948 kubelet[2688]: E0428 02:49:02.786076 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:03.009661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891867197.mount: Deactivated 
successfully. Apr 28 02:49:04.607602 containerd[1523]: time="2026-04-28T02:49:04.607332111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:04.609114 containerd[1523]: time="2026-04-28T02:49:04.608883431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 28 02:49:04.610044 containerd[1523]: time="2026-04-28T02:49:04.609997112Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:04.614979 containerd[1523]: time="2026-04-28T02:49:04.614418984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:04.616121 containerd[1523]: time="2026-04-28T02:49:04.615218245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 3.267887807s" Apr 28 02:49:04.616121 containerd[1523]: time="2026-04-28T02:49:04.615267364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 28 02:49:04.617918 containerd[1523]: time="2026-04-28T02:49:04.617884802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 28 02:49:04.666499 containerd[1523]: time="2026-04-28T02:49:04.666411346Z" level=info msg="CreateContainer within sandbox 
\"a57ab89efda5cd8f3343715033e69c2576ec525d4b52f7fc807b8d53e8ab6106\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 28 02:49:04.689062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229025142.mount: Deactivated successfully. Apr 28 02:49:04.700688 containerd[1523]: time="2026-04-28T02:49:04.700598022Z" level=info msg="CreateContainer within sandbox \"a57ab89efda5cd8f3343715033e69c2576ec525d4b52f7fc807b8d53e8ab6106\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"11cc674baaccfef8e4844fa92539b74fe7738c6a9903d24d7268649bc604ed42\"" Apr 28 02:49:04.703741 containerd[1523]: time="2026-04-28T02:49:04.703695839Z" level=info msg="StartContainer for \"11cc674baaccfef8e4844fa92539b74fe7738c6a9903d24d7268649bc604ed42\"" Apr 28 02:49:04.757851 systemd[1]: Started cri-containerd-11cc674baaccfef8e4844fa92539b74fe7738c6a9903d24d7268649bc604ed42.scope - libcontainer container 11cc674baaccfef8e4844fa92539b74fe7738c6a9903d24d7268649bc604ed42. 
Apr 28 02:49:04.786777 kubelet[2688]: E0428 02:49:04.786672 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:04.852394 containerd[1523]: time="2026-04-28T02:49:04.852324520Z" level=info msg="StartContainer for \"11cc674baaccfef8e4844fa92539b74fe7738c6a9903d24d7268649bc604ed42\" returns successfully" Apr 28 02:49:05.029035 kubelet[2688]: E0428 02:49:05.028773 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.029035 kubelet[2688]: W0428 02:49:05.028828 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.029035 kubelet[2688]: E0428 02:49:05.028879 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.032095 kubelet[2688]: E0428 02:49:05.031867 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.032095 kubelet[2688]: W0428 02:49:05.031909 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.032095 kubelet[2688]: E0428 02:49:05.031930 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.033744 kubelet[2688]: E0428 02:49:05.032298 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.033744 kubelet[2688]: W0428 02:49:05.032313 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.033744 kubelet[2688]: E0428 02:49:05.032328 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.033744 kubelet[2688]: E0428 02:49:05.032794 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.033744 kubelet[2688]: W0428 02:49:05.032809 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.033744 kubelet[2688]: E0428 02:49:05.032837 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.033967 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.041853 kubelet[2688]: W0428 02:49:05.034002 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.034021 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.035907 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.041853 kubelet[2688]: W0428 02:49:05.035924 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.036131 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.036922 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.041853 kubelet[2688]: W0428 02:49:05.036946 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.036961 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.041853 kubelet[2688]: E0428 02:49:05.037338 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.042468 kubelet[2688]: W0428 02:49:05.037354 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.037370 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.039789 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.042468 kubelet[2688]: W0428 02:49:05.039814 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.039833 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.040142 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.042468 kubelet[2688]: W0428 02:49:05.040163 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.040179 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.042468 kubelet[2688]: E0428 02:49:05.040482 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.042468 kubelet[2688]: W0428 02:49:05.040498 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.040513 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.040865 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.044816 kubelet[2688]: W0428 02:49:05.040881 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.040909 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.041380 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.044816 kubelet[2688]: W0428 02:49:05.041397 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.041419 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.042726 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.044816 kubelet[2688]: W0428 02:49:05.042743 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.044816 kubelet[2688]: E0428 02:49:05.042758 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.045365 kubelet[2688]: E0428 02:49:05.043079 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.045365 kubelet[2688]: W0428 02:49:05.043094 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.045365 kubelet[2688]: E0428 02:49:05.043109 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.100136 kubelet[2688]: E0428 02:49:05.100098 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.100558 kubelet[2688]: W0428 02:49:05.100380 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.100558 kubelet[2688]: E0428 02:49:05.100412 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.101704 kubelet[2688]: E0428 02:49:05.101073 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.101704 kubelet[2688]: W0428 02:49:05.101092 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.101704 kubelet[2688]: E0428 02:49:05.101108 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.102382 kubelet[2688]: E0428 02:49:05.102259 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.102382 kubelet[2688]: W0428 02:49:05.102278 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.102382 kubelet[2688]: E0428 02:49:05.102295 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.103583 kubelet[2688]: E0428 02:49:05.103315 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.103583 kubelet[2688]: W0428 02:49:05.103336 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.103583 kubelet[2688]: E0428 02:49:05.103352 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.104199 kubelet[2688]: E0428 02:49:05.104032 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.104199 kubelet[2688]: W0428 02:49:05.104054 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.104199 kubelet[2688]: E0428 02:49:05.104071 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.105215 kubelet[2688]: E0428 02:49:05.105018 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.105215 kubelet[2688]: W0428 02:49:05.105049 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.105215 kubelet[2688]: E0428 02:49:05.105067 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.106843 kubelet[2688]: E0428 02:49:05.106715 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.106843 kubelet[2688]: W0428 02:49:05.106735 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.106843 kubelet[2688]: E0428 02:49:05.106751 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.108103 kubelet[2688]: E0428 02:49:05.107814 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.108103 kubelet[2688]: W0428 02:49:05.107833 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.108103 kubelet[2688]: E0428 02:49:05.107850 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.108494 kubelet[2688]: E0428 02:49:05.108392 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.108494 kubelet[2688]: W0428 02:49:05.108411 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.108494 kubelet[2688]: E0428 02:49:05.108427 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.109015 kubelet[2688]: E0428 02:49:05.108995 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.109490 kubelet[2688]: W0428 02:49:05.109374 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.109490 kubelet[2688]: E0428 02:49:05.109400 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.110175 kubelet[2688]: E0428 02:49:05.109973 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.110175 kubelet[2688]: W0428 02:49:05.109992 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.110175 kubelet[2688]: E0428 02:49:05.110019 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.110529 kubelet[2688]: E0428 02:49:05.110460 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.110529 kubelet[2688]: W0428 02:49:05.110482 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.110529 kubelet[2688]: E0428 02:49:05.110499 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.112097 kubelet[2688]: E0428 02:49:05.112077 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.112642 kubelet[2688]: W0428 02:49:05.112206 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.112642 kubelet[2688]: E0428 02:49:05.112232 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.113103 kubelet[2688]: E0428 02:49:05.113085 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.113210 kubelet[2688]: W0428 02:49:05.113191 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.113346 kubelet[2688]: E0428 02:49:05.113322 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.118644 kubelet[2688]: E0428 02:49:05.116691 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.118644 kubelet[2688]: W0428 02:49:05.116711 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.118644 kubelet[2688]: E0428 02:49:05.116728 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.119237 kubelet[2688]: E0428 02:49:05.119217 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.119435 kubelet[2688]: W0428 02:49:05.119377 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.119562 kubelet[2688]: E0428 02:49:05.119539 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:05.122293 kubelet[2688]: E0428 02:49:05.122271 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.122569 kubelet[2688]: W0428 02:49:05.122381 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.122569 kubelet[2688]: E0428 02:49:05.122407 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:05.125018 kubelet[2688]: E0428 02:49:05.124905 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:05.125018 kubelet[2688]: W0428 02:49:05.124924 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:05.125018 kubelet[2688]: E0428 02:49:05.124978 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.001574 kubelet[2688]: I0428 02:49:06.001378 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54d999b7d5-6tx9r" podStartSLOduration=2.730707024 podStartE2EDuration="6.00134175s" podCreationTimestamp="2026-04-28 02:49:00 +0000 UTC" firstStartedPulling="2026-04-28 02:49:01.346500212 +0000 UTC m=+21.854414157" lastFinishedPulling="2026-04-28 02:49:04.617134929 +0000 UTC m=+25.125048883" observedRunningTime="2026-04-28 02:49:05.014728989 +0000 UTC m=+25.522643001" watchObservedRunningTime="2026-04-28 02:49:06.00134175 +0000 UTC m=+26.509255704" Apr 28 02:49:06.049585 kubelet[2688]: E0428 02:49:06.049536 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.049905 kubelet[2688]: W0428 02:49:06.049808 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.050053 kubelet[2688]: E0428 02:49:06.050026 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.050632 kubelet[2688]: E0428 02:49:06.050598 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.050773 kubelet[2688]: W0428 02:49:06.050751 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.051121 kubelet[2688]: E0428 02:49:06.050843 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.052031 kubelet[2688]: E0428 02:49:06.051443 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.052031 kubelet[2688]: W0428 02:49:06.051463 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.052031 kubelet[2688]: E0428 02:49:06.051479 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.053085 kubelet[2688]: E0428 02:49:06.053001 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.053085 kubelet[2688]: W0428 02:49:06.053023 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.053452 kubelet[2688]: E0428 02:49:06.053161 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.053760 kubelet[2688]: E0428 02:49:06.053719 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.053927 kubelet[2688]: W0428 02:49:06.053904 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.054356 kubelet[2688]: E0428 02:49:06.054120 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.055913 kubelet[2688]: E0428 02:49:06.055734 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.055913 kubelet[2688]: W0428 02:49:06.055754 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.055913 kubelet[2688]: E0428 02:49:06.055770 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.056334 kubelet[2688]: E0428 02:49:06.056157 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.056334 kubelet[2688]: W0428 02:49:06.056175 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.056334 kubelet[2688]: E0428 02:49:06.056201 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.057212 kubelet[2688]: E0428 02:49:06.056776 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.057212 kubelet[2688]: W0428 02:49:06.056796 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.057212 kubelet[2688]: E0428 02:49:06.056825 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.057853 kubelet[2688]: E0428 02:49:06.057832 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.058073 kubelet[2688]: W0428 02:49:06.058051 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.058795 kubelet[2688]: E0428 02:49:06.058314 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.058994 kubelet[2688]: E0428 02:49:06.058972 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.059705 kubelet[2688]: W0428 02:49:06.059666 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.059787 kubelet[2688]: E0428 02:49:06.059710 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.060488 kubelet[2688]: E0428 02:49:06.060462 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.060488 kubelet[2688]: W0428 02:49:06.060484 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.060596 kubelet[2688]: E0428 02:49:06.060501 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.061130 kubelet[2688]: E0428 02:49:06.061103 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.061130 kubelet[2688]: W0428 02:49:06.061125 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.061240 kubelet[2688]: E0428 02:49:06.061141 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.061773 kubelet[2688]: E0428 02:49:06.061744 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.061773 kubelet[2688]: W0428 02:49:06.061769 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.061900 kubelet[2688]: E0428 02:49:06.061786 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.062663 kubelet[2688]: E0428 02:49:06.062629 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.062663 kubelet[2688]: W0428 02:49:06.062651 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.062818 kubelet[2688]: E0428 02:49:06.062681 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.063365 kubelet[2688]: E0428 02:49:06.063329 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.063440 kubelet[2688]: W0428 02:49:06.063354 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.063440 kubelet[2688]: E0428 02:49:06.063406 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.111947 kubelet[2688]: E0428 02:49:06.111898 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.111947 kubelet[2688]: W0428 02:49:06.111937 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.112207 kubelet[2688]: E0428 02:49:06.111966 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.112962 kubelet[2688]: E0428 02:49:06.112394 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.112962 kubelet[2688]: W0428 02:49:06.112417 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.112962 kubelet[2688]: E0428 02:49:06.112434 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.113336 kubelet[2688]: E0428 02:49:06.113313 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.113336 kubelet[2688]: W0428 02:49:06.113336 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.113443 kubelet[2688]: E0428 02:49:06.113354 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.113858 kubelet[2688]: E0428 02:49:06.113834 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.113858 kubelet[2688]: W0428 02:49:06.113855 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.113950 kubelet[2688]: E0428 02:49:06.113872 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.114594 kubelet[2688]: E0428 02:49:06.114569 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.114594 kubelet[2688]: W0428 02:49:06.114591 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.114803 kubelet[2688]: E0428 02:49:06.114608 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.115675 kubelet[2688]: E0428 02:49:06.115266 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.115675 kubelet[2688]: W0428 02:49:06.115287 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.115675 kubelet[2688]: E0428 02:49:06.115303 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.115874 kubelet[2688]: E0428 02:49:06.115808 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.115874 kubelet[2688]: W0428 02:49:06.115847 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.115874 kubelet[2688]: E0428 02:49:06.115862 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.116413 kubelet[2688]: E0428 02:49:06.116388 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.116413 kubelet[2688]: W0428 02:49:06.116410 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.116535 kubelet[2688]: E0428 02:49:06.116427 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.118490 kubelet[2688]: E0428 02:49:06.117693 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.118790 kubelet[2688]: W0428 02:49:06.117731 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.118790 kubelet[2688]: E0428 02:49:06.118610 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.119156 kubelet[2688]: E0428 02:49:06.118998 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.119156 kubelet[2688]: W0428 02:49:06.119017 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.119156 kubelet[2688]: E0428 02:49:06.119033 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.119593 kubelet[2688]: E0428 02:49:06.119389 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.119593 kubelet[2688]: W0428 02:49:06.119424 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.119593 kubelet[2688]: E0428 02:49:06.119442 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.119900 kubelet[2688]: E0428 02:49:06.119879 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.120012 kubelet[2688]: W0428 02:49:06.119991 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.120160 kubelet[2688]: E0428 02:49:06.120139 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.120731 kubelet[2688]: E0428 02:49:06.120710 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.120953 kubelet[2688]: W0428 02:49:06.120930 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.121068 kubelet[2688]: E0428 02:49:06.121048 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.121707 kubelet[2688]: E0428 02:49:06.121686 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.122270 kubelet[2688]: W0428 02:49:06.122052 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.122270 kubelet[2688]: E0428 02:49:06.122078 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.123537 kubelet[2688]: E0428 02:49:06.122791 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.123537 kubelet[2688]: W0428 02:49:06.122810 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.123537 kubelet[2688]: E0428 02:49:06.122840 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.123908 kubelet[2688]: E0428 02:49:06.123883 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.123908 kubelet[2688]: W0428 02:49:06.123905 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.124013 kubelet[2688]: E0428 02:49:06.123922 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.124562 kubelet[2688]: E0428 02:49:06.124535 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.124562 kubelet[2688]: W0428 02:49:06.124556 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.124706 kubelet[2688]: E0428 02:49:06.124572 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 02:49:06.125016 kubelet[2688]: E0428 02:49:06.124993 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 02:49:06.125016 kubelet[2688]: W0428 02:49:06.125016 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 02:49:06.125143 kubelet[2688]: E0428 02:49:06.125033 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 02:49:06.193171 containerd[1523]: time="2026-04-28T02:49:06.193007004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:06.195355 containerd[1523]: time="2026-04-28T02:49:06.194593592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981" Apr 28 02:49:06.204670 containerd[1523]: time="2026-04-28T02:49:06.204404759Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:06.208942 containerd[1523]: time="2026-04-28T02:49:06.208889827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:06.209701 containerd[1523]: time="2026-04-28T02:49:06.209662366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 1.59172945s" Apr 28 02:49:06.209770 containerd[1523]: time="2026-04-28T02:49:06.209709888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 28 02:49:06.217309 containerd[1523]: time="2026-04-28T02:49:06.217265598Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 28 02:49:06.288797 containerd[1523]: time="2026-04-28T02:49:06.287436503Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46\"" Apr 28 02:49:06.292660 containerd[1523]: time="2026-04-28T02:49:06.289826780Z" level=info msg="StartContainer for \"f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46\"" Apr 28 02:49:06.359887 systemd[1]: Started cri-containerd-f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46.scope - libcontainer container f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46. Apr 28 02:49:06.445303 systemd[1]: cri-containerd-f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46.scope: Deactivated successfully. Apr 28 02:49:06.447264 containerd[1523]: time="2026-04-28T02:49:06.445781015Z" level=info msg="StartContainer for \"f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46\" returns successfully" Apr 28 02:49:06.637193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46-rootfs.mount: Deactivated successfully. 
Apr 28 02:49:06.749178 containerd[1523]: time="2026-04-28T02:49:06.736090462Z" level=info msg="shim disconnected" id=f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46 namespace=k8s.io Apr 28 02:49:06.749178 containerd[1523]: time="2026-04-28T02:49:06.749016968Z" level=warning msg="cleaning up after shim disconnected" id=f3858bd2dcc6aa6c66babbf6d569a51a4b3e927992b5d0ef83818290b44f5e46 namespace=k8s.io Apr 28 02:49:06.749178 containerd[1523]: time="2026-04-28T02:49:06.749041042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:06.786338 kubelet[2688]: E0428 02:49:06.785829 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:06.987419 containerd[1523]: time="2026-04-28T02:49:06.986880590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 28 02:49:08.787839 kubelet[2688]: E0428 02:49:08.785853 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:10.788255 kubelet[2688]: E0428 02:49:10.787540 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:12.788720 kubelet[2688]: E0428 02:49:12.786888 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:14.786368 kubelet[2688]: E0428 02:49:14.786053 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:16.786645 kubelet[2688]: E0428 02:49:16.786225 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:17.165667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194624129.mount: Deactivated successfully. 
Apr 28 02:49:17.233540 containerd[1523]: time="2026-04-28T02:49:17.231928839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404" Apr 28 02:49:17.233540 containerd[1523]: time="2026-04-28T02:49:17.226820884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:17.235462 containerd[1523]: time="2026-04-28T02:49:17.235427217Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:17.238942 containerd[1523]: time="2026-04-28T02:49:17.238896433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:17.239977 containerd[1523]: time="2026-04-28T02:49:17.239929218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 10.252985993s" Apr 28 02:49:17.240062 containerd[1523]: time="2026-04-28T02:49:17.239991865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\"" Apr 28 02:49:17.291769 containerd[1523]: time="2026-04-28T02:49:17.291695279Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 28 02:49:17.319735 containerd[1523]: time="2026-04-28T02:49:17.319425445Z" level=info 
msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29\"" Apr 28 02:49:17.323880 containerd[1523]: time="2026-04-28T02:49:17.320815192Z" level=info msg="StartContainer for \"57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29\"" Apr 28 02:49:17.320934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131709473.mount: Deactivated successfully. Apr 28 02:49:17.393306 systemd[1]: Started cri-containerd-57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29.scope - libcontainer container 57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29. Apr 28 02:49:17.533997 containerd[1523]: time="2026-04-28T02:49:17.533818927Z" level=info msg="StartContainer for \"57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29\" returns successfully" Apr 28 02:49:17.625799 systemd[1]: cri-containerd-57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29.scope: Deactivated successfully. 
Apr 28 02:49:17.664875 containerd[1523]: time="2026-04-28T02:49:17.664733355Z" level=info msg="shim disconnected" id=57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29 namespace=k8s.io Apr 28 02:49:17.664875 containerd[1523]: time="2026-04-28T02:49:17.664861435Z" level=warning msg="cleaning up after shim disconnected" id=57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29 namespace=k8s.io Apr 28 02:49:17.664875 containerd[1523]: time="2026-04-28T02:49:17.664883271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:18.039436 containerd[1523]: time="2026-04-28T02:49:18.038894378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 28 02:49:18.164891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57cb9e2e907cd513b148d71033077eaa414dda531a87b9267ab8ed77885dfa29-rootfs.mount: Deactivated successfully. Apr 28 02:49:18.786029 kubelet[2688]: E0428 02:49:18.785922 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:20.786967 kubelet[2688]: E0428 02:49:20.786697 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:22.749444 containerd[1523]: time="2026-04-28T02:49:22.748181254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:22.751407 containerd[1523]: time="2026-04-28T02:49:22.751344229Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351" Apr 28 02:49:22.753277 containerd[1523]: time="2026-04-28T02:49:22.752859921Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:22.756720 containerd[1523]: time="2026-04-28T02:49:22.756646435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:22.758339 containerd[1523]: time="2026-04-28T02:49:22.757847607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 4.718899933s" Apr 28 02:49:22.758339 containerd[1523]: time="2026-04-28T02:49:22.757894556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\"" Apr 28 02:49:22.763814 containerd[1523]: time="2026-04-28T02:49:22.763733192Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 28 02:49:22.787488 kubelet[2688]: E0428 02:49:22.787016 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:22.796971 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2297643874.mount: Deactivated successfully. Apr 28 02:49:22.801034 containerd[1523]: time="2026-04-28T02:49:22.800976562Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1\"" Apr 28 02:49:22.814170 containerd[1523]: time="2026-04-28T02:49:22.814011686Z" level=info msg="StartContainer for \"192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1\"" Apr 28 02:49:22.882845 systemd[1]: Started cri-containerd-192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1.scope - libcontainer container 192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1. Apr 28 02:49:22.947016 containerd[1523]: time="2026-04-28T02:49:22.946954709Z" level=info msg="StartContainer for \"192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1\" returns successfully" Apr 28 02:49:24.155766 systemd[1]: cri-containerd-192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1.scope: Deactivated successfully. Apr 28 02:49:24.217170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1-rootfs.mount: Deactivated successfully. 
Apr 28 02:49:24.229522 kubelet[2688]: I0428 02:49:24.209171 2688 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 28 02:49:24.230446 containerd[1523]: time="2026-04-28T02:49:24.228351779Z" level=info msg="shim disconnected" id=192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1 namespace=k8s.io Apr 28 02:49:24.230446 containerd[1523]: time="2026-04-28T02:49:24.228450178Z" level=warning msg="cleaning up after shim disconnected" id=192a85619ce9f605b7fd96546528cebc41ff93eae1857fd9cb8ca7701cab71b1 namespace=k8s.io Apr 28 02:49:24.230446 containerd[1523]: time="2026-04-28T02:49:24.228465587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:24.497418 kubelet[2688]: I0428 02:49:24.496691 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f779p\" (UniqueName: \"kubernetes.io/projected/3761dcc7-adab-40a0-94ad-c80888682a66-kube-api-access-f779p\") pod \"goldmane-57885fdd4c-d9nd5\" (UID: \"3761dcc7-adab-40a0-94ad-c80888682a66\") " pod="calico-system/goldmane-57885fdd4c-d9nd5" Apr 28 02:49:24.497418 kubelet[2688]: I0428 02:49:24.496767 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-nginx-config\") pod \"whisker-5c6685bb88-n77bq\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:24.497418 kubelet[2688]: I0428 02:49:24.496820 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3761dcc7-adab-40a0-94ad-c80888682a66-goldmane-key-pair\") pod \"goldmane-57885fdd4c-d9nd5\" (UID: \"3761dcc7-adab-40a0-94ad-c80888682a66\") " pod="calico-system/goldmane-57885fdd4c-d9nd5" Apr 28 02:49:24.497418 kubelet[2688]: I0428 02:49:24.496875 2688 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l565h\" (UniqueName: \"kubernetes.io/projected/53cc779b-07b6-4618-82bd-00d7d06d83e0-kube-api-access-l565h\") pod \"coredns-674b8bbfcf-v8l6s\" (UID: \"53cc779b-07b6-4618-82bd-00d7d06d83e0\") " pod="kube-system/coredns-674b8bbfcf-v8l6s" Apr 28 02:49:24.497418 kubelet[2688]: I0428 02:49:24.496929 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-backend-key-pair\") pod \"whisker-5c6685bb88-n77bq\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:24.501471 kubelet[2688]: I0428 02:49:24.496964 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q8jz\" (UniqueName: \"kubernetes.io/projected/8eb06395-ddec-47db-811d-5529c83facdc-kube-api-access-7q8jz\") pod \"calico-apiserver-6c564fdf9d-fpdb9\" (UID: \"8eb06395-ddec-47db-811d-5529c83facdc\") " pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" Apr 28 02:49:24.501471 kubelet[2688]: I0428 02:49:24.496997 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3761dcc7-adab-40a0-94ad-c80888682a66-goldmane-ca-bundle\") pod \"goldmane-57885fdd4c-d9nd5\" (UID: \"3761dcc7-adab-40a0-94ad-c80888682a66\") " pod="calico-system/goldmane-57885fdd4c-d9nd5" Apr 28 02:49:24.501471 kubelet[2688]: I0428 02:49:24.497044 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3761dcc7-adab-40a0-94ad-c80888682a66-config\") pod \"goldmane-57885fdd4c-d9nd5\" (UID: \"3761dcc7-adab-40a0-94ad-c80888682a66\") " pod="calico-system/goldmane-57885fdd4c-d9nd5" 
Apr 28 02:49:24.501471 kubelet[2688]: I0428 02:49:24.497081 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53cc779b-07b6-4618-82bd-00d7d06d83e0-config-volume\") pod \"coredns-674b8bbfcf-v8l6s\" (UID: \"53cc779b-07b6-4618-82bd-00d7d06d83e0\") " pod="kube-system/coredns-674b8bbfcf-v8l6s" Apr 28 02:49:24.501471 kubelet[2688]: I0428 02:49:24.497109 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-ca-bundle\") pod \"whisker-5c6685bb88-n77bq\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:24.502258 kubelet[2688]: I0428 02:49:24.497142 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjl2\" (UniqueName: \"kubernetes.io/projected/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-kube-api-access-vgjl2\") pod \"whisker-5c6685bb88-n77bq\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:24.502258 kubelet[2688]: I0428 02:49:24.497181 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8eb06395-ddec-47db-811d-5529c83facdc-calico-apiserver-certs\") pod \"calico-apiserver-6c564fdf9d-fpdb9\" (UID: \"8eb06395-ddec-47db-811d-5529c83facdc\") " pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" Apr 28 02:49:24.535597 systemd[1]: Created slice kubepods-besteffort-pod3761dcc7_adab_40a0_94ad_c80888682a66.slice - libcontainer container kubepods-besteffort-pod3761dcc7_adab_40a0_94ad_c80888682a66.slice. 
Apr 28 02:49:24.539509 systemd[1]: Created slice kubepods-besteffort-pod8fe71756_1731_4d53_9ef9_9f0198a1b0e5.slice - libcontainer container kubepods-besteffort-pod8fe71756_1731_4d53_9ef9_9f0198a1b0e5.slice. Apr 28 02:49:24.542552 systemd[1]: Created slice kubepods-burstable-pod4be42816_d109_44ed_99ff_f1618cbf739e.slice - libcontainer container kubepods-burstable-pod4be42816_d109_44ed_99ff_f1618cbf739e.slice. Apr 28 02:49:24.556238 systemd[1]: Created slice kubepods-burstable-pod53cc779b_07b6_4618_82bd_00d7d06d83e0.slice - libcontainer container kubepods-burstable-pod53cc779b_07b6_4618_82bd_00d7d06d83e0.slice. Apr 28 02:49:24.572018 systemd[1]: Created slice kubepods-besteffort-pod8c1c770b_95e1_4efa_8dd8_c75266e36ef1.slice - libcontainer container kubepods-besteffort-pod8c1c770b_95e1_4efa_8dd8_c75266e36ef1.slice. Apr 28 02:49:24.596763 systemd[1]: Created slice kubepods-besteffort-pod5ea8300d_6e47_4707_97ba_70635cc935f5.slice - libcontainer container kubepods-besteffort-pod5ea8300d_6e47_4707_97ba_70635cc935f5.slice. 
Apr 28 02:49:24.608638 kubelet[2688]: I0428 02:49:24.608436 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25tw\" (UniqueName: \"kubernetes.io/projected/8fe71756-1731-4d53-9ef9-9f0198a1b0e5-kube-api-access-r25tw\") pod \"calico-apiserver-6c564fdf9d-86vnc\" (UID: \"8fe71756-1731-4d53-9ef9-9f0198a1b0e5\") " pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" Apr 28 02:49:24.615165 kubelet[2688]: I0428 02:49:24.608600 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea8300d-6e47-4707-97ba-70635cc935f5-tigera-ca-bundle\") pod \"calico-kube-controllers-55d9f7668-qwcld\" (UID: \"5ea8300d-6e47-4707-97ba-70635cc935f5\") " pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" Apr 28 02:49:24.615165 kubelet[2688]: I0428 02:49:24.612969 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4be42816-d109-44ed-99ff-f1618cbf739e-config-volume\") pod \"coredns-674b8bbfcf-ndml9\" (UID: \"4be42816-d109-44ed-99ff-f1618cbf739e\") " pod="kube-system/coredns-674b8bbfcf-ndml9" Apr 28 02:49:24.615165 kubelet[2688]: I0428 02:49:24.613012 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rglv\" (UniqueName: \"kubernetes.io/projected/4be42816-d109-44ed-99ff-f1618cbf739e-kube-api-access-8rglv\") pod \"coredns-674b8bbfcf-ndml9\" (UID: \"4be42816-d109-44ed-99ff-f1618cbf739e\") " pod="kube-system/coredns-674b8bbfcf-ndml9" Apr 28 02:49:24.615165 kubelet[2688]: I0428 02:49:24.613122 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fe71756-1731-4d53-9ef9-9f0198a1b0e5-calico-apiserver-certs\") pod 
\"calico-apiserver-6c564fdf9d-86vnc\" (UID: \"8fe71756-1731-4d53-9ef9-9f0198a1b0e5\") " pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" Apr 28 02:49:24.615165 kubelet[2688]: I0428 02:49:24.613272 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm52t\" (UniqueName: \"kubernetes.io/projected/5ea8300d-6e47-4707-97ba-70635cc935f5-kube-api-access-zm52t\") pod \"calico-kube-controllers-55d9f7668-qwcld\" (UID: \"5ea8300d-6e47-4707-97ba-70635cc935f5\") " pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" Apr 28 02:49:24.698060 systemd[1]: Created slice kubepods-besteffort-pod8eb06395_ddec_47db_811d_5529c83facdc.slice - libcontainer container kubepods-besteffort-pod8eb06395_ddec_47db_811d_5529c83facdc.slice. Apr 28 02:49:24.723571 containerd[1523]: time="2026-04-28T02:49:24.722584504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-fpdb9,Uid:8eb06395-ddec-47db-811d-5529c83facdc,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:24.806626 systemd[1]: Created slice kubepods-besteffort-podb51c0c3e_fb85_4791_a4da_124042c0f74d.slice - libcontainer container kubepods-besteffort-podb51c0c3e_fb85_4791_a4da_124042c0f74d.slice. 
Apr 28 02:49:24.821243 containerd[1523]: time="2026-04-28T02:49:24.820736265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r758r,Uid:b51c0c3e-fb85-4791-a4da-124042c0f74d,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:24.858059 containerd[1523]: time="2026-04-28T02:49:24.858004783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-86vnc,Uid:8fe71756-1731-4d53-9ef9-9f0198a1b0e5,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:24.858533 containerd[1523]: time="2026-04-28T02:49:24.858331468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-d9nd5,Uid:3761dcc7-adab-40a0-94ad-c80888682a66,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:24.858654 containerd[1523]: time="2026-04-28T02:49:24.858589501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ndml9,Uid:4be42816-d109-44ed-99ff-f1618cbf739e,Namespace:kube-system,Attempt:0,}" Apr 28 02:49:24.870583 containerd[1523]: time="2026-04-28T02:49:24.867995553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8l6s,Uid:53cc779b-07b6-4618-82bd-00d7d06d83e0,Namespace:kube-system,Attempt:0,}" Apr 28 02:49:24.889827 containerd[1523]: time="2026-04-28T02:49:24.889549111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6685bb88-n77bq,Uid:8c1c770b-95e1-4efa-8dd8-c75266e36ef1,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:24.963777 containerd[1523]: time="2026-04-28T02:49:24.963707228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d9f7668-qwcld,Uid:5ea8300d-6e47-4707-97ba-70635cc935f5,Namespace:calico-system,Attempt:0,}" Apr 28 02:49:25.147763 containerd[1523]: time="2026-04-28T02:49:25.147253250Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 28 02:49:25.197279 
containerd[1523]: time="2026-04-28T02:49:25.197174223Z" level=info msg="CreateContainer within sandbox \"19388703c8591eddf9ac59e4866cf82175b69ee80970a86d8902a77831321ef8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5\"" Apr 28 02:49:25.205439 containerd[1523]: time="2026-04-28T02:49:25.205388850Z" level=info msg="StartContainer for \"767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5\"" Apr 28 02:49:25.403125 systemd[1]: run-containerd-runc-k8s.io-767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5-runc.JlkZc2.mount: Deactivated successfully. Apr 28 02:49:25.416017 systemd[1]: Started cri-containerd-767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5.scope - libcontainer container 767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5. Apr 28 02:49:25.591865 containerd[1523]: time="2026-04-28T02:49:25.591566419Z" level=info msg="StartContainer for \"767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5\" returns successfully" Apr 28 02:49:25.616777 containerd[1523]: time="2026-04-28T02:49:25.616667573Z" level=error msg="Failed to destroy network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.620194 containerd[1523]: time="2026-04-28T02:49:25.620141334Z" level=error msg="encountered an error cleaning up failed sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.620286 containerd[1523]: 
time="2026-04-28T02:49:25.620249312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ndml9,Uid:4be42816-d109-44ed-99ff-f1618cbf739e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.624560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84-shm.mount: Deactivated successfully. Apr 28 02:49:25.629864 kubelet[2688]: E0428 02:49:25.629650 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.636342 kubelet[2688]: E0428 02:49:25.634684 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ndml9" Apr 28 02:49:25.636342 kubelet[2688]: E0428 02:49:25.635980 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ndml9" Apr 28 02:49:25.636342 kubelet[2688]: E0428 02:49:25.636272 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ndml9_kube-system(4be42816-d109-44ed-99ff-f1618cbf739e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ndml9_kube-system(4be42816-d109-44ed-99ff-f1618cbf739e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ndml9" podUID="4be42816-d109-44ed-99ff-f1618cbf739e" Apr 28 02:49:25.667962 containerd[1523]: time="2026-04-28T02:49:25.666471300Z" level=error msg="Failed to destroy network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.669172 containerd[1523]: time="2026-04-28T02:49:25.669105443Z" level=error msg="encountered an error cleaning up failed sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.669636 containerd[1523]: time="2026-04-28T02:49:25.669578575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-fpdb9,Uid:8eb06395-ddec-47db-811d-5529c83facdc,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.670823 kubelet[2688]: E0428 02:49:25.670682 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.670823 kubelet[2688]: E0428 02:49:25.670789 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" Apr 28 02:49:25.671431 kubelet[2688]: E0428 02:49:25.670864 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" Apr 28 02:49:25.671431 kubelet[2688]: E0428 02:49:25.671017 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c564fdf9d-fpdb9_calico-system(8eb06395-ddec-47db-811d-5529c83facdc)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c564fdf9d-fpdb9_calico-system(8eb06395-ddec-47db-811d-5529c83facdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" podUID="8eb06395-ddec-47db-811d-5529c83facdc" Apr 28 02:49:25.755695 containerd[1523]: time="2026-04-28T02:49:25.755483450Z" level=error msg="Failed to destroy network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.757141 containerd[1523]: time="2026-04-28T02:49:25.756765759Z" level=error msg="encountered an error cleaning up failed sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.757141 containerd[1523]: time="2026-04-28T02:49:25.756864814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6685bb88-n77bq,Uid:8c1c770b-95e1-4efa-8dd8-c75266e36ef1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.757537 
kubelet[2688]: E0428 02:49:25.757265 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.757537 kubelet[2688]: E0428 02:49:25.757390 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:25.757537 kubelet[2688]: E0428 02:49:25.757440 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6685bb88-n77bq" Apr 28 02:49:25.758383 kubelet[2688]: E0428 02:49:25.757581 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c6685bb88-n77bq_calico-system(8c1c770b-95e1-4efa-8dd8-c75266e36ef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c6685bb88-n77bq_calico-system(8c1c770b-95e1-4efa-8dd8-c75266e36ef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c6685bb88-n77bq" podUID="8c1c770b-95e1-4efa-8dd8-c75266e36ef1" Apr 28 02:49:25.767730 containerd[1523]: time="2026-04-28T02:49:25.767489765Z" level=error msg="Failed to destroy network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.767730 containerd[1523]: time="2026-04-28T02:49:25.767512580Z" level=error msg="Failed to destroy network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.768366 containerd[1523]: time="2026-04-28T02:49:25.768175292Z" level=error msg="Failed to destroy network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.769036 containerd[1523]: time="2026-04-28T02:49:25.768934542Z" level=error msg="encountered an error cleaning up failed sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.769036 containerd[1523]: time="2026-04-28T02:49:25.769002507Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-v8l6s,Uid:53cc779b-07b6-4618-82bd-00d7d06d83e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.769636 kubelet[2688]: E0428 02:49:25.769523 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.770187 kubelet[2688]: E0428 02:49:25.769863 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8l6s" Apr 28 02:49:25.770187 kubelet[2688]: E0428 02:49:25.769951 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8l6s" Apr 28 02:49:25.770323 containerd[1523]: time="2026-04-28T02:49:25.770199088Z" level=error msg="encountered an error cleaning up failed sandbox 
\"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.770323 containerd[1523]: time="2026-04-28T02:49:25.770251724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d9f7668-qwcld,Uid:5ea8300d-6e47-4707-97ba-70635cc935f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.771280 kubelet[2688]: E0428 02:49:25.770748 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v8l6s_kube-system(53cc779b-07b6-4618-82bd-00d7d06d83e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v8l6s_kube-system(53cc779b-07b6-4618-82bd-00d7d06d83e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v8l6s" podUID="53cc779b-07b6-4618-82bd-00d7d06d83e0" Apr 28 02:49:25.772180 kubelet[2688]: E0428 02:49:25.771583 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.772180 kubelet[2688]: E0428 02:49:25.771832 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" Apr 28 02:49:25.772180 kubelet[2688]: E0428 02:49:25.771869 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" Apr 28 02:49:25.772729 containerd[1523]: time="2026-04-28T02:49:25.771819842Z" level=error msg="Failed to destroy network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.772729 containerd[1523]: time="2026-04-28T02:49:25.772605415Z" level=error msg="encountered an error cleaning up failed sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.773322 kubelet[2688]: E0428 
02:49:25.771955 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55d9f7668-qwcld_calico-system(5ea8300d-6e47-4707-97ba-70635cc935f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55d9f7668-qwcld_calico-system(5ea8300d-6e47-4707-97ba-70635cc935f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" podUID="5ea8300d-6e47-4707-97ba-70635cc935f5" Apr 28 02:49:25.773667 containerd[1523]: time="2026-04-28T02:49:25.773575799Z" level=error msg="encountered an error cleaning up failed sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.774530 containerd[1523]: time="2026-04-28T02:49:25.773933109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-86vnc,Uid:8fe71756-1731-4d53-9ef9-9f0198a1b0e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.774530 containerd[1523]: time="2026-04-28T02:49:25.773852081Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-r758r,Uid:b51c0c3e-fb85-4791-a4da-124042c0f74d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.774530 containerd[1523]: time="2026-04-28T02:49:25.774098096Z" level=error msg="Failed to destroy network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.774965 kubelet[2688]: E0428 02:49:25.774443 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.774965 kubelet[2688]: E0428 02:49:25.774502 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:25.774965 kubelet[2688]: E0428 02:49:25.774537 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r758r" Apr 28 02:49:25.774965 kubelet[2688]: E0428 02:49:25.774786 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.775248 kubelet[2688]: E0428 02:49:25.774909 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" Apr 28 02:49:25.775248 kubelet[2688]: E0428 02:49:25.774945 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" Apr 28 02:49:25.775248 kubelet[2688]: E0428 02:49:25.774992 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c564fdf9d-86vnc_calico-system(8fe71756-1731-4d53-9ef9-9f0198a1b0e5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c564fdf9d-86vnc_calico-system(8fe71756-1731-4d53-9ef9-9f0198a1b0e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" podUID="8fe71756-1731-4d53-9ef9-9f0198a1b0e5" Apr 28 02:49:25.775439 kubelet[2688]: E0428 02:49:25.774597 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r758r_calico-system(b51c0c3e-fb85-4791-a4da-124042c0f74d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r758r_calico-system(b51c0c3e-fb85-4791-a4da-124042c0f74d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r758r" podUID="b51c0c3e-fb85-4791-a4da-124042c0f74d" Apr 28 02:49:25.776426 containerd[1523]: time="2026-04-28T02:49:25.776115270Z" level=error msg="encountered an error cleaning up failed sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.776426 containerd[1523]: time="2026-04-28T02:49:25.776272388Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-57885fdd4c-d9nd5,Uid:3761dcc7-adab-40a0-94ad-c80888682a66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.776718 kubelet[2688]: E0428 02:49:25.776609 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 02:49:25.776718 kubelet[2688]: E0428 02:49:25.776686 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-d9nd5" Apr 28 02:49:25.776933 kubelet[2688]: E0428 02:49:25.776721 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-d9nd5" Apr 28 02:49:25.776933 kubelet[2688]: E0428 02:49:25.776790 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-57885fdd4c-d9nd5_calico-system(3761dcc7-adab-40a0-94ad-c80888682a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-57885fdd4c-d9nd5_calico-system(3761dcc7-adab-40a0-94ad-c80888682a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-57885fdd4c-d9nd5" podUID="3761dcc7-adab-40a0-94ad-c80888682a66" Apr 28 02:49:26.107968 kubelet[2688]: I0428 02:49:26.107787 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:26.120298 kubelet[2688]: I0428 02:49:26.119514 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:26.149990 containerd[1523]: time="2026-04-28T02:49:26.148611474Z" level=info msg="StopPodSandbox for \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\"" Apr 28 02:49:26.152661 containerd[1523]: time="2026-04-28T02:49:26.150588112Z" level=info msg="StopPodSandbox for \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\"" Apr 28 02:49:26.157498 containerd[1523]: time="2026-04-28T02:49:26.157458941Z" level=info msg="Ensure that sandbox 7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84 in task-service has been cleanup successfully" Apr 28 02:49:26.165712 containerd[1523]: time="2026-04-28T02:49:26.165671211Z" level=info msg="Ensure that sandbox 94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8 in task-service has been cleanup successfully" Apr 28 02:49:26.207211 kubelet[2688]: I0428 02:49:26.207159 2688 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:26.221298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8-shm.mount: Deactivated successfully. Apr 28 02:49:26.221501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6-shm.mount: Deactivated successfully. Apr 28 02:49:26.221699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613-shm.mount: Deactivated successfully. Apr 28 02:49:26.222202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2-shm.mount: Deactivated successfully. Apr 28 02:49:26.222421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9-shm.mount: Deactivated successfully. Apr 28 02:49:26.224217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527-shm.mount: Deactivated successfully. Apr 28 02:49:26.224357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264-shm.mount: Deactivated successfully. 
Apr 28 02:49:26.232802 containerd[1523]: time="2026-04-28T02:49:26.231607630Z" level=info msg="StopPodSandbox for \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\"" Apr 28 02:49:26.235982 containerd[1523]: time="2026-04-28T02:49:26.233963165Z" level=info msg="Ensure that sandbox e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613 in task-service has been cleanup successfully" Apr 28 02:49:26.251961 kubelet[2688]: I0428 02:49:26.251912 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:26.256493 containerd[1523]: time="2026-04-28T02:49:26.256276602Z" level=info msg="StopPodSandbox for \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\"" Apr 28 02:49:26.258862 containerd[1523]: time="2026-04-28T02:49:26.258427898Z" level=info msg="Ensure that sandbox 24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2 in task-service has been cleanup successfully" Apr 28 02:49:26.322094 kubelet[2688]: I0428 02:49:26.322029 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:26.328898 containerd[1523]: time="2026-04-28T02:49:26.328750549Z" level=info msg="StopPodSandbox for \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\"" Apr 28 02:49:26.347938 containerd[1523]: time="2026-04-28T02:49:26.345552965Z" level=info msg="Ensure that sandbox 3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6 in task-service has been cleanup successfully" Apr 28 02:49:26.348661 kubelet[2688]: I0428 02:49:26.348486 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:26.360391 containerd[1523]: time="2026-04-28T02:49:26.359820078Z" level=info msg="StopPodSandbox for 
\"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\"" Apr 28 02:49:26.363776 containerd[1523]: time="2026-04-28T02:49:26.361399951Z" level=info msg="Ensure that sandbox e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9 in task-service has been cleanup successfully" Apr 28 02:49:26.420181 kubelet[2688]: I0428 02:49:26.419976 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:26.437951 containerd[1523]: time="2026-04-28T02:49:26.435020973Z" level=info msg="StopPodSandbox for \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\"" Apr 28 02:49:26.437951 containerd[1523]: time="2026-04-28T02:49:26.435328850Z" level=info msg="Ensure that sandbox 7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527 in task-service has been cleanup successfully" Apr 28 02:49:26.453602 kubelet[2688]: I0428 02:49:26.453554 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:26.468934 containerd[1523]: time="2026-04-28T02:49:26.468862856Z" level=info msg="StopPodSandbox for \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\"" Apr 28 02:49:26.470905 containerd[1523]: time="2026-04-28T02:49:26.470872459Z" level=info msg="Ensure that sandbox 3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264 in task-service has been cleanup successfully" Apr 28 02:49:26.620768 kubelet[2688]: I0428 02:49:26.596344 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-clbnm" podStartSLOduration=5.190072951 podStartE2EDuration="26.57884757s" podCreationTimestamp="2026-04-28 02:49:00 +0000 UTC" firstStartedPulling="2026-04-28 02:49:01.371110203 +0000 UTC m=+21.879024146" lastFinishedPulling="2026-04-28 02:49:22.759884809 +0000 UTC m=+43.267798765" 
observedRunningTime="2026-04-28 02:49:26.19376367 +0000 UTC m=+46.701677637" watchObservedRunningTime="2026-04-28 02:49:26.57884757 +0000 UTC m=+47.086761527" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.084 [INFO][3921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.089 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" iface="eth0" netns="/var/run/netns/cni-27e074ac-d5da-f8ee-d3d6-83852deb24c2" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.092 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" iface="eth0" netns="/var/run/netns/cni-27e074ac-d5da-f8ee-d3d6-83852deb24c2" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.096 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" iface="eth0" netns="/var/run/netns/cni-27e074ac-d5da-f8ee-d3d6-83852deb24c2" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.096 [INFO][3921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.097 [INFO][3921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.407 [INFO][3991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.407 [INFO][3991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.407 [INFO][3991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.433 [WARNING][3991] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.433 [INFO][3991] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.436 [INFO][3991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.450657 containerd[1523]: 2026-04-28 02:49:27.440 [INFO][3921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:27.450657 containerd[1523]: time="2026-04-28T02:49:27.447758714Z" level=info msg="TearDown network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" successfully" Apr 28 02:49:27.450657 containerd[1523]: time="2026-04-28T02:49:27.447804141Z" level=info msg="StopPodSandbox for \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" returns successfully" Apr 28 02:49:27.453919 systemd[1]: run-netns-cni\x2d27e074ac\x2dd5da\x2df8ee\x2dd3d6\x2d83852deb24c2.mount: Deactivated successfully. 
Apr 28 02:49:27.471967 containerd[1523]: time="2026-04-28T02:49:27.471910778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-fpdb9,Uid:8eb06395-ddec-47db-811d-5529c83facdc,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.985 [INFO][3840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.986 [INFO][3840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" iface="eth0" netns="/var/run/netns/cni-1e76aca4-35e3-26e0-097a-4fca173f9812" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.991 [INFO][3840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" iface="eth0" netns="/var/run/netns/cni-1e76aca4-35e3-26e0-097a-4fca173f9812" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.995 [INFO][3840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" iface="eth0" netns="/var/run/netns/cni-1e76aca4-35e3-26e0-097a-4fca173f9812" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.996 [INFO][3840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:26.996 [INFO][3840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.408 [INFO][3960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.408 [INFO][3960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.438 [INFO][3960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.470 [WARNING][3960] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.470 [INFO][3960] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.476 [INFO][3960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.497699 containerd[1523]: 2026-04-28 02:49:27.489 [INFO][3840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:27.501159 systemd[1]: run-netns-cni\x2d1e76aca4\x2d35e3\x2d26e0\x2d097a\x2d4fca173f9812.mount: Deactivated successfully. 
Apr 28 02:49:27.504842 containerd[1523]: time="2026-04-28T02:49:27.504161773Z" level=info msg="TearDown network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" successfully" Apr 28 02:49:27.504842 containerd[1523]: time="2026-04-28T02:49:27.504201348Z" level=info msg="StopPodSandbox for \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" returns successfully" Apr 28 02:49:27.507805 containerd[1523]: time="2026-04-28T02:49:27.506583248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d9f7668-qwcld,Uid:5ea8300d-6e47-4707-97ba-70635cc935f5,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.974 [INFO][3825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.975 [INFO][3825] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" iface="eth0" netns="/var/run/netns/cni-2524be5c-7798-2be2-9f6c-436f76711251" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.978 [INFO][3825] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" iface="eth0" netns="/var/run/netns/cni-2524be5c-7798-2be2-9f6c-436f76711251" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.979 [INFO][3825] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" iface="eth0" netns="/var/run/netns/cni-2524be5c-7798-2be2-9f6c-436f76711251" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.979 [INFO][3825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:26.979 [INFO][3825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.409 [INFO][3957] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.409 [INFO][3957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.479 [INFO][3957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.536 [WARNING][3957] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.540 [INFO][3957] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.555 [INFO][3957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.591519 containerd[1523]: 2026-04-28 02:49:27.576 [INFO][3825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:27.594891 containerd[1523]: time="2026-04-28T02:49:27.592488251Z" level=info msg="TearDown network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" successfully" Apr 28 02:49:27.594891 containerd[1523]: time="2026-04-28T02:49:27.592527417Z" level=info msg="StopPodSandbox for \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" returns successfully" Apr 28 02:49:27.594891 containerd[1523]: time="2026-04-28T02:49:27.594434933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ndml9,Uid:4be42816-d109-44ed-99ff-f1618cbf739e,Namespace:kube-system,Attempt:1,}" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.053 [INFO][3849] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.053 [INFO][3849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" iface="eth0" netns="/var/run/netns/cni-16e55e26-c4a9-b2f4-5772-6a30b353a783" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.055 [INFO][3849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" iface="eth0" netns="/var/run/netns/cni-16e55e26-c4a9-b2f4-5772-6a30b353a783" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.058 [INFO][3849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" iface="eth0" netns="/var/run/netns/cni-16e55e26-c4a9-b2f4-5772-6a30b353a783" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.058 [INFO][3849] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.058 [INFO][3849] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.409 [INFO][3978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.414 [INFO][3978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.554 [INFO][3978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.610 [WARNING][3978] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.611 [INFO][3978] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.619 [INFO][3978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.643062 containerd[1523]: 2026-04-28 02:49:27.629 [INFO][3849] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:27.644279 containerd[1523]: time="2026-04-28T02:49:27.644069673Z" level=info msg="TearDown network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" successfully" Apr 28 02:49:27.644279 containerd[1523]: time="2026-04-28T02:49:27.644131477Z" level=info msg="StopPodSandbox for \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" returns successfully" Apr 28 02:49:27.648571 containerd[1523]: time="2026-04-28T02:49:27.648106265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8l6s,Uid:53cc779b-07b6-4618-82bd-00d7d06d83e0,Namespace:kube-system,Attempt:1,}" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.049 [INFO][3919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.054 [INFO][3919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" iface="eth0" netns="/var/run/netns/cni-04f67ac8-7f57-92db-faaa-4154d7f6fc2e" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.055 [INFO][3919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" iface="eth0" netns="/var/run/netns/cni-04f67ac8-7f57-92db-faaa-4154d7f6fc2e" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.056 [INFO][3919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" iface="eth0" netns="/var/run/netns/cni-04f67ac8-7f57-92db-faaa-4154d7f6fc2e" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.056 [INFO][3919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.056 [INFO][3919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.407 [INFO][3976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.414 [INFO][3976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.620 [INFO][3976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.646 [WARNING][3976] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.646 [INFO][3976] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.651 [INFO][3976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.729441 containerd[1523]: 2026-04-28 02:49:27.680 [INFO][3919] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:27.731429 containerd[1523]: time="2026-04-28T02:49:27.729542120Z" level=info msg="TearDown network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" successfully" Apr 28 02:49:27.731429 containerd[1523]: time="2026-04-28T02:49:27.729588921Z" level=info msg="StopPodSandbox for \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" returns successfully" Apr 28 02:49:27.731429 containerd[1523]: time="2026-04-28T02:49:27.731369734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r758r,Uid:b51c0c3e-fb85-4791-a4da-124042c0f74d,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.089 [INFO][3890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.090 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" iface="eth0" netns="/var/run/netns/cni-7ab65a2c-8383-dca7-14c6-fc2624dd84d7" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.091 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" iface="eth0" netns="/var/run/netns/cni-7ab65a2c-8383-dca7-14c6-fc2624dd84d7" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.092 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" iface="eth0" netns="/var/run/netns/cni-7ab65a2c-8383-dca7-14c6-fc2624dd84d7" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.092 [INFO][3890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.092 [INFO][3890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.415 [INFO][3989] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.416 [INFO][3989] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.656 [INFO][3989] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.688 [WARNING][3989] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.688 [INFO][3989] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.697 [INFO][3989] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.740747 containerd[1523]: 2026-04-28 02:49:27.724 [INFO][3890] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:27.743403 containerd[1523]: time="2026-04-28T02:49:27.741048730Z" level=info msg="TearDown network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" successfully" Apr 28 02:49:27.743403 containerd[1523]: time="2026-04-28T02:49:27.741093965Z" level=info msg="StopPodSandbox for \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" returns successfully" Apr 28 02:49:27.743403 containerd[1523]: time="2026-04-28T02:49:27.741949904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-86vnc,Uid:8fe71756-1731-4d53-9ef9-9f0198a1b0e5,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.006 [INFO][3864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.007 [INFO][3864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" iface="eth0" netns="/var/run/netns/cni-b2ca73d3-93c1-827a-f7af-5fd138b5ecad" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.009 [INFO][3864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" iface="eth0" netns="/var/run/netns/cni-b2ca73d3-93c1-827a-f7af-5fd138b5ecad" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.009 [INFO][3864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" iface="eth0" netns="/var/run/netns/cni-b2ca73d3-93c1-827a-f7af-5fd138b5ecad" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.009 [INFO][3864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.010 [INFO][3864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.417 [INFO][3962] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.417 [INFO][3962] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.698 [INFO][3962] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.759 [WARNING][3962] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.759 [INFO][3962] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.761 [INFO][3962] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.818516 containerd[1523]: 2026-04-28 02:49:27.781 [INFO][3864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:27.820547 containerd[1523]: time="2026-04-28T02:49:27.818511894Z" level=info msg="TearDown network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" successfully" Apr 28 02:49:27.820547 containerd[1523]: time="2026-04-28T02:49:27.818557682Z" level=info msg="StopPodSandbox for \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" returns successfully" Apr 28 02:49:27.820547 containerd[1523]: time="2026-04-28T02:49:27.820231357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-d9nd5,Uid:3761dcc7-adab-40a0-94ad-c80888682a66,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.030 [INFO][3915] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.031 [INFO][3915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" iface="eth0" netns="/var/run/netns/cni-571df37e-1f4c-ce24-5686-fccd8db2f83f" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.032 [INFO][3915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" iface="eth0" netns="/var/run/netns/cni-571df37e-1f4c-ce24-5686-fccd8db2f83f" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.043 [INFO][3915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" iface="eth0" netns="/var/run/netns/cni-571df37e-1f4c-ce24-5686-fccd8db2f83f" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.043 [INFO][3915] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.045 [INFO][3915] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.415 [INFO][3974] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.415 [INFO][3974] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.761 [INFO][3974] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.800 [WARNING][3974] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.800 [INFO][3974] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.803 [INFO][3974] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:27.831838 containerd[1523]: 2026-04-28 02:49:27.826 [INFO][3915] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:27.833352 containerd[1523]: time="2026-04-28T02:49:27.832132610Z" level=info msg="TearDown network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" successfully" Apr 28 02:49:27.833352 containerd[1523]: time="2026-04-28T02:49:27.832165358Z" level=info msg="StopPodSandbox for \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" returns successfully" Apr 28 02:49:27.836323 containerd[1523]: time="2026-04-28T02:49:27.834008551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6685bb88-n77bq,Uid:8c1c770b-95e1-4efa-8dd8-c75266e36ef1,Namespace:calico-system,Attempt:1,}" Apr 28 02:49:28.222792 systemd-networkd[1442]: calie258a00299b: Link UP Apr 28 02:49:28.225807 systemd-networkd[1442]: calie258a00299b: Gained carrier Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.656 [ERROR][4021] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.721 [INFO][4021] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0 calico-apiserver-6c564fdf9d- calico-system 8eb06395-ddec-47db-811d-5529c83facdc 935 0 2026-04-28 02:48:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c564fdf9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com calico-apiserver-6c564fdf9d-fpdb9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie258a00299b [] [] }} 
ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.723 [INFO][4021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.841 [INFO][4086] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" HandleID="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.873 [INFO][4086] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" HandleID="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ef9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"calico-apiserver-6c564fdf9d-fpdb9", "timestamp":"2026-04-28 02:49:27.841405077 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f42c0)} Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 
02:49:27.873 [INFO][4086] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.874 [INFO][4086] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.874 [INFO][4086] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.896 [INFO][4086] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.917 [INFO][4086] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.951 [INFO][4086] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.975 [INFO][4086] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.988 [INFO][4086] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:27.988 [INFO][4086] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.006 [INFO][4086] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1 Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.037 [INFO][4086] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.14.64/26 handle="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.083 [INFO][4086] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.65/26] block=192.168.14.64/26 handle="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.086 [INFO][4086] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.65/26] handle="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.098 [INFO][4086] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:28.291844 containerd[1523]: 2026-04-28 02:49:28.098 [INFO][4086] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.65/26] IPv6=[] ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" HandleID="k8s-pod-network.84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.298339 containerd[1523]: 2026-04-28 02:49:28.125 [INFO][4021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8eb06395-ddec-47db-811d-5529c83facdc", 
ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6c564fdf9d-fpdb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie258a00299b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.298339 containerd[1523]: 2026-04-28 02:49:28.131 [INFO][4021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.65/32] ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.298339 containerd[1523]: 2026-04-28 02:49:28.131 [INFO][4021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie258a00299b ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.298339 containerd[1523]: 
2026-04-28 02:49:28.239 [INFO][4021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.298339 containerd[1523]: 2026-04-28 02:49:28.241 [INFO][4021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8eb06395-ddec-47db-811d-5529c83facdc", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1", Pod:"calico-apiserver-6c564fdf9d-fpdb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie258a00299b", MAC:"be:27:5c:34:86:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.298339 containerd[1523]: 2026-04-28 02:49:28.269 [INFO][4021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-fpdb9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:28.429775 systemd-networkd[1442]: calia3b5ec31039: Link UP Apr 28 02:49:28.436279 systemd-networkd[1442]: calia3b5ec31039: Gained carrier Apr 28 02:49:28.485512 systemd[1]: run-containerd-runc-k8s.io-767a4cc08cdfb9110d37745351b3f2e4bfd09366ef5b287b7a4b352157b733c5-runc.jVwyb5.mount: Deactivated successfully. 
Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:27.768 [ERROR][4056] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:27.834 [INFO][4056] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0 coredns-674b8bbfcf- kube-system 4be42816-d109-44ed-99ff-f1618cbf739e 928 0 2026-04-28 02:48:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com coredns-674b8bbfcf-ndml9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3b5ec31039 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:27.836 [INFO][4056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.214 [INFO][4100] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" HandleID="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.489001 
containerd[1523]: 2026-04-28 02:49:28.263 [INFO][4100] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" HandleID="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005de360), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-ndml9", "timestamp":"2026-04-28 02:49:28.214474395 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e7b80)} Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.263 [INFO][4100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.272 [INFO][4100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.272 [INFO][4100] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.285 [INFO][4100] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.302 [INFO][4100] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.328 [INFO][4100] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.345 [INFO][4100] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.351 [INFO][4100] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.352 [INFO][4100] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.359 [INFO][4100] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5 Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.373 [INFO][4100] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.388 [INFO][4100] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.66/26] block=192.168.14.64/26 handle="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.389 [INFO][4100] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.66/26] handle="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.389 [INFO][4100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:28.489001 containerd[1523]: 2026-04-28 02:49:28.389 [INFO][4100] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.66/26] IPv6=[] ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" HandleID="k8s-pod-network.8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.495333 containerd[1523]: 2026-04-28 02:49:28.405 [INFO][4056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4be42816-d109-44ed-99ff-f1618cbf739e", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-ndml9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3b5ec31039", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.495333 containerd[1523]: 2026-04-28 02:49:28.405 [INFO][4056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.66/32] ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.495333 containerd[1523]: 2026-04-28 02:49:28.405 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3b5ec31039 ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.495333 containerd[1523]: 
2026-04-28 02:49:28.434 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.495333 containerd[1523]: 2026-04-28 02:49:28.442 [INFO][4056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4be42816-d109-44ed-99ff-f1618cbf739e", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5", Pod:"coredns-674b8bbfcf-ndml9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia3b5ec31039", MAC:"6a:18:cd:56:26:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.495333 containerd[1523]: 2026-04-28 02:49:28.477 [INFO][4056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ndml9" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:28.489352 systemd[1]: run-netns-cni\x2d571df37e\x2d1f4c\x2dce24\x2d5686\x2dfccd8db2f83f.mount: Deactivated successfully. Apr 28 02:49:28.490704 systemd[1]: run-netns-cni\x2d2524be5c\x2d7798\x2d2be2\x2d9f6c\x2d436f76711251.mount: Deactivated successfully. Apr 28 02:49:28.490867 systemd[1]: run-netns-cni\x2d16e55e26\x2dc4a9\x2db2f4\x2d5772\x2d6a30b353a783.mount: Deactivated successfully. Apr 28 02:49:28.490991 systemd[1]: run-netns-cni\x2db2ca73d3\x2d93c1\x2d827a\x2df7af\x2d5fd138b5ecad.mount: Deactivated successfully. Apr 28 02:49:28.491094 systemd[1]: run-netns-cni\x2d7ab65a2c\x2d8383\x2ddca7\x2d14c6\x2dfc2624dd84d7.mount: Deactivated successfully. Apr 28 02:49:28.491232 systemd[1]: run-netns-cni\x2d04f67ac8\x2d7f57\x2d92db\x2dfaaa\x2d4154d7f6fc2e.mount: Deactivated successfully. 
Apr 28 02:49:28.632733 systemd-networkd[1442]: cali79a47156ba6: Link UP Apr 28 02:49:28.635728 systemd-networkd[1442]: cali79a47156ba6: Gained carrier Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:27.970 [ERROR][4040] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.079 [INFO][4040] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0 calico-kube-controllers-55d9f7668- calico-system 5ea8300d-6e47-4707-97ba-70635cc935f5 929 0 2026-04-28 02:49:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55d9f7668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com calico-kube-controllers-55d9f7668-qwcld eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali79a47156ba6 [] [] }} ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.079 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.406 [INFO][4154] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" HandleID="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.449 [INFO][4154] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" HandleID="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000129410), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"calico-kube-controllers-55d9f7668-qwcld", "timestamp":"2026-04-28 02:49:28.406139381 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000278000)} Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.461 [INFO][4154] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.485 [INFO][4154] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.485 [INFO][4154] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.503 [INFO][4154] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.515 [INFO][4154] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.544 [INFO][4154] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.550 [INFO][4154] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.555 [INFO][4154] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.556 [INFO][4154] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.563 [INFO][4154] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471 Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.581 [INFO][4154] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.596 [INFO][4154] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.67/26] block=192.168.14.64/26 handle="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.596 [INFO][4154] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.67/26] handle="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.596 [INFO][4154] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:28.695337 containerd[1523]: 2026-04-28 02:49:28.597 [INFO][4154] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.67/26] IPv6=[] ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" HandleID="k8s-pod-network.fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.607 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0", GenerateName:"calico-kube-controllers-55d9f7668-", Namespace:"calico-system", SelfLink:"", UID:"5ea8300d-6e47-4707-97ba-70635cc935f5", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d9f7668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-55d9f7668-qwcld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79a47156ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.608 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.67/32] ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.608 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79a47156ba6 ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.635 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.639 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0", GenerateName:"calico-kube-controllers-55d9f7668-", Namespace:"calico-system", SelfLink:"", UID:"5ea8300d-6e47-4707-97ba-70635cc935f5", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d9f7668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471", Pod:"calico-kube-controllers-55d9f7668-qwcld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79a47156ba6", MAC:"36:a4:a4:f4:a9:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.696471 containerd[1523]: 2026-04-28 02:49:28.685 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471" Namespace="calico-system" Pod="calico-kube-controllers-55d9f7668-qwcld" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:28.739463 containerd[1523]: time="2026-04-28T02:49:28.738327075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:28.739463 containerd[1523]: time="2026-04-28T02:49:28.738472762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:28.739463 containerd[1523]: time="2026-04-28T02:49:28.738511783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.739463 containerd[1523]: time="2026-04-28T02:49:28.738727151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.766793 containerd[1523]: time="2026-04-28T02:49:28.766251327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:28.767127 containerd[1523]: time="2026-04-28T02:49:28.766755564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:28.767440 containerd[1523]: time="2026-04-28T02:49:28.767371583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.768309 containerd[1523]: time="2026-04-28T02:49:28.768107974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.823884 systemd[1]: Started cri-containerd-8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5.scope - libcontainer container 8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5. Apr 28 02:49:28.869070 systemd[1]: Started cri-containerd-84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1.scope - libcontainer container 84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1. Apr 28 02:49:28.885215 containerd[1523]: time="2026-04-28T02:49:28.885081930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:28.888646 containerd[1523]: time="2026-04-28T02:49:28.885571077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:28.888646 containerd[1523]: time="2026-04-28T02:49:28.885641915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.888646 containerd[1523]: time="2026-04-28T02:49:28.885990406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:28.893590 systemd-networkd[1442]: calidb431195524: Link UP Apr 28 02:49:28.914969 systemd-networkd[1442]: calidb431195524: Gained carrier Apr 28 02:49:28.957757 systemd[1]: Started cri-containerd-fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471.scope - libcontainer container fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471. Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.283 [ERROR][4101] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.323 [INFO][4101] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0 csi-node-driver- calico-system b51c0c3e-fb85-4791-a4da-124042c0f74d 934 0 2026-04-28 02:49:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:74865c565 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com csi-node-driver-r758r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidb431195524 [] [] }} ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.324 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" 
WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.610 [INFO][4183] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" HandleID="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.657 [INFO][4183] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" HandleID="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102160), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"csi-node-driver-r758r", "timestamp":"2026-04-28 02:49:28.610519691 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005d6dc0)} Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.657 [INFO][4183] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.657 [INFO][4183] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.657 [INFO][4183] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.674 [INFO][4183] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.697 [INFO][4183] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.729 [INFO][4183] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.739 [INFO][4183] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.752 [INFO][4183] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.752 [INFO][4183] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.775 [INFO][4183] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665 Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.821 [INFO][4183] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.873 [INFO][4183] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.68/26] block=192.168.14.64/26 handle="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.876 [INFO][4183] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.68/26] handle="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.877 [INFO][4183] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:28.986827 containerd[1523]: 2026-04-28 02:49:28.877 [INFO][4183] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.68/26] IPv6=[] ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" HandleID="k8s-pod-network.3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:28.988951 containerd[1523]: 2026-04-28 02:49:28.888 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b51c0c3e-fb85-4791-a4da-124042c0f74d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-r758r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb431195524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.988951 containerd[1523]: 2026-04-28 02:49:28.888 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.68/32] ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:28.988951 containerd[1523]: 2026-04-28 02:49:28.888 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb431195524 ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:28.988951 containerd[1523]: 2026-04-28 02:49:28.946 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 
28 02:49:28.988951 containerd[1523]: 2026-04-28 02:49:28.952 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b51c0c3e-fb85-4791-a4da-124042c0f74d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665", Pod:"csi-node-driver-r758r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb431195524", MAC:"82:ab:ef:1c:a2:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:28.988951 
containerd[1523]: 2026-04-28 02:49:28.980 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665" Namespace="calico-system" Pod="csi-node-driver-r758r" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:29.025199 systemd-networkd[1442]: cali2d93a58d64d: Link UP Apr 28 02:49:29.032146 systemd-networkd[1442]: cali2d93a58d64d: Gained carrier Apr 28 02:49:29.079308 containerd[1523]: time="2026-04-28T02:49:29.079237442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ndml9,Uid:4be42816-d109-44ed-99ff-f1618cbf739e,Namespace:kube-system,Attempt:1,} returns sandbox id \"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5\"" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:27.939 [ERROR][4052] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.147 [INFO][4052] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0 coredns-674b8bbfcf- kube-system 53cc779b-07b6-4618-82bd-00d7d06d83e0 933 0 2026-04-28 02:48:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com coredns-674b8bbfcf-v8l6s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d93a58d64d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" 
WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.147 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.628 [INFO][4165] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" HandleID="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.679 [INFO][4165] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" HandleID="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f2450), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-v8l6s", "timestamp":"2026-04-28 02:49:28.628795047 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039a160)} Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.680 [INFO][4165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.878 [INFO][4165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.878 [INFO][4165] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.895 [INFO][4165] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.919 [INFO][4165] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.934 [INFO][4165] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.952 [INFO][4165] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.964 [INFO][4165] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.964 [INFO][4165] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.968 [INFO][4165] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49 Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:28.977 [INFO][4165] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 
handle="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:29.002 [INFO][4165] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.69/26] block=192.168.14.64/26 handle="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:29.002 [INFO][4165] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.69/26] handle="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:29.002 [INFO][4165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:29.098875 containerd[1523]: 2026-04-28 02:49:29.002 [INFO][4165] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.69/26] IPv6=[] ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" HandleID="k8s-pod-network.65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.019 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53cc779b-07b6-4618-82bd-00d7d06d83e0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 
45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-v8l6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d93a58d64d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.019 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.69/32] ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.019 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d93a58d64d ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.039 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.045 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53cc779b-07b6-4618-82bd-00d7d06d83e0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49", Pod:"coredns-674b8bbfcf-v8l6s", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d93a58d64d", MAC:"66:15:03:18:3e:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.110286 containerd[1523]: 2026-04-28 02:49:29.084 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8l6s" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:29.110286 containerd[1523]: time="2026-04-28T02:49:29.109856374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:29.111704 containerd[1523]: time="2026-04-28T02:49:29.109948398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:29.111704 containerd[1523]: time="2026-04-28T02:49:29.109979995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.111704 containerd[1523]: time="2026-04-28T02:49:29.110142733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.114789 containerd[1523]: time="2026-04-28T02:49:29.114327359Z" level=info msg="CreateContainer within sandbox \"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 02:49:29.168085 systemd-networkd[1442]: cali079cff76e9b: Link UP Apr 28 02:49:29.198973 systemd-networkd[1442]: cali079cff76e9b: Gained carrier Apr 28 02:49:29.251703 containerd[1523]: time="2026-04-28T02:49:29.249989677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:29.251703 containerd[1523]: time="2026-04-28T02:49:29.251169852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:29.252512 containerd[1523]: time="2026-04-28T02:49:29.251677205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.261675 containerd[1523]: time="2026-04-28T02:49:29.260908298Z" level=info msg="CreateContainer within sandbox \"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf7afbc039704f590edd4cff3aff730dd81e16a4e064b9fe8a6fb0f028656af3\"" Apr 28 02:49:29.266386 containerd[1523]: time="2026-04-28T02:49:29.266325358Z" level=info msg="StartContainer for \"bf7afbc039704f590edd4cff3aff730dd81e16a4e064b9fe8a6fb0f028656af3\"" Apr 28 02:49:29.268168 systemd[1]: Started cri-containerd-3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665.scope - libcontainer container 3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665. Apr 28 02:49:29.271078 containerd[1523]: time="2026-04-28T02:49:29.270710227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.306 [ERROR][4120] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.353 [INFO][4120] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0 calico-apiserver-6c564fdf9d- calico-system 8fe71756-1731-4d53-9ef9-9f0198a1b0e5 936 0 2026-04-28 02:48:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c564fdf9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com calico-apiserver-6c564fdf9d-86vnc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali079cff76e9b [] [] }} ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.354 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.654 [INFO][4189] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" 
HandleID="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.701 [INFO][4189] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" HandleID="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"calico-apiserver-6c564fdf9d-86vnc", "timestamp":"2026-04-28 02:49:28.654315132 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000442000)} Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:28.701 [INFO][4189] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.003 [INFO][4189] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.009 [INFO][4189] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.019 [INFO][4189] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.040 [INFO][4189] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.057 [INFO][4189] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.068 [INFO][4189] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.075 [INFO][4189] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.076 [INFO][4189] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.086 [INFO][4189] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.103 [INFO][4189] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.132 [INFO][4189] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.70/26] block=192.168.14.64/26 handle="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.133 [INFO][4189] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.70/26] handle="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.134 [INFO][4189] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:29.273166 containerd[1523]: 2026-04-28 02:49:29.134 [INFO][4189] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.70/26] IPv6=[] ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" HandleID="k8s-pod-network.fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.154 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8fe71756-1731-4d53-9ef9-9f0198a1b0e5", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6c564fdf9d-86vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali079cff76e9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.154 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.70/32] ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.154 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali079cff76e9b ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.205 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" 
Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.210 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8fe71756-1731-4d53-9ef9-9f0198a1b0e5", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f", Pod:"calico-apiserver-6c564fdf9d-86vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali079cff76e9b", 
MAC:"fe:b4:c5:d3:56:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.275388 containerd[1523]: 2026-04-28 02:49:29.254 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f" Namespace="calico-system" Pod="calico-apiserver-6c564fdf9d-86vnc" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:29.346160 systemd-networkd[1442]: cali5e75f9d16ec: Link UP Apr 28 02:49:29.349189 systemd-networkd[1442]: cali5e75f9d16ec: Gained carrier Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.398 [ERROR][4130] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.486 [INFO][4130] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0 whisker-5c6685bb88- calico-system 8c1c770b-95e1-4efa-8dd8-c75266e36ef1 932 0 2026-04-28 02:49:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c6685bb88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com whisker-5c6685bb88-n77bq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5e75f9d16ec [] [] }} ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.486 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.736 [INFO][4205] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.779 [INFO][4205] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381de0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"whisker-5c6685bb88-n77bq", "timestamp":"2026-04-28 02:49:28.736686617 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00046b080)} Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:28.779 [INFO][4205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.134 [INFO][4205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.134 [INFO][4205] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.137 [INFO][4205] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.165 [INFO][4205] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.224 [INFO][4205] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.231 [INFO][4205] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.239 [INFO][4205] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.243 [INFO][4205] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.250 [INFO][4205] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527 Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.272 [INFO][4205] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.306 [INFO][4205] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.71/26] block=192.168.14.64/26 handle="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.306 [INFO][4205] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.71/26] handle="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.306 [INFO][4205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:29.399037 containerd[1523]: 2026-04-28 02:49:29.306 [INFO][4205] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.71/26] IPv6=[] ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.328 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0", GenerateName:"whisker-5c6685bb88-", Namespace:"calico-system", SelfLink:"", UID:"8c1c770b-95e1-4efa-8dd8-c75266e36ef1", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6685bb88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5c6685bb88-n77bq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e75f9d16ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.329 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.71/32] ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.329 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e75f9d16ec ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.357 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.358 [INFO][4130] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0", GenerateName:"whisker-5c6685bb88-", Namespace:"calico-system", SelfLink:"", UID:"8c1c770b-95e1-4efa-8dd8-c75266e36ef1", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6685bb88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527", Pod:"whisker-5c6685bb88-n77bq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e75f9d16ec", MAC:"1e:ee:e6:90:11:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.402110 containerd[1523]: 2026-04-28 02:49:29.384 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Namespace="calico-system" Pod="whisker-5c6685bb88-n77bq" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:29.411937 systemd[1]: Started cri-containerd-65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49.scope - libcontainer container 65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49. Apr 28 02:49:29.416594 containerd[1523]: time="2026-04-28T02:49:29.416209587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-fpdb9,Uid:8eb06395-ddec-47db-811d-5529c83facdc,Namespace:calico-system,Attempt:1,} returns sandbox id \"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1\"" Apr 28 02:49:29.426673 containerd[1523]: time="2026-04-28T02:49:29.426580569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 02:49:29.562749 containerd[1523]: time="2026-04-28T02:49:29.556062716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:29.562749 containerd[1523]: time="2026-04-28T02:49:29.560467607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:29.562749 containerd[1523]: time="2026-04-28T02:49:29.562696576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.564482 containerd[1523]: time="2026-04-28T02:49:29.562883419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.566977 systemd-networkd[1442]: cali6fc17693ef8: Link UP Apr 28 02:49:29.569759 systemd-networkd[1442]: cali6fc17693ef8: Gained carrier Apr 28 02:49:29.662381 systemd[1]: Started cri-containerd-bf7afbc039704f590edd4cff3aff730dd81e16a4e064b9fe8a6fb0f028656af3.scope - libcontainer container bf7afbc039704f590edd4cff3aff730dd81e16a4e064b9fe8a6fb0f028656af3. Apr 28 02:49:29.682809 containerd[1523]: time="2026-04-28T02:49:29.677207514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:29.682809 containerd[1523]: time="2026-04-28T02:49:29.677310092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:29.682809 containerd[1523]: time="2026-04-28T02:49:29.677328044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.682809 containerd[1523]: time="2026-04-28T02:49:29.677480164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.481 [ERROR][4134] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.555 [INFO][4134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0 goldmane-57885fdd4c- calico-system 3761dcc7-adab-40a0-94ad-c80888682a66 931 0 2026-04-28 02:48:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:57885fdd4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com goldmane-57885fdd4c-d9nd5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6fc17693ef8 [] [] }} ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.555 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.755 [INFO][4211] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" HandleID="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" 
Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.805 [INFO][4211] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" HandleID="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3990), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"goldmane-57885fdd4c-d9nd5", "timestamp":"2026-04-28 02:49:28.755157096 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042ac60)} Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:28.805 [INFO][4211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.313 [INFO][4211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.314 [INFO][4211] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.334 [INFO][4211] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.380 [INFO][4211] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.407 [INFO][4211] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.425 [INFO][4211] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.436 [INFO][4211] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.438 [INFO][4211] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.447 [INFO][4211] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58 Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.491 [INFO][4211] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.538 [INFO][4211] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.72/26] block=192.168.14.64/26 handle="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.539 [INFO][4211] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.72/26] handle="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.539 [INFO][4211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:29.703198 containerd[1523]: 2026-04-28 02:49:29.539 [INFO][4211] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.72/26] IPv6=[] ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" HandleID="k8s-pod-network.36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 02:49:29.547 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"3761dcc7-adab-40a0-94ad-c80888682a66", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-57885fdd4c-d9nd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6fc17693ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 02:49:29.548 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.72/32] ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 02:49:29.548 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc17693ef8 ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 02:49:29.579 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 
02:49:29.622 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"3761dcc7-adab-40a0-94ad-c80888682a66", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58", Pod:"goldmane-57885fdd4c-d9nd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6fc17693ef8", MAC:"da:6d:9e:42:25:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:29.711287 containerd[1523]: 2026-04-28 02:49:29.673 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58" Namespace="calico-system" Pod="goldmane-57885fdd4c-d9nd5" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:29.723560 containerd[1523]: time="2026-04-28T02:49:29.723505221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d9f7668-qwcld,Uid:5ea8300d-6e47-4707-97ba-70635cc935f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471\"" Apr 28 02:49:29.739887 systemd-networkd[1442]: calie258a00299b: Gained IPv6LL Apr 28 02:49:29.743653 containerd[1523]: time="2026-04-28T02:49:29.743073062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8l6s,Uid:53cc779b-07b6-4618-82bd-00d7d06d83e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49\"" Apr 28 02:49:29.768681 containerd[1523]: time="2026-04-28T02:49:29.768300407Z" level=info msg="CreateContainer within sandbox \"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 02:49:29.775055 containerd[1523]: time="2026-04-28T02:49:29.774519322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r758r,Uid:b51c0c3e-fb85-4791-a4da-124042c0f74d,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665\"" Apr 28 02:49:29.778901 systemd[1]: Started cri-containerd-9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527.scope - libcontainer container 9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527. Apr 28 02:49:29.844673 systemd[1]: Started cri-containerd-fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f.scope - libcontainer container fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f. 
Apr 28 02:49:29.852655 containerd[1523]: time="2026-04-28T02:49:29.852310250Z" level=info msg="StartContainer for \"bf7afbc039704f590edd4cff3aff730dd81e16a4e064b9fe8a6fb0f028656af3\" returns successfully" Apr 28 02:49:29.865438 systemd-networkd[1442]: cali79a47156ba6: Gained IPv6LL Apr 28 02:49:29.873150 containerd[1523]: time="2026-04-28T02:49:29.872182266Z" level=info msg="CreateContainer within sandbox \"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1475d2cee39cb451a905ebacebdd394049ba6f734a30aa7d7dc73b7715d922c8\"" Apr 28 02:49:29.874482 containerd[1523]: time="2026-04-28T02:49:29.873945971Z" level=info msg="StartContainer for \"1475d2cee39cb451a905ebacebdd394049ba6f734a30aa7d7dc73b7715d922c8\"" Apr 28 02:49:29.936563 containerd[1523]: time="2026-04-28T02:49:29.935316400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:49:29.936563 containerd[1523]: time="2026-04-28T02:49:29.935435299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:49:29.936563 containerd[1523]: time="2026-04-28T02:49:29.935461386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:29.939223 containerd[1523]: time="2026-04-28T02:49:29.938103404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:49:30.039892 systemd[1]: Started cri-containerd-1475d2cee39cb451a905ebacebdd394049ba6f734a30aa7d7dc73b7715d922c8.scope - libcontainer container 1475d2cee39cb451a905ebacebdd394049ba6f734a30aa7d7dc73b7715d922c8. 
Apr 28 02:49:30.050216 systemd[1]: Started cri-containerd-36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58.scope - libcontainer container 36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58. Apr 28 02:49:30.131962 containerd[1523]: time="2026-04-28T02:49:30.131745166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6685bb88-n77bq,Uid:8c1c770b-95e1-4efa-8dd8-c75266e36ef1,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\"" Apr 28 02:49:30.150804 containerd[1523]: time="2026-04-28T02:49:30.149692369Z" level=info msg="StartContainer for \"1475d2cee39cb451a905ebacebdd394049ba6f734a30aa7d7dc73b7715d922c8\" returns successfully" Apr 28 02:49:30.314125 systemd-networkd[1442]: calia3b5ec31039: Gained IPv6LL Apr 28 02:49:30.316518 systemd-networkd[1442]: cali2d93a58d64d: Gained IPv6LL Apr 28 02:49:30.319387 systemd-networkd[1442]: calidb431195524: Gained IPv6LL Apr 28 02:49:30.383378 containerd[1523]: time="2026-04-28T02:49:30.381215145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c564fdf9d-86vnc,Uid:8fe71756-1731-4d53-9ef9-9f0198a1b0e5,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f\"" Apr 28 02:49:30.440934 systemd-networkd[1442]: cali079cff76e9b: Gained IPv6LL Apr 28 02:49:30.520758 containerd[1523]: time="2026-04-28T02:49:30.520313247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-d9nd5,Uid:3761dcc7-adab-40a0-94ad-c80888682a66,Namespace:calico-system,Attempt:1,} returns sandbox id \"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58\"" Apr 28 02:49:30.571777 systemd-networkd[1442]: cali5e75f9d16ec: Gained IPv6LL Apr 28 02:49:30.799675 kubelet[2688]: I0428 02:49:30.799530 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v8l6s" 
podStartSLOduration=45.799491655 podStartE2EDuration="45.799491655s" podCreationTimestamp="2026-04-28 02:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:49:30.739035717 +0000 UTC m=+51.246949676" watchObservedRunningTime="2026-04-28 02:49:30.799491655 +0000 UTC m=+51.307405667" Apr 28 02:49:30.828054 kubelet[2688]: I0428 02:49:30.827672 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ndml9" podStartSLOduration=45.827621087 podStartE2EDuration="45.827621087s" podCreationTimestamp="2026-04-28 02:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:49:30.810947601 +0000 UTC m=+51.318861560" watchObservedRunningTime="2026-04-28 02:49:30.827621087 +0000 UTC m=+51.335535080" Apr 28 02:49:31.273090 systemd-networkd[1442]: cali6fc17693ef8: Gained IPv6LL Apr 28 02:49:31.324663 kernel: calico-node[4684]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 28 02:49:32.998696 systemd-networkd[1442]: vxlan.calico: Link UP Apr 28 02:49:32.998710 systemd-networkd[1442]: vxlan.calico: Gained carrier Apr 28 02:49:34.345596 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Apr 28 02:49:35.193699 containerd[1523]: time="2026-04-28T02:49:35.157792564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 28 02:49:35.220754 containerd[1523]: time="2026-04-28T02:49:35.220672537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 
5.79398287s" Apr 28 02:49:35.226815 containerd[1523]: time="2026-04-28T02:49:35.225509202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:35.226815 containerd[1523]: time="2026-04-28T02:49:35.226410754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 28 02:49:35.229867 containerd[1523]: time="2026-04-28T02:49:35.226811361Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:35.229867 containerd[1523]: time="2026-04-28T02:49:35.227894913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:35.296041 containerd[1523]: time="2026-04-28T02:49:35.295894585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 28 02:49:35.408340 containerd[1523]: time="2026-04-28T02:49:35.408284508Z" level=info msg="CreateContainer within sandbox \"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 02:49:35.454128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109380150.mount: Deactivated successfully. 
Apr 28 02:49:35.470868 containerd[1523]: time="2026-04-28T02:49:35.470793998Z" level=info msg="CreateContainer within sandbox \"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e\"" Apr 28 02:49:35.480526 containerd[1523]: time="2026-04-28T02:49:35.479495324Z" level=info msg="StartContainer for \"37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e\"" Apr 28 02:49:35.631479 systemd[1]: run-containerd-runc-k8s.io-37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e-runc.m0NQ6S.mount: Deactivated successfully. Apr 28 02:49:35.661852 systemd[1]: Started cri-containerd-37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e.scope - libcontainer container 37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e. Apr 28 02:49:35.761563 containerd[1523]: time="2026-04-28T02:49:35.761329574Z" level=info msg="StartContainer for \"37d1c36aa3e368b1c10808e863c8196e029ba7694c65468891f48a101ae5bd7e\" returns successfully" Apr 28 02:49:37.009927 kubelet[2688]: I0428 02:49:36.976325 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c564fdf9d-fpdb9" podStartSLOduration=32.127283539 podStartE2EDuration="37.976244566s" podCreationTimestamp="2026-04-28 02:48:59 +0000 UTC" firstStartedPulling="2026-04-28 02:49:29.419342676 +0000 UTC m=+49.927256621" lastFinishedPulling="2026-04-28 02:49:35.26830367 +0000 UTC m=+55.776217648" observedRunningTime="2026-04-28 02:49:36.97601564 +0000 UTC m=+57.483929599" watchObservedRunningTime="2026-04-28 02:49:36.976244566 +0000 UTC m=+57.484158516" Apr 28 02:49:37.804458 kubelet[2688]: I0428 02:49:37.803766 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 02:49:40.488098 containerd[1523]: time="2026-04-28T02:49:40.484289888Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 28 02:49:40.498195 containerd[1523]: time="2026-04-28T02:49:40.485101903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:40.508678 containerd[1523]: time="2026-04-28T02:49:40.506885351Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:40.531811 containerd[1523]: time="2026-04-28T02:49:40.531748843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:40.539468 containerd[1523]: time="2026-04-28T02:49:40.539348367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 5.243283492s" Apr 28 02:49:40.540381 containerd[1523]: time="2026-04-28T02:49:40.540347557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 28 02:49:40.713587 containerd[1523]: time="2026-04-28T02:49:40.713218459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 28 02:49:40.722893 containerd[1523]: time="2026-04-28T02:49:40.722853811Z" level=info msg="StopPodSandbox for \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\"" Apr 28 02:49:40.783977 containerd[1523]: 
time="2026-04-28T02:49:40.783815998Z" level=info msg="CreateContainer within sandbox \"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 28 02:49:40.820133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874483032.mount: Deactivated successfully. Apr 28 02:49:40.822827 containerd[1523]: time="2026-04-28T02:49:40.821900578Z" level=info msg="CreateContainer within sandbox \"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316\"" Apr 28 02:49:40.823966 containerd[1523]: time="2026-04-28T02:49:40.823932642Z" level=info msg="StartContainer for \"e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316\"" Apr 28 02:49:40.959851 systemd[1]: Started cri-containerd-e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316.scope - libcontainer container e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316. Apr 28 02:49:41.051722 containerd[1523]: time="2026-04-28T02:49:41.051430825Z" level=info msg="StartContainer for \"e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316\" returns successfully" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.279 [WARNING][4962] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8fe71756-1731-4d53-9ef9-9f0198a1b0e5", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f", Pod:"calico-apiserver-6c564fdf9d-86vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali079cff76e9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.282 [INFO][4962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.282 [INFO][4962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" iface="eth0" netns="" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.282 [INFO][4962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.282 [INFO][4962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.526 [INFO][5014] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.529 [INFO][5014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.530 [INFO][5014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.551 [WARNING][5014] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.552 [INFO][5014] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.554 [INFO][5014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:41.558806 containerd[1523]: 2026-04-28 02:49:41.556 [INFO][4962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.562352 containerd[1523]: time="2026-04-28T02:49:41.558894549Z" level=info msg="TearDown network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" successfully" Apr 28 02:49:41.562352 containerd[1523]: time="2026-04-28T02:49:41.558935454Z" level=info msg="StopPodSandbox for \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" returns successfully" Apr 28 02:49:41.599783 containerd[1523]: time="2026-04-28T02:49:41.599700245Z" level=info msg="RemovePodSandbox for \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\"" Apr 28 02:49:41.601127 containerd[1523]: time="2026-04-28T02:49:41.601054954Z" level=info msg="Forcibly stopping sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\"" Apr 28 02:49:41.729029 kubelet[2688]: I0428 02:49:41.722535 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55d9f7668-qwcld" 
podStartSLOduration=29.751394806 podStartE2EDuration="40.72248459s" podCreationTimestamp="2026-04-28 02:49:01 +0000 UTC" firstStartedPulling="2026-04-28 02:49:29.740895189 +0000 UTC m=+50.248809135" lastFinishedPulling="2026-04-28 02:49:40.71198496 +0000 UTC m=+61.219898919" observedRunningTime="2026-04-28 02:49:41.722020645 +0000 UTC m=+62.229934620" watchObservedRunningTime="2026-04-28 02:49:41.72248459 +0000 UTC m=+62.230398541" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.660 [WARNING][5028] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8fe71756-1731-4d53-9ef9-9f0198a1b0e5", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f", Pod:"calico-apiserver-6c564fdf9d-86vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali079cff76e9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.660 [INFO][5028] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.660 [INFO][5028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" iface="eth0" netns="" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.660 [INFO][5028] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.660 [INFO][5028] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.714 [INFO][5035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.714 [INFO][5035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.715 [INFO][5035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.741 [WARNING][5035] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.741 [INFO][5035] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" HandleID="k8s-pod-network.e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--86vnc-eth0" Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.746 [INFO][5035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:41.761940 containerd[1523]: 2026-04-28 02:49:41.755 [INFO][5028] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9" Apr 28 02:49:41.761940 containerd[1523]: time="2026-04-28T02:49:41.760814285Z" level=info msg="TearDown network for sandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" successfully" Apr 28 02:49:41.876404 containerd[1523]: time="2026-04-28T02:49:41.874309892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:41.883541 containerd[1523]: time="2026-04-28T02:49:41.880580355Z" level=info msg="RemovePodSandbox \"e0195bd374120094a901d6e44c271ad675f4a5a380a3e1b54b431b37ff71e2a9\" returns successfully" Apr 28 02:49:41.883541 containerd[1523]: time="2026-04-28T02:49:41.883200889Z" level=info msg="StopPodSandbox for \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\"" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:41.978 [WARNING][5071] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0", GenerateName:"whisker-5c6685bb88-", Namespace:"calico-system", SelfLink:"", UID:"8c1c770b-95e1-4efa-8dd8-c75266e36ef1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6685bb88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527", Pod:"whisker-5c6685bb88-n77bq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali5e75f9d16ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:41.978 [INFO][5071] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:41.978 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" iface="eth0" netns="" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:41.978 [INFO][5071] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:41.978 [INFO][5071] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.041 [INFO][5078] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.042 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.042 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.054 [WARNING][5078] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.054 [INFO][5078] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.056 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.061259 containerd[1523]: 2026-04-28 02:49:42.059 [INFO][5071] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.062969 containerd[1523]: time="2026-04-28T02:49:42.062377217Z" level=info msg="TearDown network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" successfully" Apr 28 02:49:42.062969 containerd[1523]: time="2026-04-28T02:49:42.062437021Z" level=info msg="StopPodSandbox for \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" returns successfully" Apr 28 02:49:42.064198 containerd[1523]: time="2026-04-28T02:49:42.064154501Z" level=info msg="RemovePodSandbox for \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\"" Apr 28 02:49:42.064371 containerd[1523]: time="2026-04-28T02:49:42.064342896Z" level=info msg="Forcibly stopping sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\"" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.138 [WARNING][5092] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0", GenerateName:"whisker-5c6685bb88-", Namespace:"calico-system", SelfLink:"", UID:"8c1c770b-95e1-4efa-8dd8-c75266e36ef1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6685bb88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527", Pod:"whisker-5c6685bb88-n77bq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e75f9d16ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.138 [INFO][5092] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.138 [INFO][5092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" iface="eth0" netns="" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.138 [INFO][5092] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.138 [INFO][5092] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.192 [INFO][5099] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.192 [INFO][5099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.192 [INFO][5099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.203 [WARNING][5099] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.203 [INFO][5099] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" HandleID="k8s-pod-network.3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.205 [INFO][5099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.210955 containerd[1523]: 2026-04-28 02:49:42.208 [INFO][5092] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6" Apr 28 02:49:42.210955 containerd[1523]: time="2026-04-28T02:49:42.210899454Z" level=info msg="TearDown network for sandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" successfully" Apr 28 02:49:42.218854 containerd[1523]: time="2026-04-28T02:49:42.218784333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:42.218993 containerd[1523]: time="2026-04-28T02:49:42.218877636Z" level=info msg="RemovePodSandbox \"3b8b5b900a33654762b3696590b0f33c571e1836eba1bc63f860400b061aa9f6\" returns successfully" Apr 28 02:49:42.219875 containerd[1523]: time="2026-04-28T02:49:42.219843114Z" level=info msg="StopPodSandbox for \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\"" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.282 [WARNING][5113] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8eb06395-ddec-47db-811d-5529c83facdc", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1", Pod:"calico-apiserver-6c564fdf9d-fpdb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie258a00299b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.283 [INFO][5113] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.283 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" iface="eth0" netns="" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.283 [INFO][5113] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.283 [INFO][5113] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.333 [INFO][5120] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.333 [INFO][5120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.333 [INFO][5120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.346 [WARNING][5120] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.346 [INFO][5120] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.349 [INFO][5120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.353886 containerd[1523]: 2026-04-28 02:49:42.351 [INFO][5113] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.355090 containerd[1523]: time="2026-04-28T02:49:42.354104905Z" level=info msg="TearDown network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" successfully" Apr 28 02:49:42.355090 containerd[1523]: time="2026-04-28T02:49:42.354161919Z" level=info msg="StopPodSandbox for \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" returns successfully" Apr 28 02:49:42.356687 containerd[1523]: time="2026-04-28T02:49:42.356143648Z" level=info msg="RemovePodSandbox for \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\"" Apr 28 02:49:42.356687 containerd[1523]: time="2026-04-28T02:49:42.356185597Z" level=info msg="Forcibly stopping sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\"" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.420 [WARNING][5134] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0", GenerateName:"calico-apiserver-6c564fdf9d-", Namespace:"calico-system", SelfLink:"", UID:"8eb06395-ddec-47db-811d-5529c83facdc", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c564fdf9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"84977bae612310bd923cbe0d9561f18bd7e33d75a9a23d766b7c367fc89fb9a1", Pod:"calico-apiserver-6c564fdf9d-fpdb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie258a00299b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.420 [INFO][5134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.420 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" iface="eth0" netns="" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.420 [INFO][5134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.420 [INFO][5134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.463 [INFO][5142] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.463 [INFO][5142] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.463 [INFO][5142] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.475 [WARNING][5142] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.477 [INFO][5142] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" HandleID="k8s-pod-network.3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--apiserver--6c564fdf9d--fpdb9-eth0" Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.480 [INFO][5142] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.487375 containerd[1523]: 2026-04-28 02:49:42.484 [INFO][5134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264" Apr 28 02:49:42.487375 containerd[1523]: time="2026-04-28T02:49:42.487316207Z" level=info msg="TearDown network for sandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" successfully" Apr 28 02:49:42.495738 containerd[1523]: time="2026-04-28T02:49:42.495648693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:42.495947 containerd[1523]: time="2026-04-28T02:49:42.495911983Z" level=info msg="RemovePodSandbox \"3008afcd233fd342e294ce500b76161d9e71baa31d773df78fe4ec2738c2d264\" returns successfully" Apr 28 02:49:42.497201 containerd[1523]: time="2026-04-28T02:49:42.497167148Z" level=info msg="StopPodSandbox for \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\"" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.579 [WARNING][5157] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4be42816-d109-44ed-99ff-f1618cbf739e", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5", Pod:"coredns-674b8bbfcf-ndml9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3b5ec31039", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.580 [INFO][5157] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.580 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" iface="eth0" netns="" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.580 [INFO][5157] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.580 [INFO][5157] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.619 [INFO][5165] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.619 [INFO][5165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.620 [INFO][5165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.632 [WARNING][5165] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.632 [INFO][5165] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.634 [INFO][5165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.639743 containerd[1523]: 2026-04-28 02:49:42.637 [INFO][5157] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.639743 containerd[1523]: time="2026-04-28T02:49:42.639501612Z" level=info msg="TearDown network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" successfully" Apr 28 02:49:42.639743 containerd[1523]: time="2026-04-28T02:49:42.639538857Z" level=info msg="StopPodSandbox for \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" returns successfully" Apr 28 02:49:42.642656 containerd[1523]: time="2026-04-28T02:49:42.641555271Z" level=info msg="RemovePodSandbox for \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\"" Apr 28 02:49:42.642656 containerd[1523]: time="2026-04-28T02:49:42.641592473Z" level=info msg="Forcibly stopping sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\"" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.717 [WARNING][5180] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4be42816-d109-44ed-99ff-f1618cbf739e", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"8100af5a5a2288d924749ce1286840ffd92c17e8b526bcf35a66c660cf77d4a5", Pod:"coredns-674b8bbfcf-ndml9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3b5ec31039", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.783676 containerd[1523]: 
2026-04-28 02:49:42.718 [INFO][5180] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.718 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" iface="eth0" netns="" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.718 [INFO][5180] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.718 [INFO][5180] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.765 [INFO][5187] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.766 [INFO][5187] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.766 [INFO][5187] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.776 [WARNING][5187] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.776 [INFO][5187] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" HandleID="k8s-pod-network.7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ndml9-eth0" Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.778 [INFO][5187] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.783676 containerd[1523]: 2026-04-28 02:49:42.781 [INFO][5180] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84" Apr 28 02:49:42.783676 containerd[1523]: time="2026-04-28T02:49:42.783325130Z" level=info msg="TearDown network for sandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" successfully" Apr 28 02:49:42.789257 containerd[1523]: time="2026-04-28T02:49:42.789207463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:42.789353 containerd[1523]: time="2026-04-28T02:49:42.789291466Z" level=info msg="RemovePodSandbox \"7b45366da54cd9be6ff5eafa902e73f946fed8e3ff9bde13a36da8071fd67a84\" returns successfully" Apr 28 02:49:42.790113 containerd[1523]: time="2026-04-28T02:49:42.790081514Z" level=info msg="StopPodSandbox for \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\"" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.868 [WARNING][5201] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53cc779b-07b6-4618-82bd-00d7d06d83e0", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49", Pod:"coredns-674b8bbfcf-v8l6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d93a58d64d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.868 [INFO][5201] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.868 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" iface="eth0" netns="" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.868 [INFO][5201] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.868 [INFO][5201] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.918 [INFO][5209] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.919 [INFO][5209] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.919 [INFO][5209] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.930 [WARNING][5209] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.930 [INFO][5209] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.932 [INFO][5209] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:42.937649 containerd[1523]: 2026-04-28 02:49:42.934 [INFO][5201] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:42.937649 containerd[1523]: time="2026-04-28T02:49:42.937585786Z" level=info msg="TearDown network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" successfully" Apr 28 02:49:42.937649 containerd[1523]: time="2026-04-28T02:49:42.937642446Z" level=info msg="StopPodSandbox for \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" returns successfully" Apr 28 02:49:42.939218 containerd[1523]: time="2026-04-28T02:49:42.938299859Z" level=info msg="RemovePodSandbox for \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\"" Apr 28 02:49:42.939218 containerd[1523]: time="2026-04-28T02:49:42.938335268Z" level=info msg="Forcibly stopping sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\"" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.010 [WARNING][5223] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53cc779b-07b6-4618-82bd-00d7d06d83e0", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"65260e3148392b42c82a012bed6c9cb4ab7b1871f0567cdbe598b857c1edcc49", Pod:"coredns-674b8bbfcf-v8l6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d93a58d64d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.082927 containerd[1523]: 
2026-04-28 02:49:43.011 [INFO][5223] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.011 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" iface="eth0" netns="" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.011 [INFO][5223] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.011 [INFO][5223] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.059 [INFO][5230] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.059 [INFO][5230] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.059 [INFO][5230] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.074 [WARNING][5230] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.074 [INFO][5230] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" HandleID="k8s-pod-network.e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Workload="srv--4dua5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v8l6s-eth0" Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.077 [INFO][5230] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.082927 containerd[1523]: 2026-04-28 02:49:43.079 [INFO][5223] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613" Apr 28 02:49:43.082927 containerd[1523]: time="2026-04-28T02:49:43.082899741Z" level=info msg="TearDown network for sandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" successfully" Apr 28 02:49:43.109646 containerd[1523]: time="2026-04-28T02:49:43.109146792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:43.109646 containerd[1523]: time="2026-04-28T02:49:43.109439474Z" level=info msg="RemovePodSandbox \"e12d94bfe68e830075db7ed50eeabbcfd46b7c689a6116e3ded95a0fbce34613\" returns successfully" Apr 28 02:49:43.111350 containerd[1523]: time="2026-04-28T02:49:43.110855138Z" level=info msg="StopPodSandbox for \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\"" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.177 [WARNING][5244] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b51c0c3e-fb85-4791-a4da-124042c0f74d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665", Pod:"csi-node-driver-r758r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb431195524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.178 [INFO][5244] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.178 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" iface="eth0" netns="" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.178 [INFO][5244] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.178 [INFO][5244] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.227 [INFO][5252] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.228 [INFO][5252] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.228 [INFO][5252] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.242 [WARNING][5252] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.242 [INFO][5252] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.245 [INFO][5252] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.248930 containerd[1523]: 2026-04-28 02:49:43.246 [INFO][5244] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.250139 containerd[1523]: time="2026-04-28T02:49:43.249798221Z" level=info msg="TearDown network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" successfully" Apr 28 02:49:43.250139 containerd[1523]: time="2026-04-28T02:49:43.249866667Z" level=info msg="StopPodSandbox for \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" returns successfully" Apr 28 02:49:43.251010 containerd[1523]: time="2026-04-28T02:49:43.250936512Z" level=info msg="RemovePodSandbox for \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\"" Apr 28 02:49:43.251190 containerd[1523]: time="2026-04-28T02:49:43.251027208Z" level=info msg="Forcibly stopping sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\"" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.323 [WARNING][5266] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b51c0c3e-fb85-4791-a4da-124042c0f74d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665", Pod:"csi-node-driver-r758r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb431195524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.323 [INFO][5266] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.323 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" iface="eth0" netns="" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.323 [INFO][5266] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.323 [INFO][5266] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.368 [INFO][5273] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.368 [INFO][5273] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.369 [INFO][5273] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.380 [WARNING][5273] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.380 [INFO][5273] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" HandleID="k8s-pod-network.7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Workload="srv--4dua5.gb1.brightbox.com-k8s-csi--node--driver--r758r-eth0" Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.382 [INFO][5273] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.386917 containerd[1523]: 2026-04-28 02:49:43.384 [INFO][5266] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527" Apr 28 02:49:43.388263 containerd[1523]: time="2026-04-28T02:49:43.386976989Z" level=info msg="TearDown network for sandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" successfully" Apr 28 02:49:43.391292 containerd[1523]: time="2026-04-28T02:49:43.391256645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:43.422285 containerd[1523]: time="2026-04-28T02:49:43.422221670Z" level=info msg="RemovePodSandbox \"7939a38f0817f9b5f1f5e62f5144fb26ec22289368b447ff360455c659d3b527\" returns successfully" Apr 28 02:49:43.423786 containerd[1523]: time="2026-04-28T02:49:43.423290817Z" level=info msg="StopPodSandbox for \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\"" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.483 [WARNING][5287] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0", GenerateName:"calico-kube-controllers-55d9f7668-", Namespace:"calico-system", SelfLink:"", UID:"5ea8300d-6e47-4707-97ba-70635cc935f5", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d9f7668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471", Pod:"calico-kube-controllers-55d9f7668-qwcld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79a47156ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.484 [INFO][5287] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.484 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" iface="eth0" netns="" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.484 [INFO][5287] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.484 [INFO][5287] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.536 [INFO][5294] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.536 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.536 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.552 [WARNING][5294] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.553 [INFO][5294] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.555 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.560142 containerd[1523]: 2026-04-28 02:49:43.557 [INFO][5287] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.562250 containerd[1523]: time="2026-04-28T02:49:43.560769907Z" level=info msg="TearDown network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" successfully" Apr 28 02:49:43.562250 containerd[1523]: time="2026-04-28T02:49:43.560852623Z" level=info msg="StopPodSandbox for \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" returns successfully" Apr 28 02:49:43.564341 containerd[1523]: time="2026-04-28T02:49:43.563707206Z" level=info msg="RemovePodSandbox for \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\"" Apr 28 02:49:43.564341 containerd[1523]: time="2026-04-28T02:49:43.563783063Z" level=info msg="Forcibly stopping sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\"" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.629 [WARNING][5308] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0", GenerateName:"calico-kube-controllers-55d9f7668-", Namespace:"calico-system", SelfLink:"", UID:"5ea8300d-6e47-4707-97ba-70635cc935f5", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d9f7668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"fb6053cf70792d1065fecb313c8be64d45b5e8c87a6f53c166ba4fcbc806e471", Pod:"calico-kube-controllers-55d9f7668-qwcld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79a47156ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.630 [INFO][5308] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.630 [INFO][5308] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" iface="eth0" netns="" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.630 [INFO][5308] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.630 [INFO][5308] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.681 [INFO][5315] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.682 [INFO][5315] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.682 [INFO][5315] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.694 [WARNING][5315] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.694 [INFO][5315] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" HandleID="k8s-pod-network.94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Workload="srv--4dua5.gb1.brightbox.com-k8s-calico--kube--controllers--55d9f7668--qwcld-eth0" Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.697 [INFO][5315] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.707907 containerd[1523]: 2026-04-28 02:49:43.704 [INFO][5308] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8" Apr 28 02:49:43.707907 containerd[1523]: time="2026-04-28T02:49:43.707850303Z" level=info msg="TearDown network for sandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" successfully" Apr 28 02:49:43.733026 containerd[1523]: time="2026-04-28T02:49:43.732732507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:43.733026 containerd[1523]: time="2026-04-28T02:49:43.732827410Z" level=info msg="RemovePodSandbox \"94732f57c182e7ecb2a2f95cb3ea9bf030138e51c49d84c5131412582ba92fb8\" returns successfully" Apr 28 02:49:43.733656 containerd[1523]: time="2026-04-28T02:49:43.733605519Z" level=info msg="StopPodSandbox for \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\"" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.825 [WARNING][5333] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"3761dcc7-adab-40a0-94ad-c80888682a66", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58", Pod:"goldmane-57885fdd4c-d9nd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali6fc17693ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.825 [INFO][5333] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.825 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" iface="eth0" netns="" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.825 [INFO][5333] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.825 [INFO][5333] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.883 [INFO][5341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.883 [INFO][5341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.883 [INFO][5341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.903 [WARNING][5341] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.903 [INFO][5341] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.906 [INFO][5341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:43.912501 containerd[1523]: 2026-04-28 02:49:43.909 [INFO][5333] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:43.913522 containerd[1523]: time="2026-04-28T02:49:43.912472002Z" level=info msg="TearDown network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" successfully" Apr 28 02:49:43.913522 containerd[1523]: time="2026-04-28T02:49:43.913361060Z" level=info msg="StopPodSandbox for \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" returns successfully" Apr 28 02:49:43.914874 containerd[1523]: time="2026-04-28T02:49:43.914395764Z" level=info msg="RemovePodSandbox for \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\"" Apr 28 02:49:43.914874 containerd[1523]: time="2026-04-28T02:49:43.914450692Z" level=info msg="Forcibly stopping sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\"" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.000 [WARNING][5357] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"3761dcc7-adab-40a0-94ad-c80888682a66", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 48, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58", Pod:"goldmane-57885fdd4c-d9nd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6fc17693ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.001 [INFO][5357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.001 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" iface="eth0" netns="" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.001 [INFO][5357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.001 [INFO][5357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.051 [INFO][5365] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.051 [INFO][5365] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.051 [INFO][5365] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.063 [WARNING][5365] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.063 [INFO][5365] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" HandleID="k8s-pod-network.24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Workload="srv--4dua5.gb1.brightbox.com-k8s-goldmane--57885fdd4c--d9nd5-eth0" Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.066 [INFO][5365] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:44.074025 containerd[1523]: 2026-04-28 02:49:44.071 [INFO][5357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2" Apr 28 02:49:44.075206 containerd[1523]: time="2026-04-28T02:49:44.075023763Z" level=info msg="TearDown network for sandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" successfully" Apr 28 02:49:44.083879 containerd[1523]: time="2026-04-28T02:49:44.083446194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 02:49:44.083879 containerd[1523]: time="2026-04-28T02:49:44.083520992Z" level=info msg="RemovePodSandbox \"24f818ac0ca07ad00725a2849f955dd3b159564cb09e5d0a56e6ca247d09f4d2\" returns successfully" Apr 28 02:49:44.106193 containerd[1523]: time="2026-04-28T02:49:44.105917634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:44.107524 containerd[1523]: time="2026-04-28T02:49:44.107465201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 28 02:49:44.108160 containerd[1523]: time="2026-04-28T02:49:44.108127323Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:44.111561 containerd[1523]: time="2026-04-28T02:49:44.111509392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:44.113122 containerd[1523]: time="2026-04-28T02:49:44.113016726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 3.399723589s" Apr 28 02:49:44.113357 containerd[1523]: time="2026-04-28T02:49:44.113082507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 28 02:49:44.116080 containerd[1523]: time="2026-04-28T02:49:44.116035615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 28 02:49:44.129725 
containerd[1523]: time="2026-04-28T02:49:44.129578543Z" level=info msg="CreateContainer within sandbox \"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 28 02:49:44.154770 containerd[1523]: time="2026-04-28T02:49:44.154701068Z" level=info msg="CreateContainer within sandbox \"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"45057311ff0b7f8e67b3fa195a1ee42aca6d799f656d047caf64563dcba3835d\"" Apr 28 02:49:44.155416 containerd[1523]: time="2026-04-28T02:49:44.155367820Z" level=info msg="StartContainer for \"45057311ff0b7f8e67b3fa195a1ee42aca6d799f656d047caf64563dcba3835d\"" Apr 28 02:49:44.246478 systemd[1]: Started cri-containerd-45057311ff0b7f8e67b3fa195a1ee42aca6d799f656d047caf64563dcba3835d.scope - libcontainer container 45057311ff0b7f8e67b3fa195a1ee42aca6d799f656d047caf64563dcba3835d. Apr 28 02:49:44.306821 containerd[1523]: time="2026-04-28T02:49:44.306722666Z" level=info msg="StartContainer for \"45057311ff0b7f8e67b3fa195a1ee42aca6d799f656d047caf64563dcba3835d\" returns successfully" Apr 28 02:49:45.849703 containerd[1523]: time="2026-04-28T02:49:45.849584496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:45.851399 containerd[1523]: time="2026-04-28T02:49:45.850268585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 28 02:49:45.853002 containerd[1523]: time="2026-04-28T02:49:45.852880322Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:45.858037 containerd[1523]: time="2026-04-28T02:49:45.858001738Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:45.888785 containerd[1523]: time="2026-04-28T02:49:45.886310392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 1.766135037s" Apr 28 02:49:45.888785 containerd[1523]: time="2026-04-28T02:49:45.886402732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 28 02:49:45.892550 containerd[1523]: time="2026-04-28T02:49:45.892503937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 02:49:45.902041 containerd[1523]: time="2026-04-28T02:49:45.901985784Z" level=info msg="CreateContainer within sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 28 02:49:45.938410 containerd[1523]: time="2026-04-28T02:49:45.938358439Z" level=info msg="CreateContainer within sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\"" Apr 28 02:49:45.940828 containerd[1523]: time="2026-04-28T02:49:45.940779948Z" level=info msg="StartContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\"" Apr 28 02:49:46.002035 systemd[1]: Started cri-containerd-77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609.scope - libcontainer container 
77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609. Apr 28 02:49:46.088000 containerd[1523]: time="2026-04-28T02:49:46.087906433Z" level=info msg="StartContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" returns successfully" Apr 28 02:49:46.288248 containerd[1523]: time="2026-04-28T02:49:46.287383565Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:46.289811 containerd[1523]: time="2026-04-28T02:49:46.289068323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 28 02:49:46.294116 containerd[1523]: time="2026-04-28T02:49:46.294065771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 401.510042ms" Apr 28 02:49:46.294247 containerd[1523]: time="2026-04-28T02:49:46.294124147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 28 02:49:46.295766 containerd[1523]: time="2026-04-28T02:49:46.295735016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 28 02:49:46.316641 containerd[1523]: time="2026-04-28T02:49:46.315721866Z" level=info msg="CreateContainer within sandbox \"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 02:49:46.372111 containerd[1523]: time="2026-04-28T02:49:46.371888332Z" level=info msg="CreateContainer within sandbox \"fc151293209b605ff5b4f82bf22e75e98466c39e51bcad0e177034f2b9166e4f\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d86bffaabd405b64a461fa5846b9ed3b9c1715b1069ab5447cf474dd029fc1a\"" Apr 28 02:49:46.375649 containerd[1523]: time="2026-04-28T02:49:46.372994042Z" level=info msg="StartContainer for \"3d86bffaabd405b64a461fa5846b9ed3b9c1715b1069ab5447cf474dd029fc1a\"" Apr 28 02:49:46.420845 systemd[1]: Started cri-containerd-3d86bffaabd405b64a461fa5846b9ed3b9c1715b1069ab5447cf474dd029fc1a.scope - libcontainer container 3d86bffaabd405b64a461fa5846b9ed3b9c1715b1069ab5447cf474dd029fc1a. Apr 28 02:49:46.495457 containerd[1523]: time="2026-04-28T02:49:46.495409677Z" level=info msg="StartContainer for \"3d86bffaabd405b64a461fa5846b9ed3b9c1715b1069ab5447cf474dd029fc1a\" returns successfully" Apr 28 02:49:46.814216 kubelet[2688]: I0428 02:49:46.814107 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c564fdf9d-86vnc" podStartSLOduration=31.910473479 podStartE2EDuration="47.814048195s" podCreationTimestamp="2026-04-28 02:48:59 +0000 UTC" firstStartedPulling="2026-04-28 02:49:30.391764018 +0000 UTC m=+50.899677959" lastFinishedPulling="2026-04-28 02:49:46.29533873 +0000 UTC m=+66.803252675" observedRunningTime="2026-04-28 02:49:46.813210177 +0000 UTC m=+67.321124137" watchObservedRunningTime="2026-04-28 02:49:46.814048195 +0000 UTC m=+67.321962153" Apr 28 02:49:47.799162 kubelet[2688]: I0428 02:49:47.791543 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 02:49:50.464138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896836442.mount: Deactivated successfully. 
Apr 28 02:49:51.668319 containerd[1523]: time="2026-04-28T02:49:51.668199024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:51.671536 containerd[1523]: time="2026-04-28T02:49:51.671427692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 28 02:49:51.678554 containerd[1523]: time="2026-04-28T02:49:51.678471766Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:51.683302 containerd[1523]: time="2026-04-28T02:49:51.682593999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:51.691576 containerd[1523]: time="2026-04-28T02:49:51.691513448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 5.387934441s" Apr 28 02:49:51.691576 containerd[1523]: time="2026-04-28T02:49:51.691581480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 28 02:49:51.727308 containerd[1523]: time="2026-04-28T02:49:51.726844474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 28 02:49:51.728075 containerd[1523]: time="2026-04-28T02:49:51.726885241Z" level=info msg="CreateContainer within sandbox 
\"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 28 02:49:51.764251 containerd[1523]: time="2026-04-28T02:49:51.764200491Z" level=info msg="CreateContainer within sandbox \"36a25b08f0f0b86beb49d4e233937a58b061df75e64de01f7704383d87546d58\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b\"" Apr 28 02:49:51.767993 containerd[1523]: time="2026-04-28T02:49:51.767937372Z" level=info msg="StartContainer for \"692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b\"" Apr 28 02:49:52.008840 systemd[1]: Started cri-containerd-692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b.scope - libcontainer container 692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b. Apr 28 02:49:52.097383 containerd[1523]: time="2026-04-28T02:49:52.097313493Z" level=info msg="StartContainer for \"692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b\" returns successfully" Apr 28 02:49:52.871011 kubelet[2688]: I0428 02:49:52.870284 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-57885fdd4c-d9nd5" podStartSLOduration=32.704730891 podStartE2EDuration="53.869940673s" podCreationTimestamp="2026-04-28 02:48:59 +0000 UTC" firstStartedPulling="2026-04-28 02:49:30.528115824 +0000 UTC m=+51.036029769" lastFinishedPulling="2026-04-28 02:49:51.693325584 +0000 UTC m=+72.201239551" observedRunningTime="2026-04-28 02:49:52.868353232 +0000 UTC m=+73.376267194" watchObservedRunningTime="2026-04-28 02:49:52.869940673 +0000 UTC m=+73.377854627" Apr 28 02:49:53.881910 systemd[1]: run-containerd-runc-k8s.io-692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b-runc.JO0svU.mount: Deactivated successfully. 
Apr 28 02:49:54.067739 containerd[1523]: time="2026-04-28T02:49:54.067574413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:54.069971 containerd[1523]: time="2026-04-28T02:49:54.069836881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 28 02:49:54.071034 containerd[1523]: time="2026-04-28T02:49:54.070783473Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:54.076750 containerd[1523]: time="2026-04-28T02:49:54.076705849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:54.080120 containerd[1523]: time="2026-04-28T02:49:54.080075770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 2.353141093s" Apr 28 02:49:54.080277 containerd[1523]: time="2026-04-28T02:49:54.080125764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 28 02:49:54.083303 containerd[1523]: time="2026-04-28T02:49:54.082234129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 28 02:49:54.104955 containerd[1523]: time="2026-04-28T02:49:54.086648444Z" level=info 
msg="CreateContainer within sandbox \"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 28 02:49:54.153956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079503268.mount: Deactivated successfully. Apr 28 02:49:54.155646 containerd[1523]: time="2026-04-28T02:49:54.154932949Z" level=info msg="CreateContainer within sandbox \"3d9771fc455b58b9dd3f0e8618ba3f9133f6a58c68d2041f18c3ae93a7d24665\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"22fc8fdd06c9c39360cadfc5a51fb7fe5637e593ac5807cfe816d5afa488d0bb\"" Apr 28 02:49:54.157663 containerd[1523]: time="2026-04-28T02:49:54.156910257Z" level=info msg="StartContainer for \"22fc8fdd06c9c39360cadfc5a51fb7fe5637e593ac5807cfe816d5afa488d0bb\"" Apr 28 02:49:54.228951 systemd[1]: Started cri-containerd-22fc8fdd06c9c39360cadfc5a51fb7fe5637e593ac5807cfe816d5afa488d0bb.scope - libcontainer container 22fc8fdd06c9c39360cadfc5a51fb7fe5637e593ac5807cfe816d5afa488d0bb. Apr 28 02:49:54.473187 containerd[1523]: time="2026-04-28T02:49:54.473000701Z" level=info msg="StartContainer for \"22fc8fdd06c9c39360cadfc5a51fb7fe5637e593ac5807cfe816d5afa488d0bb\" returns successfully" Apr 28 02:49:54.631084 systemd[1]: Started sshd@9-10.230.12.190:22-4.175.71.9:54906.service - OpenSSH per-connection server daemon (4.175.71.9:54906). Apr 28 02:49:54.903113 systemd[1]: run-containerd-runc-k8s.io-692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b-runc.1vSelF.mount: Deactivated successfully. 
Apr 28 02:49:54.938123 kubelet[2688]: I0428 02:49:54.937850 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r758r" podStartSLOduration=30.640910025 podStartE2EDuration="54.937800246s" podCreationTimestamp="2026-04-28 02:49:00 +0000 UTC" firstStartedPulling="2026-04-28 02:49:29.784941521 +0000 UTC m=+50.292855467" lastFinishedPulling="2026-04-28 02:49:54.081831734 +0000 UTC m=+74.589745688" observedRunningTime="2026-04-28 02:49:54.936796392 +0000 UTC m=+75.444710361" watchObservedRunningTime="2026-04-28 02:49:54.937800246 +0000 UTC m=+75.445714199" Apr 28 02:49:54.977135 sshd[5647]: Accepted publickey for core from 4.175.71.9 port 54906 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:49:54.982788 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:49:55.001992 systemd-logind[1492]: New session 12 of user core. Apr 28 02:49:55.012065 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 02:49:55.296469 kubelet[2688]: I0428 02:49:55.294328 2688 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 28 02:49:55.303098 kubelet[2688]: I0428 02:49:55.303062 2688 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 28 02:49:56.097314 sshd[5647]: pam_unix(sshd:session): session closed for user core Apr 28 02:49:56.105985 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. Apr 28 02:49:56.107462 systemd[1]: sshd@9-10.230.12.190:22-4.175.71.9:54906.service: Deactivated successfully. Apr 28 02:49:56.112189 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 02:49:56.115555 systemd-logind[1492]: Removed session 12. 
Apr 28 02:49:57.074315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039938213.mount: Deactivated successfully. Apr 28 02:49:57.094136 containerd[1523]: time="2026-04-28T02:49:57.094039005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:57.096889 containerd[1523]: time="2026-04-28T02:49:57.096759851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 28 02:49:57.098143 containerd[1523]: time="2026-04-28T02:49:57.098079954Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:57.101939 containerd[1523]: time="2026-04-28T02:49:57.101857496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 02:49:57.103585 containerd[1523]: time="2026-04-28T02:49:57.103369632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 3.021088954s" Apr 28 02:49:57.103585 containerd[1523]: time="2026-04-28T02:49:57.103433519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 28 02:49:57.111434 containerd[1523]: time="2026-04-28T02:49:57.111279989Z" level=info msg="CreateContainer within sandbox 
\"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 28 02:49:57.207691 containerd[1523]: time="2026-04-28T02:49:57.207136181Z" level=info msg="CreateContainer within sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\"" Apr 28 02:49:57.208993 containerd[1523]: time="2026-04-28T02:49:57.208663856Z" level=info msg="StartContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\"" Apr 28 02:49:57.349947 systemd[1]: Started cri-containerd-4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69.scope - libcontainer container 4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69. Apr 28 02:49:57.427574 containerd[1523]: time="2026-04-28T02:49:57.427522949Z" level=info msg="StartContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" returns successfully" Apr 28 02:49:58.050845 containerd[1523]: time="2026-04-28T02:49:58.050514289Z" level=info msg="StopContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" with timeout 30 (s)" Apr 28 02:49:58.054747 kubelet[2688]: I0428 02:49:58.053603 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c6685bb88-n77bq" podStartSLOduration=26.083223191 podStartE2EDuration="53.053545058s" podCreationTimestamp="2026-04-28 02:49:05 +0000 UTC" firstStartedPulling="2026-04-28 02:49:30.134859382 +0000 UTC m=+50.642773338" lastFinishedPulling="2026-04-28 02:49:57.105181259 +0000 UTC m=+77.613095205" observedRunningTime="2026-04-28 02:49:57.952166712 +0000 UTC m=+78.460080661" watchObservedRunningTime="2026-04-28 02:49:58.053545058 +0000 UTC m=+78.561459053" Apr 28 02:49:58.056312 containerd[1523]: time="2026-04-28T02:49:58.055445167Z" level=info 
msg="StopContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" with timeout 30 (s)" Apr 28 02:49:58.061537 containerd[1523]: time="2026-04-28T02:49:58.061324653Z" level=info msg="Stop container \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" with signal terminated" Apr 28 02:49:58.062117 containerd[1523]: time="2026-04-28T02:49:58.062074380Z" level=info msg="Stop container \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" with signal terminated" Apr 28 02:49:58.100418 systemd[1]: cri-containerd-4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69.scope: Deactivated successfully. Apr 28 02:49:58.112534 systemd[1]: cri-containerd-77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609.scope: Deactivated successfully. Apr 28 02:49:58.207695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609-rootfs.mount: Deactivated successfully. Apr 28 02:49:58.215708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69-rootfs.mount: Deactivated successfully. 
Apr 28 02:49:58.232277 containerd[1523]: time="2026-04-28T02:49:58.222358159Z" level=info msg="shim disconnected" id=77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609 namespace=k8s.io Apr 28 02:49:58.233127 containerd[1523]: time="2026-04-28T02:49:58.232294800Z" level=warning msg="cleaning up after shim disconnected" id=77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609 namespace=k8s.io Apr 28 02:49:58.233127 containerd[1523]: time="2026-04-28T02:49:58.232320760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:58.235928 containerd[1523]: time="2026-04-28T02:49:58.235698987Z" level=info msg="shim disconnected" id=4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69 namespace=k8s.io Apr 28 02:49:58.235928 containerd[1523]: time="2026-04-28T02:49:58.235746189Z" level=warning msg="cleaning up after shim disconnected" id=4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69 namespace=k8s.io Apr 28 02:49:58.235928 containerd[1523]: time="2026-04-28T02:49:58.235781938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:58.276600 containerd[1523]: time="2026-04-28T02:49:58.276520671Z" level=warning msg="cleanup warnings time=\"2026-04-28T02:49:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 02:49:58.285003 containerd[1523]: time="2026-04-28T02:49:58.284970256Z" level=info msg="StopContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" returns successfully" Apr 28 02:49:58.289752 containerd[1523]: time="2026-04-28T02:49:58.288487810Z" level=info msg="StopContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" returns successfully" Apr 28 02:49:58.299510 containerd[1523]: time="2026-04-28T02:49:58.299459537Z" level=info msg="StopPodSandbox for 
\"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\"" Apr 28 02:49:58.299811 containerd[1523]: time="2026-04-28T02:49:58.299776144Z" level=info msg="Container to stop \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:49:58.301704 containerd[1523]: time="2026-04-28T02:49:58.299976398Z" level=info msg="Container to stop \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 28 02:49:58.305897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527-shm.mount: Deactivated successfully. Apr 28 02:49:58.315444 systemd[1]: cri-containerd-9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527.scope: Deactivated successfully. Apr 28 02:49:58.346283 containerd[1523]: time="2026-04-28T02:49:58.346147657Z" level=info msg="shim disconnected" id=9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527 namespace=k8s.io Apr 28 02:49:58.346502 containerd[1523]: time="2026-04-28T02:49:58.346472077Z" level=warning msg="cleaning up after shim disconnected" id=9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527 namespace=k8s.io Apr 28 02:49:58.346699 containerd[1523]: time="2026-04-28T02:49:58.346599951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 02:49:58.354173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527-rootfs.mount: Deactivated successfully. 
Apr 28 02:49:58.734517 systemd-networkd[1442]: cali5e75f9d16ec: Link DOWN Apr 28 02:49:58.735149 systemd-networkd[1442]: cali5e75f9d16ec: Lost carrier Apr 28 02:49:58.906912 kubelet[2688]: I0428 02:49:58.906857 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.702 [INFO][5848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.704 [INFO][5848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" iface="eth0" netns="/var/run/netns/cni-b2b81d73-74e1-4fc2-195a-eb411dd33ded" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.706 [INFO][5848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" iface="eth0" netns="/var/run/netns/cni-b2b81d73-74e1-4fc2-195a-eb411dd33ded" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.721 [INFO][5848] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" after=16.683399ms iface="eth0" netns="/var/run/netns/cni-b2b81d73-74e1-4fc2-195a-eb411dd33ded" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.721 [INFO][5848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.722 [INFO][5848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.986 [INFO][5855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.989 [INFO][5855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:58.989 [INFO][5855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:59.058 [INFO][5855] ipam/ipam_plugin.go 517: Released address using handleID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:59.058 [INFO][5855] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:59.062 [INFO][5855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:49:59.067531 containerd[1523]: 2026-04-28 02:49:59.064 [INFO][5848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:49:59.068995 containerd[1523]: time="2026-04-28T02:49:59.067761500Z" level=info msg="TearDown network for sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" successfully" Apr 28 02:49:59.068995 containerd[1523]: time="2026-04-28T02:49:59.067801571Z" level=info msg="StopPodSandbox for \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" returns successfully" Apr 28 02:49:59.074043 systemd[1]: run-netns-cni\x2db2b81d73\x2d74e1\x2d4fc2\x2d195a\x2deb411dd33ded.mount: Deactivated successfully. 
Apr 28 02:49:59.259161 kubelet[2688]: I0428 02:49:59.258584 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-nginx-config\") pod \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " Apr 28 02:49:59.259161 kubelet[2688]: I0428 02:49:59.258741 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-ca-bundle\") pod \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " Apr 28 02:49:59.259161 kubelet[2688]: I0428 02:49:59.258899 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-backend-key-pair\") pod \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " Apr 28 02:49:59.259161 kubelet[2688]: I0428 02:49:59.258964 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgjl2\" (UniqueName: \"kubernetes.io/projected/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-kube-api-access-vgjl2\") pod \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\" (UID: \"8c1c770b-95e1-4efa-8dd8-c75266e36ef1\") " Apr 28 02:49:59.295547 kubelet[2688]: I0428 02:49:59.293218 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "8c1c770b-95e1-4efa-8dd8-c75266e36ef1" (UID: "8c1c770b-95e1-4efa-8dd8-c75266e36ef1"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:49:59.310456 kubelet[2688]: I0428 02:49:59.310407 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8c1c770b-95e1-4efa-8dd8-c75266e36ef1" (UID: "8c1c770b-95e1-4efa-8dd8-c75266e36ef1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 02:49:59.354903 kubelet[2688]: I0428 02:49:59.354714 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8c1c770b-95e1-4efa-8dd8-c75266e36ef1" (UID: "8c1c770b-95e1-4efa-8dd8-c75266e36ef1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 02:49:59.355652 systemd[1]: var-lib-kubelet-pods-8c1c770b\x2d95e1\x2d4efa\x2d8dd8\x2dc75266e36ef1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvgjl2.mount: Deactivated successfully. Apr 28 02:49:59.357761 kubelet[2688]: I0428 02:49:59.352308 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-kube-api-access-vgjl2" (OuterVolumeSpecName: "kube-api-access-vgjl2") pod "8c1c770b-95e1-4efa-8dd8-c75266e36ef1" (UID: "8c1c770b-95e1-4efa-8dd8-c75266e36ef1"). InnerVolumeSpecName "kube-api-access-vgjl2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 02:49:59.355882 systemd[1]: var-lib-kubelet-pods-8c1c770b\x2d95e1\x2d4efa\x2d8dd8\x2dc75266e36ef1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 28 02:49:59.366895 kubelet[2688]: I0428 02:49:59.366862 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-backend-key-pair\") on node \"srv-4dua5.gb1.brightbox.com\" DevicePath \"\"" Apr 28 02:49:59.366999 kubelet[2688]: I0428 02:49:59.366906 2688 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgjl2\" (UniqueName: \"kubernetes.io/projected/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-kube-api-access-vgjl2\") on node \"srv-4dua5.gb1.brightbox.com\" DevicePath \"\"" Apr 28 02:49:59.366999 kubelet[2688]: I0428 02:49:59.366929 2688 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-nginx-config\") on node \"srv-4dua5.gb1.brightbox.com\" DevicePath \"\"" Apr 28 02:49:59.366999 kubelet[2688]: I0428 02:49:59.366947 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1c770b-95e1-4efa-8dd8-c75266e36ef1-whisker-ca-bundle\") on node \"srv-4dua5.gb1.brightbox.com\" DevicePath \"\"" Apr 28 02:49:59.846918 systemd[1]: Removed slice kubepods-besteffort-pod8c1c770b_95e1_4efa_8dd8_c75266e36ef1.slice - libcontainer container kubepods-besteffort-pod8c1c770b_95e1_4efa_8dd8_c75266e36ef1.slice. Apr 28 02:50:00.142133 systemd[1]: Created slice kubepods-besteffort-podfad00442_9b06_4ae4_9fe7_512e0bc18384.slice - libcontainer container kubepods-besteffort-podfad00442_9b06_4ae4_9fe7_512e0bc18384.slice. 
Apr 28 02:50:00.172988 kubelet[2688]: I0428 02:50:00.172876 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fad00442-9b06-4ae4-9fe7-512e0bc18384-whisker-ca-bundle\") pod \"whisker-69684d9498-g9hv6\" (UID: \"fad00442-9b06-4ae4-9fe7-512e0bc18384\") " pod="calico-system/whisker-69684d9498-g9hv6" Apr 28 02:50:00.173597 kubelet[2688]: I0428 02:50:00.173366 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fad00442-9b06-4ae4-9fe7-512e0bc18384-nginx-config\") pod \"whisker-69684d9498-g9hv6\" (UID: \"fad00442-9b06-4ae4-9fe7-512e0bc18384\") " pod="calico-system/whisker-69684d9498-g9hv6" Apr 28 02:50:00.173597 kubelet[2688]: I0428 02:50:00.173439 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fad00442-9b06-4ae4-9fe7-512e0bc18384-whisker-backend-key-pair\") pod \"whisker-69684d9498-g9hv6\" (UID: \"fad00442-9b06-4ae4-9fe7-512e0bc18384\") " pod="calico-system/whisker-69684d9498-g9hv6" Apr 28 02:50:00.173597 kubelet[2688]: I0428 02:50:00.173480 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qdr\" (UniqueName: \"kubernetes.io/projected/fad00442-9b06-4ae4-9fe7-512e0bc18384-kube-api-access-s7qdr\") pod \"whisker-69684d9498-g9hv6\" (UID: \"fad00442-9b06-4ae4-9fe7-512e0bc18384\") " pod="calico-system/whisker-69684d9498-g9hv6" Apr 28 02:50:00.493860 containerd[1523]: time="2026-04-28T02:50:00.493491371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69684d9498-g9hv6,Uid:fad00442-9b06-4ae4-9fe7-512e0bc18384,Namespace:calico-system,Attempt:0,}" Apr 28 02:50:00.819260 systemd-networkd[1442]: cali201148e218c: Link UP Apr 28 02:50:00.821793 systemd-networkd[1442]: 
cali201148e218c: Gained carrier Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.661 [INFO][5887] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0 whisker-69684d9498- calico-system fad00442-9b06-4ae4-9fe7-512e0bc18384 1184 0 2026-04-28 02:50:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69684d9498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-4dua5.gb1.brightbox.com whisker-69684d9498-g9hv6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali201148e218c [] [] }} ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.661 [INFO][5887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.726 [INFO][5898] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" HandleID="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.738 [INFO][5898] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" 
HandleID="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3a90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4dua5.gb1.brightbox.com", "pod":"whisker-69684d9498-g9hv6", "timestamp":"2026-04-28 02:50:00.726953032 +0000 UTC"}, Hostname:"srv-4dua5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004131e0)} Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.740 [INFO][5898] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.740 [INFO][5898] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.740 [INFO][5898] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4dua5.gb1.brightbox.com' Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.747 [INFO][5898] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.756 [INFO][5898] ipam/ipam.go 409: Looking up existing affinities for host host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.764 [INFO][5898] ipam/ipam.go 526: Trying affinity for 192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.769 [INFO][5898] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.773 
[INFO][5898] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.774 [INFO][5898] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.784 [INFO][5898] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.791 [INFO][5898] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.800 [INFO][5898] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.73/26] block=192.168.14.64/26 handle="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.801 [INFO][5898] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.73/26] handle="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" host="srv-4dua5.gb1.brightbox.com" Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.801 [INFO][5898] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 02:50:00.864805 containerd[1523]: 2026-04-28 02:50:00.801 [INFO][5898] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.73/26] IPv6=[] ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" HandleID="k8s-pod-network.e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.806 [INFO][5887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0", GenerateName:"whisker-69684d9498-", Namespace:"calico-system", SelfLink:"", UID:"fad00442-9b06-4ae4-9fe7-512e0bc18384", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 50, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69684d9498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"", Pod:"whisker-69684d9498-g9hv6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali201148e218c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.807 [INFO][5887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.73/32] ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.807 [INFO][5887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali201148e218c ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.824 [INFO][5887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.825 [INFO][5887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0", GenerateName:"whisker-69684d9498-", Namespace:"calico-system", SelfLink:"", 
UID:"fad00442-9b06-4ae4-9fe7-512e0bc18384", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 2, 50, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69684d9498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4dua5.gb1.brightbox.com", ContainerID:"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af", Pod:"whisker-69684d9498-g9hv6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali201148e218c", MAC:"4e:85:77:9a:3f:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 02:50:00.869534 containerd[1523]: 2026-04-28 02:50:00.850 [INFO][5887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af" Namespace="calico-system" Pod="whisker-69684d9498-g9hv6" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--69684d9498--g9hv6-eth0" Apr 28 02:50:00.964123 containerd[1523]: time="2026-04-28T02:50:00.963489837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 02:50:00.964123 containerd[1523]: time="2026-04-28T02:50:00.963654782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 02:50:00.964123 containerd[1523]: time="2026-04-28T02:50:00.963699708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:50:00.964123 containerd[1523]: time="2026-04-28T02:50:00.963888203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 02:50:01.002854 systemd[1]: Started cri-containerd-e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af.scope - libcontainer container e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af. Apr 28 02:50:01.120822 containerd[1523]: time="2026-04-28T02:50:01.120691176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69684d9498-g9hv6,Uid:fad00442-9b06-4ae4-9fe7-512e0bc18384,Namespace:calico-system,Attempt:0,} returns sandbox id \"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af\"" Apr 28 02:50:01.135945 systemd[1]: Started sshd@10-10.230.12.190:22-4.175.71.9:54976.service - OpenSSH per-connection server daemon (4.175.71.9:54976). 
Apr 28 02:50:01.185174 containerd[1523]: time="2026-04-28T02:50:01.185110816Z" level=info msg="CreateContainer within sandbox \"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 28 02:50:01.228383 containerd[1523]: time="2026-04-28T02:50:01.228141353Z" level=info msg="CreateContainer within sandbox \"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"99df0e47bb3391ac89b8f1389f45a2c8e92ff1ff3c908e43ce95e630ffdb38b2\"" Apr 28 02:50:01.229844 containerd[1523]: time="2026-04-28T02:50:01.229218020Z" level=info msg="StartContainer for \"99df0e47bb3391ac89b8f1389f45a2c8e92ff1ff3c908e43ce95e630ffdb38b2\"" Apr 28 02:50:01.308943 systemd[1]: Started cri-containerd-99df0e47bb3391ac89b8f1389f45a2c8e92ff1ff3c908e43ce95e630ffdb38b2.scope - libcontainer container 99df0e47bb3391ac89b8f1389f45a2c8e92ff1ff3c908e43ce95e630ffdb38b2. Apr 28 02:50:01.383097 sshd[5957]: Accepted publickey for core from 4.175.71.9 port 54976 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:01.389363 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:01.406538 systemd-logind[1492]: New session 13 of user core. Apr 28 02:50:01.412874 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 28 02:50:01.506547 containerd[1523]: time="2026-04-28T02:50:01.506413392Z" level=info msg="StartContainer for \"99df0e47bb3391ac89b8f1389f45a2c8e92ff1ff3c908e43ce95e630ffdb38b2\" returns successfully" Apr 28 02:50:01.531367 containerd[1523]: time="2026-04-28T02:50:01.530957901Z" level=info msg="CreateContainer within sandbox \"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 28 02:50:01.573629 containerd[1523]: time="2026-04-28T02:50:01.573338927Z" level=info msg="CreateContainer within sandbox \"e33878a533762e496693347c5e93e448507b8c80d9d2a938a76cd738deec72af\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"16c96a632a992c073352721071a1df9c0077ac5a7f2a78fdf82942b19236c198\"" Apr 28 02:50:01.578930 containerd[1523]: time="2026-04-28T02:50:01.577156718Z" level=info msg="StartContainer for \"16c96a632a992c073352721071a1df9c0077ac5a7f2a78fdf82942b19236c198\"" Apr 28 02:50:01.707300 systemd[1]: Started cri-containerd-16c96a632a992c073352721071a1df9c0077ac5a7f2a78fdf82942b19236c198.scope - libcontainer container 16c96a632a992c073352721071a1df9c0077ac5a7f2a78fdf82942b19236c198. Apr 28 02:50:01.807102 kubelet[2688]: I0428 02:50:01.806709 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1c770b-95e1-4efa-8dd8-c75266e36ef1" path="/var/lib/kubelet/pods/8c1c770b-95e1-4efa-8dd8-c75266e36ef1/volumes" Apr 28 02:50:01.928882 systemd-networkd[1442]: cali201148e218c: Gained IPv6LL Apr 28 02:50:01.985686 containerd[1523]: time="2026-04-28T02:50:01.985535182Z" level=info msg="StartContainer for \"16c96a632a992c073352721071a1df9c0077ac5a7f2a78fdf82942b19236c198\" returns successfully" Apr 28 02:50:02.399289 sshd[5957]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:02.408485 systemd[1]: sshd@10-10.230.12.190:22-4.175.71.9:54976.service: Deactivated successfully. 
Apr 28 02:50:02.412423 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 02:50:02.414053 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. Apr 28 02:50:02.415955 systemd-logind[1492]: Removed session 13. Apr 28 02:50:03.008029 kubelet[2688]: I0428 02:50:03.005166 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69684d9498-g9hv6" podStartSLOduration=3.001549909 podStartE2EDuration="3.001549909s" podCreationTimestamp="2026-04-28 02:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 02:50:02.999478909 +0000 UTC m=+83.507392873" watchObservedRunningTime="2026-04-28 02:50:03.001549909 +0000 UTC m=+83.509463889" Apr 28 02:50:07.443754 systemd[1]: Started sshd@11-10.230.12.190:22-4.175.71.9:42552.service - OpenSSH per-connection server daemon (4.175.71.9:42552). Apr 28 02:50:07.665438 sshd[6051]: Accepted publickey for core from 4.175.71.9 port 42552 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:07.667657 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:07.677121 systemd-logind[1492]: New session 14 of user core. Apr 28 02:50:07.685952 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 02:50:08.186962 sshd[6051]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:08.192396 systemd[1]: sshd@11-10.230.12.190:22-4.175.71.9:42552.service: Deactivated successfully. Apr 28 02:50:08.195495 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 02:50:08.197544 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. Apr 28 02:50:08.198856 systemd-logind[1492]: Removed session 14. 
Apr 28 02:50:12.338231 update_engine[1494]: I20260428 02:50:12.338085 1494 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 02:50:12.338231 update_engine[1494]: I20260428 02:50:12.338217 1494 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 02:50:12.341305 update_engine[1494]: I20260428 02:50:12.341250 1494 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 02:50:12.342528 update_engine[1494]: I20260428 02:50:12.342483 1494 omaha_request_params.cc:62] Current group set to lts Apr 28 02:50:12.348414 update_engine[1494]: I20260428 02:50:12.348212 1494 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 02:50:12.348414 update_engine[1494]: I20260428 02:50:12.348260 1494 update_attempter.cc:643] Scheduling an action processor start. Apr 28 02:50:12.348414 update_engine[1494]: I20260428 02:50:12.348294 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 02:50:12.348762 update_engine[1494]: I20260428 02:50:12.348408 1494 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 02:50:12.348962 update_engine[1494]: I20260428 02:50:12.348919 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 02:50:12.348962 update_engine[1494]: I20260428 02:50:12.348951 1494 omaha_request_action.cc:272] Request: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.348962 update_engine[1494]: Apr 28 02:50:12.350503 update_engine[1494]: I20260428 02:50:12.348968 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 02:50:12.363680 update_engine[1494]: I20260428 02:50:12.360836 1494 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 02:50:12.363680 update_engine[1494]: I20260428 02:50:12.361281 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 02:50:12.369811 update_engine[1494]: E20260428 02:50:12.369633 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 02:50:12.369811 update_engine[1494]: I20260428 02:50:12.369762 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 02:50:12.385227 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 02:50:12.764318 kubelet[2688]: I0428 02:50:12.764251 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 02:50:13.224978 systemd[1]: Started sshd@12-10.230.12.190:22-4.175.71.9:42564.service - OpenSSH per-connection server daemon (4.175.71.9:42564). Apr 28 02:50:13.404323 sshd[6089]: Accepted publickey for core from 4.175.71.9 port 42564 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:13.407267 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:13.415838 systemd-logind[1492]: New session 15 of user core. Apr 28 02:50:13.423850 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 02:50:13.780965 sshd[6089]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:13.785550 systemd[1]: sshd@12-10.230.12.190:22-4.175.71.9:42564.service: Deactivated successfully. Apr 28 02:50:13.788987 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 02:50:13.791962 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. Apr 28 02:50:13.795173 systemd-logind[1492]: Removed session 15. Apr 28 02:50:18.820983 systemd[1]: Started sshd@13-10.230.12.190:22-4.175.71.9:57174.service - OpenSSH per-connection server daemon (4.175.71.9:57174). 
Apr 28 02:50:18.998533 sshd[6110]: Accepted publickey for core from 4.175.71.9 port 57174 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:19.001099 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:19.009518 systemd-logind[1492]: New session 16 of user core. Apr 28 02:50:19.021869 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 02:50:19.392427 sshd[6110]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:19.402983 systemd[1]: sshd@13-10.230.12.190:22-4.175.71.9:57174.service: Deactivated successfully. Apr 28 02:50:19.405948 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 02:50:19.407471 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. Apr 28 02:50:19.409258 systemd-logind[1492]: Removed session 16. Apr 28 02:50:19.635962 systemd[1]: run-containerd-runc-k8s.io-e23c49ba91a2c2c8970a921f12543e2a1187a93f6c5450592aabc9695b9fc316-runc.eJ5p0M.mount: Deactivated successfully. Apr 28 02:50:20.256171 kubelet[2688]: I0428 02:50:20.255836 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 02:50:22.251013 update_engine[1494]: I20260428 02:50:22.250775 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 02:50:22.256344 update_engine[1494]: I20260428 02:50:22.255090 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 02:50:22.258844 update_engine[1494]: I20260428 02:50:22.258805 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 02:50:22.259358 update_engine[1494]: E20260428 02:50:22.259311 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 02:50:22.259444 update_engine[1494]: I20260428 02:50:22.259418 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 02:50:24.427058 systemd[1]: Started sshd@14-10.230.12.190:22-4.175.71.9:57176.service - OpenSSH per-connection server daemon (4.175.71.9:57176). Apr 28 02:50:24.673778 sshd[6154]: Accepted publickey for core from 4.175.71.9 port 57176 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:24.677256 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:24.687724 systemd-logind[1492]: New session 17 of user core. Apr 28 02:50:24.694931 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 02:50:25.545601 sshd[6154]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:25.557383 systemd[1]: sshd@14-10.230.12.190:22-4.175.71.9:57176.service: Deactivated successfully. Apr 28 02:50:25.562883 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 02:50:25.565206 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. Apr 28 02:50:25.581984 systemd[1]: Started sshd@15-10.230.12.190:22-4.175.71.9:58868.service - OpenSSH per-connection server daemon (4.175.71.9:58868). Apr 28 02:50:25.583329 systemd-logind[1492]: Removed session 17. Apr 28 02:50:25.803892 sshd[6198]: Accepted publickey for core from 4.175.71.9 port 58868 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:25.806427 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:25.815824 systemd-logind[1492]: New session 18 of user core. Apr 28 02:50:25.822864 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 28 02:50:26.190178 sshd[6198]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:26.199054 systemd[1]: sshd@15-10.230.12.190:22-4.175.71.9:58868.service: Deactivated successfully. Apr 28 02:50:26.206543 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 02:50:26.211070 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. Apr 28 02:50:26.232382 systemd[1]: Started sshd@16-10.230.12.190:22-4.175.71.9:58878.service - OpenSSH per-connection server daemon (4.175.71.9:58878). Apr 28 02:50:26.237085 systemd-logind[1492]: Removed session 18. Apr 28 02:50:26.412462 sshd[6208]: Accepted publickey for core from 4.175.71.9 port 58878 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:26.415219 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:26.422338 systemd-logind[1492]: New session 19 of user core. Apr 28 02:50:26.434006 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 02:50:26.674962 sshd[6208]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:26.680831 systemd[1]: sshd@16-10.230.12.190:22-4.175.71.9:58878.service: Deactivated successfully. Apr 28 02:50:26.684286 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 02:50:26.686000 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. Apr 28 02:50:26.687555 systemd-logind[1492]: Removed session 19. Apr 28 02:50:28.772353 systemd[1]: run-containerd-runc-k8s.io-692b5677c433f306a4349bc3ffa81f0f2fe97d9c2f57ebc5b9bee85294becc0b-runc.zzLa3T.mount: Deactivated successfully. Apr 28 02:50:31.707056 systemd[1]: Started sshd@17-10.230.12.190:22-4.175.71.9:58886.service - OpenSSH per-connection server daemon (4.175.71.9:58886). 
Apr 28 02:50:31.918591 sshd[6264]: Accepted publickey for core from 4.175.71.9 port 58886 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:31.921399 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:31.930333 systemd-logind[1492]: New session 20 of user core. Apr 28 02:50:31.935867 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 02:50:32.249560 update_engine[1494]: I20260428 02:50:32.249372 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 02:50:32.250733 update_engine[1494]: I20260428 02:50:32.250573 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 02:50:32.253001 update_engine[1494]: I20260428 02:50:32.252800 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 02:50:32.253483 update_engine[1494]: E20260428 02:50:32.253343 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 02:50:32.253483 update_engine[1494]: I20260428 02:50:32.253439 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 02:50:32.394955 sshd[6264]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:32.400790 systemd[1]: sshd@17-10.230.12.190:22-4.175.71.9:58886.service: Deactivated successfully. Apr 28 02:50:32.404349 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 02:50:32.406482 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. Apr 28 02:50:32.408827 systemd-logind[1492]: Removed session 20. Apr 28 02:50:37.432025 systemd[1]: Started sshd@18-10.230.12.190:22-4.175.71.9:42094.service - OpenSSH per-connection server daemon (4.175.71.9:42094). 
Apr 28 02:50:37.576640 sshd[6277]: Accepted publickey for core from 4.175.71.9 port 42094 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:37.581004 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:37.588606 systemd-logind[1492]: New session 21 of user core. Apr 28 02:50:37.601864 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 02:50:37.905442 sshd[6277]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:37.914466 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. Apr 28 02:50:37.915013 systemd[1]: sshd@18-10.230.12.190:22-4.175.71.9:42094.service: Deactivated successfully. Apr 28 02:50:37.917897 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 02:50:37.919248 systemd-logind[1492]: Removed session 21. Apr 28 02:50:37.932984 systemd[1]: Started sshd@19-10.230.12.190:22-4.175.71.9:42098.service - OpenSSH per-connection server daemon (4.175.71.9:42098). Apr 28 02:50:38.091068 sshd[6290]: Accepted publickey for core from 4.175.71.9 port 42098 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:38.094128 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:38.101403 systemd-logind[1492]: New session 22 of user core. Apr 28 02:50:38.108886 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 28 02:50:38.716287 sshd[6290]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:38.726289 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. Apr 28 02:50:38.727295 systemd[1]: sshd@19-10.230.12.190:22-4.175.71.9:42098.service: Deactivated successfully. Apr 28 02:50:38.731717 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 02:50:38.749061 systemd[1]: Started sshd@20-10.230.12.190:22-4.175.71.9:42114.service - OpenSSH per-connection server daemon (4.175.71.9:42114). 
Apr 28 02:50:38.750953 systemd-logind[1492]: Removed session 22. Apr 28 02:50:38.930568 sshd[6301]: Accepted publickey for core from 4.175.71.9 port 42114 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:38.933469 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:38.941139 systemd-logind[1492]: New session 23 of user core. Apr 28 02:50:38.944928 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 02:50:40.045684 sshd[6301]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:40.070235 systemd[1]: sshd@20-10.230.12.190:22-4.175.71.9:42114.service: Deactivated successfully. Apr 28 02:50:40.073392 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 02:50:40.076586 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit. Apr 28 02:50:40.086122 systemd[1]: Started sshd@21-10.230.12.190:22-4.175.71.9:42116.service - OpenSSH per-connection server daemon (4.175.71.9:42116). Apr 28 02:50:40.091533 systemd-logind[1492]: Removed session 23. Apr 28 02:50:40.267304 sshd[6322]: Accepted publickey for core from 4.175.71.9 port 42116 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:40.270153 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:40.279063 systemd-logind[1492]: New session 24 of user core. Apr 28 02:50:40.284886 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 28 02:50:41.309137 sshd[6322]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:41.316345 systemd[1]: sshd@21-10.230.12.190:22-4.175.71.9:42116.service: Deactivated successfully. Apr 28 02:50:41.320006 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 02:50:41.322390 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit. 
Apr 28 02:50:41.338966 systemd[1]: Started sshd@22-10.230.12.190:22-4.175.71.9:42118.service - OpenSSH per-connection server daemon (4.175.71.9:42118). Apr 28 02:50:41.341832 systemd-logind[1492]: Removed session 24. Apr 28 02:50:41.515022 sshd[6336]: Accepted publickey for core from 4.175.71.9 port 42118 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:41.518277 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:41.529200 systemd-logind[1492]: New session 25 of user core. Apr 28 02:50:41.536551 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 28 02:50:41.837999 sshd[6336]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:41.844883 systemd[1]: sshd@22-10.230.12.190:22-4.175.71.9:42118.service: Deactivated successfully. Apr 28 02:50:41.848828 systemd[1]: session-25.scope: Deactivated successfully. Apr 28 02:50:41.850902 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit. Apr 28 02:50:41.853145 systemd-logind[1492]: Removed session 25. Apr 28 02:50:42.250182 update_engine[1494]: I20260428 02:50:42.250006 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 02:50:42.251048 update_engine[1494]: I20260428 02:50:42.250830 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 02:50:42.251408 update_engine[1494]: I20260428 02:50:42.251368 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 02:50:42.251973 update_engine[1494]: E20260428 02:50:42.251766 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 02:50:42.251973 update_engine[1494]: I20260428 02:50:42.251847 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 02:50:42.251973 update_engine[1494]: I20260428 02:50:42.251876 1494 omaha_request_action.cc:617] Omaha request response: Apr 28 02:50:42.252277 update_engine[1494]: E20260428 02:50:42.252234 1494 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 02:50:42.268103 update_engine[1494]: I20260428 02:50:42.267843 1494 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 02:50:42.268103 update_engine[1494]: I20260428 02:50:42.267891 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 02:50:42.268103 update_engine[1494]: I20260428 02:50:42.267915 1494 update_attempter.cc:306] Processing Done. Apr 28 02:50:42.268103 update_engine[1494]: E20260428 02:50:42.268013 1494 update_attempter.cc:619] Update failed. Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271297 1494 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271332 1494 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271345 1494 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271517 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271601 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271636 1494 omaha_request_action.cc:272] Request: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271651 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.271999 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 02:50:42.272474 update_engine[1494]: I20260428 02:50:42.272287 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 02:50:42.274122 update_engine[1494]: E20260428 02:50:42.273640 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.273993 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.274014 1494 omaha_request_action.cc:617] Omaha request response: Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.274029 1494 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.274039 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.274049 1494 update_attempter.cc:306] Processing Done. Apr 28 02:50:42.274122 update_engine[1494]: I20260428 02:50:42.274062 1494 update_attempter.cc:310] Error event sent. 
Apr 28 02:50:42.280462 update_engine[1494]: I20260428 02:50:42.280306 1494 update_check_scheduler.cc:74] Next update check in 40m15s Apr 28 02:50:42.295504 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 02:50:42.295504 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 02:50:44.146912 kubelet[2688]: I0428 02:50:44.146675 2688 scope.go:117] "RemoveContainer" containerID="4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69" Apr 28 02:50:44.387671 containerd[1523]: time="2026-04-28T02:50:44.381085073Z" level=info msg="RemoveContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\"" Apr 28 02:50:44.441564 containerd[1523]: time="2026-04-28T02:50:44.441334130Z" level=info msg="RemoveContainer for \"4b829828c79d4387cdffe809dec1cee5d02f5a6f5d292d020dd2a0140cdd8f69\" returns successfully" Apr 28 02:50:44.456031 kubelet[2688]: I0428 02:50:44.455821 2688 scope.go:117] "RemoveContainer" containerID="77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609" Apr 28 02:50:44.457769 containerd[1523]: time="2026-04-28T02:50:44.457735197Z" level=info msg="RemoveContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\"" Apr 28 02:50:44.462240 containerd[1523]: time="2026-04-28T02:50:44.462207072Z" level=info msg="RemoveContainer for \"77a2b73bc739acde1d6b49b71ab7d56ba882688c2b3491802636b69503d36609\" returns successfully" Apr 28 02:50:44.467742 containerd[1523]: time="2026-04-28T02:50:44.467608602Z" level=info msg="StopPodSandbox for \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\"" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:44.776 [WARNING][6375] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:44.779 [INFO][6375] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:44.779 [INFO][6375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" iface="eth0" netns="" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:44.779 [INFO][6375] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:44.779 [INFO][6375] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.028 [INFO][6382] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.030 [INFO][6382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.031 [INFO][6382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.044 [WARNING][6382] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.044 [INFO][6382] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.046 [INFO][6382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:50:45.052569 containerd[1523]: 2026-04-28 02:50:45.049 [INFO][6375] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.058401 containerd[1523]: time="2026-04-28T02:50:45.058342473Z" level=info msg="TearDown network for sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" successfully" Apr 28 02:50:45.058909 containerd[1523]: time="2026-04-28T02:50:45.058408177Z" level=info msg="StopPodSandbox for \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" returns successfully" Apr 28 02:50:45.093809 containerd[1523]: time="2026-04-28T02:50:45.093541779Z" level=info msg="RemovePodSandbox for \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\"" Apr 28 02:50:45.124266 containerd[1523]: time="2026-04-28T02:50:45.123970261Z" level=info msg="Forcibly stopping sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\"" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.258 [WARNING][6396] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" WorkloadEndpoint="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.258 [INFO][6396] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.259 [INFO][6396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" iface="eth0" netns="" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.259 [INFO][6396] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.259 [INFO][6396] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.339 [INFO][6404] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.340 [INFO][6404] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.341 [INFO][6404] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.359 [WARNING][6404] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.359 [INFO][6404] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" HandleID="k8s-pod-network.9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Workload="srv--4dua5.gb1.brightbox.com-k8s-whisker--5c6685bb88--n77bq-eth0" Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.362 [INFO][6404] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 02:50:45.374921 containerd[1523]: 2026-04-28 02:50:45.369 [INFO][6396] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527" Apr 28 02:50:45.380885 containerd[1523]: time="2026-04-28T02:50:45.375169280Z" level=info msg="TearDown network for sandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" successfully" Apr 28 02:50:45.450757 containerd[1523]: time="2026-04-28T02:50:45.450689901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 28 02:50:45.451863 containerd[1523]: time="2026-04-28T02:50:45.451700104Z" level=info msg="RemovePodSandbox \"9b29cb6eeded6a99f4e0d33a2d1a441518de3aca4dcdba28e9aea8e15d9ba527\" returns successfully" Apr 28 02:50:46.888422 systemd[1]: Started sshd@23-10.230.12.190:22-4.175.71.9:38296.service - OpenSSH per-connection server daemon (4.175.71.9:38296). 
Apr 28 02:50:47.087962 sshd[6414]: Accepted publickey for core from 4.175.71.9 port 38296 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:47.091675 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:47.101985 systemd-logind[1492]: New session 26 of user core. Apr 28 02:50:47.106877 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 28 02:50:47.912703 sshd[6414]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:47.929141 systemd[1]: sshd@23-10.230.12.190:22-4.175.71.9:38296.service: Deactivated successfully. Apr 28 02:50:47.932415 systemd[1]: session-26.scope: Deactivated successfully. Apr 28 02:50:47.933826 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit. Apr 28 02:50:47.936004 systemd-logind[1492]: Removed session 26. Apr 28 02:50:52.947081 systemd[1]: Started sshd@24-10.230.12.190:22-4.175.71.9:38308.service - OpenSSH per-connection server daemon (4.175.71.9:38308). Apr 28 02:50:53.101485 sshd[6429]: Accepted publickey for core from 4.175.71.9 port 38308 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:53.104326 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:53.113720 systemd-logind[1492]: New session 27 of user core. Apr 28 02:50:53.122936 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 28 02:50:53.515492 sshd[6429]: pam_unix(sshd:session): session closed for user core Apr 28 02:50:53.520937 systemd[1]: sshd@24-10.230.12.190:22-4.175.71.9:38308.service: Deactivated successfully. Apr 28 02:50:53.524468 systemd[1]: session-27.scope: Deactivated successfully. Apr 28 02:50:53.526314 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit. Apr 28 02:50:53.529579 systemd-logind[1492]: Removed session 27. 
Apr 28 02:50:58.562360 systemd[1]: Started sshd@25-10.230.12.190:22-4.175.71.9:59698.service - OpenSSH per-connection server daemon (4.175.71.9:59698). Apr 28 02:50:58.875022 sshd[6489]: Accepted publickey for core from 4.175.71.9 port 59698 ssh2: RSA SHA256:iiLz+lc7mxPEbTttvp0f7ODVA4uvvQ8xummxfIoHFNU Apr 28 02:50:58.878031 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 02:50:58.912518 systemd-logind[1492]: New session 28 of user core. Apr 28 02:50:58.919922 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 28 02:51:00.245360 sshd[6489]: pam_unix(sshd:session): session closed for user core Apr 28 02:51:00.256270 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit. Apr 28 02:51:00.257366 systemd[1]: sshd@25-10.230.12.190:22-4.175.71.9:59698.service: Deactivated successfully. Apr 28 02:51:00.262544 systemd[1]: session-28.scope: Deactivated successfully. Apr 28 02:51:00.264968 systemd-logind[1492]: Removed session 28.