Mar 12 05:10:59.022376 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 05:10:59.022412 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 05:10:59.022425 kernel: BIOS-provided physical RAM map:
Mar 12 05:10:59.022439 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 05:10:59.022448 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 05:10:59.022458 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 05:10:59.022468 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 12 05:10:59.022478 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 12 05:10:59.022487 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 05:10:59.022496 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 05:10:59.023553 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 05:10:59.023567 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 05:10:59.023585 kernel: NX (Execute Disable) protection: active
Mar 12 05:10:59.023595 kernel: APIC: Static calls initialized
Mar 12 05:10:59.023607 kernel: SMBIOS 2.8 present.
Mar 12 05:10:59.023618 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 12 05:10:59.023629 kernel: Hypervisor detected: KVM
Mar 12 05:10:59.023644 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 05:10:59.023655 kernel: kvm-clock: using sched offset of 4417935595 cycles
Mar 12 05:10:59.023667 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 05:10:59.023678 kernel: tsc: Detected 2799.998 MHz processor
Mar 12 05:10:59.023689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 05:10:59.023700 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 05:10:59.023711 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 12 05:10:59.023722 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 05:10:59.023733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 05:10:59.023748 kernel: Using GB pages for direct mapping
Mar 12 05:10:59.023759 kernel: ACPI: Early table checksum verification disabled
Mar 12 05:10:59.023770 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 12 05:10:59.023781 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023792 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023803 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023813 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 12 05:10:59.023824 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023835 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023850 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023861 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:10:59.023872 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 12 05:10:59.023882 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 12 05:10:59.023893 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 12 05:10:59.023910 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 12 05:10:59.023922 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 12 05:10:59.023937 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 12 05:10:59.023949 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 12 05:10:59.023960 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 12 05:10:59.023971 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 12 05:10:59.023983 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 12 05:10:59.023994 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 12 05:10:59.024005 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 12 05:10:59.024016 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 12 05:10:59.024044 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 12 05:10:59.024055 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 12 05:10:59.024065 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 12 05:10:59.024077 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 12 05:10:59.024088 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 12 05:10:59.024098 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 12 05:10:59.024121 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 12 05:10:59.024132 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 12 05:10:59.024144 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 12 05:10:59.024159 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 12 05:10:59.024171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 12 05:10:59.024182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 12 05:10:59.024193 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 12 05:10:59.024205 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 12 05:10:59.024216 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 12 05:10:59.024228 kernel: Zone ranges:
Mar 12 05:10:59.024239 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 05:10:59.024250 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 12 05:10:59.024262 kernel: Normal empty
Mar 12 05:10:59.024278 kernel: Movable zone start for each node
Mar 12 05:10:59.024289 kernel: Early memory node ranges
Mar 12 05:10:59.024300 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 05:10:59.024311 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 12 05:10:59.024323 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 12 05:10:59.024334 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 05:10:59.024345 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 05:10:59.024357 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 12 05:10:59.024368 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 05:10:59.024384 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 05:10:59.024395 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 05:10:59.024406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 05:10:59.024418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 05:10:59.024429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 05:10:59.024440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 05:10:59.024452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 05:10:59.024463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 05:10:59.024474 kernel: TSC deadline timer available
Mar 12 05:10:59.024489 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 12 05:10:59.024501 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 05:10:59.024512 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 05:10:59.026730 kernel: Booting paravirtualized kernel on KVM
Mar 12 05:10:59.026750 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 05:10:59.026762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 12 05:10:59.026773 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 12 05:10:59.026785 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 12 05:10:59.026796 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 12 05:10:59.026814 kernel: kvm-guest: PV spinlocks enabled
Mar 12 05:10:59.026826 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 05:10:59.026839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 05:10:59.026851 kernel: random: crng init done
Mar 12 05:10:59.026862 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 05:10:59.026874 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 12 05:10:59.026885 kernel: Fallback order for Node 0: 0
Mar 12 05:10:59.026896 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 12 05:10:59.026912 kernel: Policy zone: DMA32
Mar 12 05:10:59.026924 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 05:10:59.026935 kernel: software IO TLB: area num 16.
Mar 12 05:10:59.026947 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194764K reserved, 0K cma-reserved)
Mar 12 05:10:59.026958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 12 05:10:59.026969 kernel: Kernel/User page tables isolation: enabled
Mar 12 05:10:59.026981 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 05:10:59.026992 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 05:10:59.027003 kernel: Dynamic Preempt: voluntary
Mar 12 05:10:59.027019 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 05:10:59.027032 kernel: rcu: RCU event tracing is enabled.
Mar 12 05:10:59.027043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 12 05:10:59.027055 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 05:10:59.027067 kernel: Rude variant of Tasks RCU enabled.
Mar 12 05:10:59.027088 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 05:10:59.027104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 05:10:59.027117 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 12 05:10:59.027128 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 12 05:10:59.027140 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 05:10:59.027152 kernel: Console: colour VGA+ 80x25
Mar 12 05:10:59.027164 kernel: printk: console [tty0] enabled
Mar 12 05:10:59.027180 kernel: printk: console [ttyS0] enabled
Mar 12 05:10:59.027192 kernel: ACPI: Core revision 20230628
Mar 12 05:10:59.027204 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 05:10:59.027216 kernel: x2apic enabled
Mar 12 05:10:59.027228 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 05:10:59.027244 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Mar 12 05:10:59.027256 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Mar 12 05:10:59.027268 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 05:10:59.027280 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 12 05:10:59.027292 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 12 05:10:59.027303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 05:10:59.027315 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 05:10:59.027327 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 05:10:59.027339 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 12 05:10:59.027351 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 12 05:10:59.027367 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 12 05:10:59.027379 kernel: MDS: Mitigation: Clear CPU buffers
Mar 12 05:10:59.027391 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 12 05:10:59.027402 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 12 05:10:59.027414 kernel: active return thunk: its_return_thunk
Mar 12 05:10:59.027426 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 12 05:10:59.027438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 05:10:59.027450 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 05:10:59.027462 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 05:10:59.027590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 05:10:59.027605 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 12 05:10:59.027623 kernel: Freeing SMP alternatives memory: 32K
Mar 12 05:10:59.027636 kernel: pid_max: default: 32768 minimum: 301
Mar 12 05:10:59.027648 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 05:10:59.027659 kernel: landlock: Up and running.
Mar 12 05:10:59.027671 kernel: SELinux: Initializing.
Mar 12 05:10:59.027683 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 05:10:59.027695 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 05:10:59.027707 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 12 05:10:59.027719 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:10:59.027731 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:10:59.027743 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:10:59.027760 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 12 05:10:59.027772 kernel: signal: max sigframe size: 1776
Mar 12 05:10:59.027784 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 05:10:59.027830 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 05:10:59.027843 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 05:10:59.027855 kernel: smp: Bringing up secondary CPUs ...
Mar 12 05:10:59.027866 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 05:10:59.027878 kernel: .... node #0, CPUs: #1
Mar 12 05:10:59.027890 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 12 05:10:59.027909 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 05:10:59.027921 kernel: smpboot: Max logical packages: 16
Mar 12 05:10:59.027933 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Mar 12 05:10:59.027945 kernel: devtmpfs: initialized
Mar 12 05:10:59.027957 kernel: x86/mm: Memory block size: 128MB
Mar 12 05:10:59.027969 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 05:10:59.027981 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 12 05:10:59.027993 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 05:10:59.028005 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 05:10:59.028022 kernel: audit: initializing netlink subsys (disabled)
Mar 12 05:10:59.028034 kernel: audit: type=2000 audit(1773292257.308:1): state=initialized audit_enabled=0 res=1
Mar 12 05:10:59.028046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 05:10:59.028058 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 05:10:59.028070 kernel: cpuidle: using governor menu
Mar 12 05:10:59.028082 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 05:10:59.028094 kernel: dca service started, version 1.12.1
Mar 12 05:10:59.028106 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 05:10:59.028118 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 05:10:59.028135 kernel: PCI: Using configuration type 1 for base access
Mar 12 05:10:59.028147 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 05:10:59.028159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 05:10:59.028171 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 05:10:59.028183 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 05:10:59.028195 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 05:10:59.028208 kernel: ACPI: Added _OSI(Module Device)
Mar 12 05:10:59.028220 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 05:10:59.028232 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 05:10:59.028248 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 05:10:59.028260 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 05:10:59.028272 kernel: ACPI: Interpreter enabled
Mar 12 05:10:59.028284 kernel: ACPI: PM: (supports S0 S5)
Mar 12 05:10:59.028296 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 05:10:59.028308 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 05:10:59.028320 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 05:10:59.028332 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 05:10:59.028344 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 05:10:59.028663 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 05:10:59.028845 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 05:10:59.029012 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 05:10:59.029030 kernel: PCI host bridge to bus 0000:00
Mar 12 05:10:59.029221 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 05:10:59.029377 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 05:10:59.032125 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 05:10:59.032324 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 12 05:10:59.032476 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 05:10:59.032732 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 12 05:10:59.032882 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 05:10:59.033079 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 05:10:59.033273 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 12 05:10:59.033447 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 12 05:10:59.034754 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 12 05:10:59.034960 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 12 05:10:59.035142 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 05:10:59.035346 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.035549 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 12 05:10:59.035762 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.035937 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 12 05:10:59.036143 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.036315 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 12 05:10:59.037087 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.037268 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 12 05:10:59.037451 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.037661 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 12 05:10:59.037847 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.038021 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 12 05:10:59.038194 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.038357 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 12 05:10:59.038560 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 12 05:10:59.038742 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 12 05:10:59.038950 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 12 05:10:59.039117 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 05:10:59.039283 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 12 05:10:59.039449 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 12 05:10:59.039676 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 12 05:10:59.039865 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 12 05:10:59.040033 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 05:10:59.040199 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 12 05:10:59.040365 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 12 05:10:59.040613 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 05:10:59.040785 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 05:10:59.040958 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 05:10:59.041153 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 12 05:10:59.041315 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 12 05:10:59.041487 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 05:10:59.041798 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 05:10:59.041979 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 12 05:10:59.042154 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 12 05:10:59.042325 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 05:10:59.042486 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 05:10:59.042678 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:10:59.042852 kernel: pci_bus 0000:02: extended config space not accessible
Mar 12 05:10:59.043047 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 12 05:10:59.043223 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 12 05:10:59.043419 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 05:10:59.043636 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 05:10:59.043829 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 12 05:10:59.044029 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 12 05:10:59.044203 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 05:10:59.044363 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 05:10:59.044551 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:10:59.044738 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 12 05:10:59.044918 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 12 05:10:59.045083 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 05:10:59.045246 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 05:10:59.045411 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:10:59.045607 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 05:10:59.045772 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 05:10:59.045935 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:10:59.046146 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 05:10:59.046346 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 05:10:59.046537 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:10:59.046711 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 05:10:59.046888 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 05:10:59.047060 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:10:59.047216 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 05:10:59.047397 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 05:10:59.047605 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:10:59.047825 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 05:10:59.048002 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 05:10:59.048165 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:10:59.048184 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 05:10:59.048197 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 05:10:59.048209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 05:10:59.048233 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 05:10:59.048245 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 05:10:59.048263 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 05:10:59.048275 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 05:10:59.048287 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 05:10:59.048299 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 05:10:59.048311 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 05:10:59.048322 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 05:10:59.048346 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 05:10:59.048357 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 05:10:59.048368 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 05:10:59.048384 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 05:10:59.048409 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 05:10:59.048421 kernel: iommu: Default domain type: Translated
Mar 12 05:10:59.048432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 05:10:59.048444 kernel: PCI: Using ACPI for IRQ routing
Mar 12 05:10:59.048468 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 05:10:59.048480 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 05:10:59.048492 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 12 05:10:59.048693 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 05:10:59.048870 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 05:10:59.049036 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 05:10:59.049055 kernel: vgaarb: loaded
Mar 12 05:10:59.049067 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 05:10:59.049079 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 05:10:59.049092 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 05:10:59.049104 kernel: pnp: PnP ACPI init
Mar 12 05:10:59.049314 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 05:10:59.049340 kernel: pnp: PnP ACPI: found 5 devices
Mar 12 05:10:59.049352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 05:10:59.049364 kernel: NET: Registered PF_INET protocol family
Mar 12 05:10:59.049377 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 05:10:59.049389 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 12 05:10:59.049401 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 05:10:59.049413 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 12 05:10:59.049425 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 12 05:10:59.049443 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 12 05:10:59.049465 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 05:10:59.049479 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 05:10:59.049491 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 05:10:59.049857 kernel: NET: Registered PF_XDP protocol family
Mar 12 05:10:59.050092 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 12 05:10:59.050261 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 12 05:10:59.050423 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 12 05:10:59.050641 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 12 05:10:59.050804 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 12 05:10:59.050965 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 12 05:10:59.051127 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 12 05:10:59.051287 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 12 05:10:59.051448 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 12 05:10:59.051677 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 12 05:10:59.051840 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 12 05:10:59.052002 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 12 05:10:59.052162 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 12 05:10:59.052323 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 12 05:10:59.052506 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 12 05:10:59.052736 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 12 05:10:59.052916 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 05:10:59.053122 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 05:10:59.053284 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 05:10:59.053450 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 12 05:10:59.053642 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 05:10:59.053807 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:10:59.053969 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 05:10:59.054133 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 12 05:10:59.054295 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 05:10:59.054457 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:10:59.054674 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 05:10:59.054849 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 12 05:10:59.055054 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 05:10:59.055236 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:10:59.055400 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 05:10:59.055620 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 12 05:10:59.055791 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 05:10:59.055954 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:10:59.056129 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 05:10:59.056290 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 12 05:10:59.056466 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 05:10:59.056696 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:10:59.056859 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 05:10:59.057060 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 12 05:10:59.057223 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 05:10:59.057394 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:10:59.057604 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 05:10:59.057768 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 12 05:10:59.057929 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 05:10:59.058091 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:10:59.058260 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 05:10:59.058421 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 12 05:10:59.058616 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 05:10:59.058781 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:10:59.058972 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 05:10:59.059125 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 05:10:59.059274 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 05:10:59.059422 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 12 05:10:59.059622 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 05:10:59.059771 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 12 05:10:59.060002 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 12 05:10:59.060160 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 12 05:10:59.060325 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:10:59.060497 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 12 05:10:59.060709 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 12 05:10:59.060881 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 12 05:10:59.061041 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:10:59.061210 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 12 05:10:59.061368 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 12 05:10:59.061608 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:10:59.061777 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 12 05:10:59.061939 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 12 05:10:59.062118 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:10:59.062327 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 12 05:10:59.062505 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 12 05:10:59.062696 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:10:59.062859 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 12 05:10:59.063023 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 12 05:10:59.063196 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:10:59.063358 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 12 05:10:59.063519 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 12 05:10:59.063731 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:10:59.063917 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 12 05:10:59.064081 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 12 05:10:59.064254 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:10:59.064280 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 05:10:59.064293 kernel: PCI: CLS 0 bytes, default 64
Mar 12 05:10:59.064305 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 12 05:10:59.064318 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Mar 12 05:10:59.064331 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 12 05:10:59.064343 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Mar 12 05:10:59.064356 kernel: Initialise system trusted keyrings Mar 12 05:10:59.064368 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 12 05:10:59.064381 kernel: Key type asymmetric registered Mar 12 05:10:59.064398 kernel: Asymmetric key parser 'x509' registered Mar 12 05:10:59.064411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 05:10:59.064423 kernel: io scheduler mq-deadline registered Mar 12 05:10:59.064435 kernel: io scheduler kyber registered Mar 12 05:10:59.064447 kernel: io scheduler bfq registered Mar 12 05:10:59.064681 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 12 05:10:59.064848 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 12 05:10:59.065011 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.065185 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 12 05:10:59.065347 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 12 05:10:59.065522 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.065701 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 12 05:10:59.065864 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 12 05:10:59.066026 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.066202 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 12 05:10:59.066370 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 12 05:10:59.066588 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.066753 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 12 05:10:59.066915 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 12 05:10:59.067076 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.067246 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 12 05:10:59.067427 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 12 05:10:59.067640 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.067806 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 12 05:10:59.067969 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 12 05:10:59.068150 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.068329 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 12 05:10:59.068506 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 12 05:10:59.068709 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:10:59.068730 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 05:10:59.068744 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 05:10:59.068757 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 05:10:59.068770 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 05:10:59.068790 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 12 05:10:59.068803 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 12 05:10:59.068816 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 05:10:59.068829 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 05:10:59.068842 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 05:10:59.069030 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 12 05:10:59.069185 kernel: rtc_cmos 00:03: registered as rtc0 Mar 12 05:10:59.069338 kernel: rtc_cmos 00:03: setting system clock to 2026-03-12T05:10:58 UTC (1773292258) Mar 12 05:10:59.069506 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 12 05:10:59.069524 kernel: intel_pstate: CPU model not supported Mar 12 05:10:59.069588 kernel: NET: Registered PF_INET6 protocol family Mar 12 05:10:59.069608 kernel: Segment Routing with IPv6 Mar 12 05:10:59.069621 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 05:10:59.069633 kernel: NET: Registered PF_PACKET protocol family Mar 12 05:10:59.069646 kernel: Key type dns_resolver registered Mar 12 05:10:59.069669 kernel: IPI shorthand broadcast: enabled Mar 12 05:10:59.069683 kernel: sched_clock: Marking stable (1309004152, 223836656)->(1649060904, -116220096) Mar 12 05:10:59.069702 kernel: registered taskstats version 1 Mar 12 05:10:59.069715 kernel: Loading compiled-in X.509 certificates Mar 12 05:10:59.069727 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 05:10:59.069740 kernel: Key type .fscrypt registered Mar 12 05:10:59.069752 kernel: Key type fscrypt-provisioning registered Mar 12 05:10:59.069765 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 05:10:59.069777 kernel: ima: Allocated hash algorithm: sha1 Mar 12 05:10:59.069790 kernel: ima: No architecture policies found Mar 12 05:10:59.069802 kernel: clk: Disabling unused clocks Mar 12 05:10:59.069820 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 12 05:10:59.069833 kernel: Write protecting the kernel read-only data: 36864k Mar 12 05:10:59.069846 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 12 05:10:59.069858 kernel: Run /init as init process Mar 12 05:10:59.069871 kernel: with arguments: Mar 12 05:10:59.069883 kernel: /init Mar 12 05:10:59.069896 kernel: with environment: Mar 12 05:10:59.069908 kernel: HOME=/ Mar 12 05:10:59.069920 kernel: TERM=linux Mar 12 05:10:59.069940 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 05:10:59.069956 systemd[1]: Detected virtualization kvm. Mar 12 05:10:59.070013 systemd[1]: Detected architecture x86-64. Mar 12 05:10:59.070026 systemd[1]: Running in initrd. Mar 12 05:10:59.070040 systemd[1]: No hostname configured, using default hostname. Mar 12 05:10:59.070053 systemd[1]: Hostname set to . Mar 12 05:10:59.070066 systemd[1]: Initializing machine ID from VM UUID. Mar 12 05:10:59.070086 systemd[1]: Queued start job for default target initrd.target. Mar 12 05:10:59.070100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 05:10:59.070113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 05:10:59.070135 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 12 05:10:59.070149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 05:10:59.070162 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 05:10:59.070176 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 05:10:59.070196 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 05:10:59.070211 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 05:10:59.070232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 05:10:59.070246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 05:10:59.070260 systemd[1]: Reached target paths.target - Path Units. Mar 12 05:10:59.070273 systemd[1]: Reached target slices.target - Slice Units. Mar 12 05:10:59.070295 systemd[1]: Reached target swap.target - Swaps. Mar 12 05:10:59.070308 systemd[1]: Reached target timers.target - Timer Units. Mar 12 05:10:59.070327 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 05:10:59.070341 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 05:10:59.070355 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 05:10:59.070368 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 12 05:10:59.070382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 05:10:59.070404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 05:10:59.070417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 05:10:59.070431 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 12 05:10:59.070445 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 05:10:59.070464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 05:10:59.070477 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 05:10:59.070491 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 05:10:59.070523 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 05:10:59.070555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 05:10:59.070570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 05:10:59.070627 systemd-journald[203]: Collecting audit messages is disabled. Mar 12 05:10:59.070664 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 05:10:59.070678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 05:10:59.070692 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 05:10:59.070711 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 05:10:59.070727 systemd-journald[203]: Journal started Mar 12 05:10:59.070753 systemd-journald[203]: Runtime Journal (/run/log/journal/275910f2c8cb422bb1747970c4a81848) is 4.7M, max 38.0M, 33.2M free. Mar 12 05:10:59.053568 systemd-modules-load[204]: Inserted module 'overlay' Mar 12 05:10:59.130033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 05:10:59.130071 kernel: Bridge firewalling registered Mar 12 05:10:59.098683 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 12 05:10:59.134535 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 05:10:59.135551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 12 05:10:59.136610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 05:10:59.145762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 05:10:59.152727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 05:10:59.163727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 05:10:59.166569 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 05:10:59.168754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 05:10:59.180756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 05:10:59.186682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 05:10:59.197809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 05:10:59.199003 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 05:10:59.202591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 05:10:59.208738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 05:10:59.227766 dracut-cmdline[237]: dracut-dracut-053 Mar 12 05:10:59.235025 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 05:10:59.245705 systemd-resolved[235]: Positive Trust Anchors: Mar 12 05:10:59.245723 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 05:10:59.245775 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 05:10:59.250697 systemd-resolved[235]: Defaulting to hostname 'linux'. Mar 12 05:10:59.252353 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 05:10:59.253188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 05:10:59.342554 kernel: SCSI subsystem initialized Mar 12 05:10:59.354604 kernel: Loading iSCSI transport class v2.0-870. Mar 12 05:10:59.367541 kernel: iscsi: registered transport (tcp) Mar 12 05:10:59.393672 kernel: iscsi: registered transport (qla4xxx) Mar 12 05:10:59.393810 kernel: QLogic iSCSI HBA Driver Mar 12 05:10:59.451616 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 05:10:59.465861 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 05:10:59.498170 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 12 05:10:59.500474 kernel: device-mapper: uevent: version 1.0.3 Mar 12 05:10:59.500497 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 05:10:59.548710 kernel: raid6: sse2x4 gen() 14314 MB/s Mar 12 05:10:59.566548 kernel: raid6: sse2x2 gen() 9303 MB/s Mar 12 05:10:59.585142 kernel: raid6: sse2x1 gen() 10226 MB/s Mar 12 05:10:59.585202 kernel: raid6: using algorithm sse2x4 gen() 14314 MB/s Mar 12 05:10:59.604212 kernel: raid6: .... xor() 7933 MB/s, rmw enabled Mar 12 05:10:59.604283 kernel: raid6: using ssse3x2 recovery algorithm Mar 12 05:10:59.629555 kernel: xor: automatically using best checksumming function avx Mar 12 05:10:59.814546 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 05:10:59.829666 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 05:10:59.840870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 05:10:59.858305 systemd-udevd[420]: Using default interface naming scheme 'v255'. Mar 12 05:10:59.865077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 05:10:59.873978 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 05:10:59.898419 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Mar 12 05:10:59.939274 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 05:10:59.946756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 05:11:00.070316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 05:11:00.080742 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 05:11:00.115591 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 05:11:00.117554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 12 05:11:00.119005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 05:11:00.121866 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 05:11:00.133163 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 05:11:00.157040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 05:11:00.208553 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 12 05:11:00.221203 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 05:11:00.221280 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 12 05:11:00.246410 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 05:11:00.246523 kernel: GPT:17805311 != 125829119 Mar 12 05:11:00.246545 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 05:11:00.249868 kernel: GPT:17805311 != 125829119 Mar 12 05:11:00.249910 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 05:11:00.249928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 05:11:00.257541 kernel: AVX version of gcm_enc/dec engaged. Mar 12 05:11:00.259366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 05:11:00.259594 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 05:11:00.264831 kernel: AES CTR mode by8 optimization enabled Mar 12 05:11:00.264627 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 05:11:00.265485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 05:11:00.265685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 05:11:00.267186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 05:11:00.280790 kernel: ACPI: bus type USB registered Mar 12 05:11:00.280786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 05:11:00.286090 kernel: usbcore: registered new interface driver usbfs Mar 12 05:11:00.286128 kernel: usbcore: registered new interface driver hub Mar 12 05:11:00.288528 kernel: usbcore: registered new device driver usb Mar 12 05:11:00.299532 kernel: libata version 3.00 loaded. Mar 12 05:11:00.314309 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 05:11:00.314637 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 05:11:00.324541 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 05:11:00.324844 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 05:11:00.354549 kernel: scsi host0: ahci Mar 12 05:11:00.358541 kernel: scsi host1: ahci Mar 12 05:11:00.358760 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Mar 12 05:11:00.359528 kernel: scsi host2: ahci Mar 12 05:11:00.360575 kernel: scsi host3: ahci Mar 12 05:11:00.365548 kernel: scsi host4: ahci Mar 12 05:11:00.365780 kernel: scsi host5: ahci Mar 12 05:11:00.366563 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 12 05:11:00.366597 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 12 05:11:00.366615 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 12 05:11:00.366632 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 12 05:11:00.366648 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 12 05:11:00.366664 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 12 05:11:00.370591 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 12 05:11:00.370871 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 12 05:11:00.371077 kernel: xhci_hcd 0000:03:00.0: hcc 
params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 12 05:11:00.373547 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 12 05:11:00.373778 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 12 05:11:00.373982 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 12 05:11:00.374180 kernel: hub 1-0:1.0: USB hub found Mar 12 05:11:00.374397 kernel: hub 1-0:1.0: 4 ports detected Mar 12 05:11:00.376267 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 12 05:11:00.377532 kernel: hub 2-0:1.0: USB hub found Mar 12 05:11:00.377782 kernel: hub 2-0:1.0: 4 ports detected Mar 12 05:11:00.392529 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (478) Mar 12 05:11:00.396027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 05:11:00.480178 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 05:11:00.488239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 05:11:00.495050 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 12 05:11:00.500883 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 05:11:00.501854 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 05:11:00.518900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 05:11:00.523711 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 05:11:00.527339 disk-uuid[560]: Primary Header is updated. Mar 12 05:11:00.527339 disk-uuid[560]: Secondary Entries is updated. Mar 12 05:11:00.527339 disk-uuid[560]: Secondary Header is updated. 
Mar 12 05:11:00.535586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 05:11:00.547803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 05:11:00.548996 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 05:11:00.555566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 05:11:00.618537 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 12 05:11:00.675633 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.678738 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.678774 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.681861 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.681890 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.683417 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 05:11:00.782533 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 12 05:11:00.788879 kernel: usbcore: registered new interface driver usbhid Mar 12 05:11:00.788921 kernel: usbhid: USB HID core driver Mar 12 05:11:00.796094 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 12 05:11:00.796151 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 12 05:11:01.553982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 05:11:01.555543 disk-uuid[561]: The operation has completed successfully. Mar 12 05:11:01.613070 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 05:11:01.613265 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 05:11:01.636738 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 12 05:11:01.643687 sh[589]: Success
Mar 12 05:11:01.660530 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Mar 12 05:11:01.736040 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 05:11:01.739177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 05:11:01.740976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 05:11:01.767766 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 05:11:01.767851 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:11:01.767872 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 05:11:01.769816 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 05:11:01.772347 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 05:11:01.783966 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 05:11:01.785413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 05:11:01.800959 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 05:11:01.804681 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 05:11:01.823101 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:11:01.823160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:11:01.824666 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:11:01.833809 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:11:01.850317 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 05:11:01.853558 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:11:01.860262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 05:11:01.869105 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 05:11:01.980036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 05:11:01.989986 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 05:11:02.012973 ignition[680]: Ignition 2.19.0
Mar 12 05:11:02.013029 ignition[680]: Stage: fetch-offline
Mar 12 05:11:02.016565 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 05:11:02.013120 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:02.013144 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:02.013332 ignition[680]: parsed url from cmdline: ""
Mar 12 05:11:02.013338 ignition[680]: no config URL provided
Mar 12 05:11:02.013347 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 05:11:02.013363 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Mar 12 05:11:02.013371 ignition[680]: failed to fetch config: resource requires networking
Mar 12 05:11:02.013900 ignition[680]: Ignition finished successfully
Mar 12 05:11:02.033094 systemd-networkd[773]: lo: Link UP
Mar 12 05:11:02.033111 systemd-networkd[773]: lo: Gained carrier
Mar 12 05:11:02.035368 systemd-networkd[773]: Enumeration completed
Mar 12 05:11:02.035867 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 05:11:02.035915 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:11:02.035920 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 05:11:02.038557 systemd[1]: Reached target network.target - Network.
Mar 12 05:11:02.038909 systemd-networkd[773]: eth0: Link UP
Mar 12 05:11:02.038917 systemd-networkd[773]: eth0: Gained carrier
Mar 12 05:11:02.038934 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:11:02.045723 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 05:11:02.068189 ignition[777]: Ignition 2.19.0
Mar 12 05:11:02.068208 ignition[777]: Stage: fetch
Mar 12 05:11:02.068501 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:02.068555 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:02.068708 ignition[777]: parsed url from cmdline: ""
Mar 12 05:11:02.068715 ignition[777]: no config URL provided
Mar 12 05:11:02.068724 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 05:11:02.068739 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Mar 12 05:11:02.068977 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 12 05:11:02.069004 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 12 05:11:02.069045 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 12 05:11:02.069320 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 12 05:11:02.093629 systemd-networkd[773]: eth0: DHCPv4 address 10.230.44.138/30, gateway 10.230.44.137 acquired from 10.230.44.137
Mar 12 05:11:02.270003 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Mar 12 05:11:02.283760 ignition[777]: GET result: OK
Mar 12 05:11:02.283996 ignition[777]: parsing config with SHA512: 2b3d7025ac881b48cdd6cb09c58ea07900e4865ae6377e9de2cebd8fb322ad6004a1fa19cb6c3a9b0bc080916d3440c81b0e5cd294e8a50c2ffe85c89a663ce1
Mar 12 05:11:02.290635 unknown[777]: fetched base config from "system"
Mar 12 05:11:02.290653 unknown[777]: fetched base config from "system"
Mar 12 05:11:02.290665 unknown[777]: fetched user config from "openstack"
Mar 12 05:11:02.291677 ignition[777]: fetch: fetch complete
Mar 12 05:11:02.291685 ignition[777]: fetch: fetch passed
Mar 12 05:11:02.293726 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 05:11:02.291755 ignition[777]: Ignition finished successfully
Mar 12 05:11:02.312940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 05:11:02.332303 ignition[784]: Ignition 2.19.0
Mar 12 05:11:02.332325 ignition[784]: Stage: kargs
Mar 12 05:11:02.332619 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:02.336108 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 05:11:02.332639 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:02.333751 ignition[784]: kargs: kargs passed
Mar 12 05:11:02.333823 ignition[784]: Ignition finished successfully
Mar 12 05:11:02.350884 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 05:11:02.371386 ignition[791]: Ignition 2.19.0
Mar 12 05:11:02.371414 ignition[791]: Stage: disks
Mar 12 05:11:02.371813 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:02.371834 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:02.375140 ignition[791]: disks: disks passed
Mar 12 05:11:02.375217 ignition[791]: Ignition finished successfully
Mar 12 05:11:02.379348 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 05:11:02.381228 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 05:11:02.382009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 05:11:02.383617 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 05:11:02.385080 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 05:11:02.386365 systemd[1]: Reached target basic.target - Basic System.
Mar 12 05:11:02.393754 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 05:11:02.414211 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 12 05:11:02.420430 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 05:11:02.424646 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 05:11:02.539534 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 05:11:02.540440 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 05:11:02.541820 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 05:11:02.549641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 05:11:02.554381 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 05:11:02.556035 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 05:11:02.563594 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 12 05:11:02.578592 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Mar 12 05:11:02.578656 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:11:02.578677 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:11:02.578695 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:11:02.578712 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:11:02.564432 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 05:11:02.564495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 05:11:02.585145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 05:11:02.586056 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 05:11:02.600072 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 05:11:02.659836 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 05:11:02.670387 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Mar 12 05:11:02.677576 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 05:11:02.683922 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 05:11:02.793827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 05:11:02.800667 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 05:11:02.803051 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 05:11:02.816959 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 05:11:02.819363 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:11:02.845368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 05:11:02.855238 ignition[923]: INFO : Ignition 2.19.0
Mar 12 05:11:02.856972 ignition[923]: INFO : Stage: mount
Mar 12 05:11:02.856972 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:02.856972 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:02.861423 ignition[923]: INFO : mount: mount passed
Mar 12 05:11:02.861423 ignition[923]: INFO : Ignition finished successfully
Mar 12 05:11:02.861946 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 05:11:03.516795 systemd-networkd[773]: eth0: Gained IPv6LL
Mar 12 05:11:05.024763 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8b22:24:19ff:fee6:2c8a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8b22:24:19ff:fee6:2c8a/64 assigned by NDisc.
Mar 12 05:11:05.024780 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 12 05:11:09.750891 coreos-metadata[809]: Mar 12 05:11:09.750 WARN failed to locate config-drive, using the metadata service API instead
Mar 12 05:11:09.772946 coreos-metadata[809]: Mar 12 05:11:09.772 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 12 05:11:09.785803 coreos-metadata[809]: Mar 12 05:11:09.785 INFO Fetch successful
Mar 12 05:11:09.787032 coreos-metadata[809]: Mar 12 05:11:09.787 INFO wrote hostname srv-ro1yv.gb1.brightbox.com to /sysroot/etc/hostname
Mar 12 05:11:09.789941 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 12 05:11:09.790139 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 12 05:11:09.796631 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 05:11:09.824760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 05:11:09.841550 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 12 05:11:09.845538 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:11:09.849077 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:11:09.849114 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:11:09.853526 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:11:09.858097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 05:11:09.891547 ignition[957]: INFO : Ignition 2.19.0
Mar 12 05:11:09.891547 ignition[957]: INFO : Stage: files
Mar 12 05:11:09.893384 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 05:11:09.893384 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:11:09.895109 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 05:11:09.895109 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 05:11:09.895109 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 05:11:09.899852 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 05:11:09.900904 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 05:11:09.900904 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 05:11:09.900628 unknown[957]: wrote ssh authorized keys file for user: core
Mar 12 05:11:09.903818 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 12 05:11:09.903818 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 12 05:11:09.903818 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 05:11:09.903818 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 05:11:10.065619 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 05:11:10.450843 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 05:11:10.450843 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 05:11:10.458956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 12 05:11:10.796341 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 05:11:12.926945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 05:11:12.926945 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 05:11:12.930433 ignition[957]: INFO : files: files passed
Mar 12 05:11:12.930433 ignition[957]: INFO : Ignition finished successfully
Mar 12 05:11:12.932422 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 05:11:12.942799 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 05:11:12.952655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 05:11:12.961059 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 05:11:12.961315 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 05:11:12.974602 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 05:11:12.976337 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 05:11:12.977822 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 05:11:12.978750 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 05:11:12.980249 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 05:11:12.986714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 05:11:13.029084 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 05:11:13.029315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 05:11:13.031455 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 05:11:13.032601 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 05:11:13.034137 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 05:11:13.039776 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 05:11:13.058822 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 05:11:13.064775 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 05:11:13.082103 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 05:11:13.083132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 05:11:13.084919 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 05:11:13.086323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 12 05:11:13.086487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 05:11:13.088303 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 05:11:13.089269 systemd[1]: Stopped target basic.target - Basic System. Mar 12 05:11:13.090720 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 05:11:13.092080 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 05:11:13.093528 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 05:11:13.095945 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 05:11:13.097364 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 05:11:13.098868 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 05:11:13.100318 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 05:11:13.101755 systemd[1]: Stopped target swap.target - Swaps. Mar 12 05:11:13.103001 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 05:11:13.103232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 05:11:13.104930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 05:11:13.105885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 05:11:13.107386 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 05:11:13.109600 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 05:11:13.110707 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 05:11:13.110886 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 05:11:13.112884 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 12 05:11:13.113047 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 05:11:13.114893 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 05:11:13.115064 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 05:11:13.123903 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 05:11:13.130183 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 05:11:13.130492 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 05:11:13.134830 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 05:11:13.136544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 05:11:13.136826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 05:11:13.139975 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 05:11:13.140150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 05:11:13.148600 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 05:11:13.148757 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 05:11:13.163755 ignition[1010]: INFO : Ignition 2.19.0 Mar 12 05:11:13.166760 ignition[1010]: INFO : Stage: umount Mar 12 05:11:13.166760 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 05:11:13.166760 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 12 05:11:13.166760 ignition[1010]: INFO : umount: umount passed Mar 12 05:11:13.166760 ignition[1010]: INFO : Ignition finished successfully Mar 12 05:11:13.169628 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 05:11:13.169776 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 05:11:13.172232 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 12 05:11:13.172363 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 05:11:13.173928 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 05:11:13.174006 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 05:11:13.175370 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 05:11:13.175436 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 05:11:13.176784 systemd[1]: Stopped target network.target - Network. Mar 12 05:11:13.178026 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 05:11:13.178103 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 05:11:13.179446 systemd[1]: Stopped target paths.target - Path Units. Mar 12 05:11:13.180766 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 05:11:13.182639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 05:11:13.183701 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 05:11:13.185133 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 05:11:13.186650 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 05:11:13.186714 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 05:11:13.187938 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 05:11:13.187998 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 05:11:13.189416 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 05:11:13.189495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 05:11:13.191056 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 05:11:13.191124 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 05:11:13.193440 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Mar 12 05:11:13.194897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 05:11:13.196705 systemd-networkd[773]: eth0: DHCPv6 lease lost Mar 12 05:11:13.199421 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 05:11:13.199611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 05:11:13.202119 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 05:11:13.202293 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 05:11:13.206804 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 05:11:13.207113 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 05:11:13.214701 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 05:11:13.216886 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 05:11:13.216991 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 05:11:13.218480 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 05:11:13.218577 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 05:11:13.220862 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 05:11:13.220940 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 05:11:13.222186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 05:11:13.222273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 05:11:13.225668 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 05:11:13.236605 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 05:11:13.236850 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 05:11:13.240962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 12 05:11:13.241060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 05:11:13.242701 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 05:11:13.242758 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 05:11:13.244330 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 05:11:13.244397 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 05:11:13.246384 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 05:11:13.246448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 05:11:13.247768 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 05:11:13.247858 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 05:11:13.255365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 05:11:13.258901 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 05:11:13.258998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 05:11:13.261009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 05:11:13.261088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 05:11:13.265655 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 05:11:13.265805 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 05:11:13.273604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 05:11:13.273791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 05:11:13.280585 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 05:11:13.285684 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 05:11:13.285867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 12 05:11:13.287489 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 05:11:13.288799 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 05:11:13.288868 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 05:11:13.295727 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 05:11:13.307476 systemd[1]: Switching root. Mar 12 05:11:13.344883 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Mar 12 05:11:13.344963 systemd-journald[203]: Journal stopped Mar 12 05:11:14.792314 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 05:11:14.792433 kernel: SELinux: policy capability open_perms=1 Mar 12 05:11:14.792466 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 05:11:14.792484 kernel: SELinux: policy capability always_check_network=0 Mar 12 05:11:14.792514 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 05:11:14.792535 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 05:11:14.792553 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 05:11:14.792570 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 05:11:14.792588 kernel: audit: type=1403 audit(1773292273.661:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 05:11:14.792635 systemd[1]: Successfully loaded SELinux policy in 59.480ms. Mar 12 05:11:14.792689 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.552ms. Mar 12 05:11:14.792711 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 05:11:14.792745 systemd[1]: Detected virtualization kvm. Mar 12 05:11:14.792764 systemd[1]: Detected architecture x86-64. 
Mar 12 05:11:14.792783 systemd[1]: Detected first boot. Mar 12 05:11:14.792804 systemd[1]: Hostname set to . Mar 12 05:11:14.792826 systemd[1]: Initializing machine ID from VM UUID. Mar 12 05:11:14.792846 zram_generator::config[1073]: No configuration found. Mar 12 05:11:14.792883 systemd[1]: Populated /etc with preset unit settings. Mar 12 05:11:14.792904 systemd[1]: Queued start job for default target multi-user.target. Mar 12 05:11:14.792923 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 05:11:14.792944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 05:11:14.792964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 05:11:14.792990 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 05:11:14.793016 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 05:11:14.793037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 05:11:14.793068 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 05:11:14.793088 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 05:11:14.793108 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 05:11:14.793126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 05:11:14.793146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 05:11:14.793197 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 05:11:14.793233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 05:11:14.793254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 12 05:11:14.793273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 05:11:14.793304 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 05:11:14.793330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 05:11:14.793350 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 05:11:14.793369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 05:11:14.793393 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 05:11:14.793413 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 05:11:14.793432 systemd[1]: Reached target swap.target - Swaps.
Mar 12 05:11:14.793462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 05:11:14.793484 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 05:11:14.795530 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 05:11:14.795601 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 05:11:14.795624 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 05:11:14.795653 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 05:11:14.795683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 05:11:14.795706 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 05:11:14.795726 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 05:11:14.795745 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 05:11:14.795773 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 05:11:14.795791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:14.795810 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 05:11:14.795838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 05:11:14.795868 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 05:11:14.795896 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 05:11:14.795916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 05:11:14.795934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 05:11:14.795969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 05:11:14.795988 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 05:11:14.796008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 05:11:14.796027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 05:11:14.796046 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 05:11:14.796082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 05:11:14.796104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 05:11:14.796124 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 12 05:11:14.796150 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 12 05:11:14.796191 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 05:11:14.796210 kernel: fuse: init (API version 7.39)
Mar 12 05:11:14.796229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 05:11:14.796254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 05:11:14.796275 kernel: loop: module loaded
Mar 12 05:11:14.796306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 05:11:14.796369 systemd-journald[1180]: Collecting audit messages is disabled.
Mar 12 05:11:14.796414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 05:11:14.796436 systemd-journald[1180]: Journal started
Mar 12 05:11:14.796472 systemd-journald[1180]: Runtime Journal (/run/log/journal/275910f2c8cb422bb1747970c4a81848) is 4.7M, max 38.0M, 33.2M free.
Mar 12 05:11:14.821576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:14.827539 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 05:11:14.829709 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 05:11:14.837068 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 05:11:14.838400 kernel: ACPI: bus type drm_connector registered
Mar 12 05:11:14.837996 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 05:11:14.838846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 05:11:14.840080 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 05:11:14.841103 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 05:11:14.847458 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 05:11:14.848786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 05:11:14.849934 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 05:11:14.850237 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 05:11:14.851434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 05:11:14.851709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 05:11:14.852893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 05:11:14.853128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 05:11:14.854514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 05:11:14.854760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 05:11:14.855904 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 05:11:14.856206 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 05:11:14.857595 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 05:11:14.857900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 05:11:14.860603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 05:11:14.864054 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 05:11:14.865294 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 05:11:14.881184 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 05:11:14.890753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 05:11:14.899043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 05:11:14.901627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 05:11:14.913756 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 05:11:14.927668 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 05:11:14.928623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 05:11:14.935207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 05:11:14.936070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 05:11:14.949708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 05:11:14.961749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 05:11:14.970811 systemd-journald[1180]: Time spent on flushing to /var/log/journal/275910f2c8cb422bb1747970c4a81848 is 72.706ms for 1125 entries.
Mar 12 05:11:14.970811 systemd-journald[1180]: System Journal (/var/log/journal/275910f2c8cb422bb1747970c4a81848) is 8.0M, max 584.8M, 576.8M free.
Mar 12 05:11:15.061840 systemd-journald[1180]: Received client request to flush runtime journal.
Mar 12 05:11:14.971659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 05:11:14.980426 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 05:11:14.990403 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 05:11:14.995203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 05:11:15.051106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 05:11:15.064401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 05:11:15.075440 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 12 05:11:15.075465 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 12 05:11:15.094075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 05:11:15.108786 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 05:11:15.144589 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 05:11:15.155759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 12 05:11:15.182327 udevadm[1245]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 12 05:11:15.189552 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 05:11:15.205843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 05:11:15.232044 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Mar 12 05:11:15.232638 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Mar 12 05:11:15.241064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 05:11:15.665826 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 05:11:15.679890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 05:11:15.716885 systemd-udevd[1254]: Using default interface naming scheme 'v255'.
Mar 12 05:11:15.744644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 05:11:15.757711 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 05:11:15.783727 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 05:11:15.862679 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 05:11:15.867411 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 12 05:11:15.941561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1269)
Mar 12 05:11:16.003076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 05:11:16.015980 systemd-networkd[1260]: lo: Link UP
Mar 12 05:11:16.015994 systemd-networkd[1260]: lo: Gained carrier
Mar 12 05:11:16.020814 systemd-networkd[1260]: Enumeration completed
Mar 12 05:11:16.021003 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 05:11:16.024366 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:11:16.024379 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 05:11:16.027397 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:11:16.027460 systemd-networkd[1260]: eth0: Link UP
Mar 12 05:11:16.027467 systemd-networkd[1260]: eth0: Gained carrier
Mar 12 05:11:16.027482 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:11:16.031339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 05:11:16.042606 systemd-networkd[1260]: eth0: DHCPv4 address 10.230.44.138/30, gateway 10.230.44.137 acquired from 10.230.44.137
Mar 12 05:11:16.057806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 12 05:11:16.063552 kernel: ACPI: button: Power Button [PWRF]
Mar 12 05:11:16.079580 kernel: mousedev: PS/2 mouse device common for all mice
Mar 12 05:11:16.111099 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 12 05:11:16.117085 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 12 05:11:16.119842 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 12 05:11:16.132537 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 12 05:11:16.184867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 05:11:16.383325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 05:11:16.403348 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 12 05:11:16.411841 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 12 05:11:16.433142 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 05:11:16.467337 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 12 05:11:16.469377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 05:11:16.475770 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 12 05:11:16.493011 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 05:11:16.528296 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 12 05:11:16.530220 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 05:11:16.531337 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 05:11:16.531418 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 05:11:16.532132 systemd[1]: Reached target machines.target - Containers.
Mar 12 05:11:16.534750 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 12 05:11:16.540726 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 05:11:16.543709 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 05:11:16.545821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 05:11:16.553520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 05:11:16.559688 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 12 05:11:16.570699 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 05:11:16.576021 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 05:11:16.586735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 05:11:16.601282 kernel: loop0: detected capacity change from 0 to 8
Mar 12 05:11:16.615551 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 05:11:16.629208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 05:11:16.631803 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 12 05:11:16.645533 kernel: loop1: detected capacity change from 0 to 142488
Mar 12 05:11:16.691712 kernel: loop2: detected capacity change from 0 to 140768
Mar 12 05:11:16.731559 kernel: loop3: detected capacity change from 0 to 228704
Mar 12 05:11:16.779422 kernel: loop4: detected capacity change from 0 to 8
Mar 12 05:11:16.785599 kernel: loop5: detected capacity change from 0 to 142488
Mar 12 05:11:16.805532 kernel: loop6: detected capacity change from 0 to 140768
Mar 12 05:11:16.825555 kernel: loop7: detected capacity change from 0 to 228704
Mar 12 05:11:16.840491 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 12 05:11:16.843556 (sd-merge)[1318]: Merged extensions into '/usr'.
Mar 12 05:11:16.850646 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 05:11:16.850688 systemd[1]: Reloading...
Mar 12 05:11:16.956558 zram_generator::config[1343]: No configuration found.
Mar 12 05:11:17.165871 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 05:11:17.175034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 05:11:17.273563 systemd[1]: Reloading finished in 420 ms.
Mar 12 05:11:17.300462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 05:11:17.307215 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 05:11:17.326793 systemd[1]: Starting ensure-sysext.service...
Mar 12 05:11:17.331061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 05:11:17.339704 systemd[1]: Reloading requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)...
Mar 12 05:11:17.339859 systemd[1]: Reloading...
Mar 12 05:11:17.381718 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 05:11:17.382306 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 05:11:17.383861 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 05:11:17.384269 systemd-tmpfiles[1410]: ACLs are not supported, ignoring.
Mar 12 05:11:17.384380 systemd-tmpfiles[1410]: ACLs are not supported, ignoring.
Mar 12 05:11:17.389426 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 05:11:17.389443 systemd-tmpfiles[1410]: Skipping /boot
Mar 12 05:11:17.407893 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 05:11:17.407914 systemd-tmpfiles[1410]: Skipping /boot
Mar 12 05:11:17.441582 zram_generator::config[1438]: No configuration found.
Mar 12 05:11:17.532760 systemd-networkd[1260]: eth0: Gained IPv6LL
Mar 12 05:11:17.640289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 05:11:17.732465 systemd[1]: Reloading finished in 391 ms.
Mar 12 05:11:17.756370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 05:11:17.765300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 05:11:17.776744 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 05:11:17.780678 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 05:11:17.790729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 05:11:17.797567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 05:11:17.808746 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 05:11:17.825345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:17.825664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 05:11:17.830847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 05:11:17.837906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 05:11:17.850757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 05:11:17.856745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 05:11:17.856924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:17.860436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 05:11:17.867279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 05:11:17.871182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 05:11:17.871441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 05:11:17.884805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 05:11:17.885075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 05:11:17.889396 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 05:11:17.910461 augenrules[1536]: No rules
Mar 12 05:11:17.910851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:17.911308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 05:11:17.925467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 05:11:17.932853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 05:11:17.945061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 05:11:17.947530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 05:11:17.960634 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 05:11:17.963608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:17.971063 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 05:11:17.973916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 05:11:17.976207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 05:11:17.977271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 05:11:17.979718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 05:11:17.979968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 05:11:17.982004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 05:11:17.982316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 05:11:17.985214 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 05:11:18.000763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:18.001262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 05:11:18.006853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 05:11:18.019352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 05:11:18.032869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 05:11:18.039853 systemd-resolved[1514]: Positive Trust Anchors:
Mar 12 05:11:18.040350 systemd-resolved[1514]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 05:11:18.041732 systemd-resolved[1514]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 05:11:18.047626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 05:11:18.055995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 05:11:18.056930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 05:11:18.057698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:11:18.059843 systemd-resolved[1514]: Using system hostname 'srv-ro1yv.gb1.brightbox.com'.
Mar 12 05:11:18.064473 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 05:11:18.066687 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 05:11:18.068679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 05:11:18.068927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 05:11:18.070721 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 05:11:18.071063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 05:11:18.072732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 05:11:18.073102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 05:11:18.074646 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 05:11:18.075049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 05:11:18.080842 systemd[1]: Finished ensure-sysext.service.
Mar 12 05:11:18.087452 systemd[1]: Reached target network.target - Network.
Mar 12 05:11:18.088653 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 05:11:18.089452 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 05:11:18.090359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 05:11:18.090607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 05:11:18.097740 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 12 05:11:18.189013 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 05:11:18.190674 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 05:11:18.192764 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 05:11:18.193586 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 05:11:18.194352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 05:11:18.195210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 05:11:18.195262 systemd[1]: Reached target paths.target - Path Units.
Mar 12 05:11:18.195917 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 05:11:18.196938 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 05:11:18.197810 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 05:11:18.198562 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 05:11:18.200064 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 05:11:18.202992 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 05:11:18.205767 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 05:11:18.208971 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 05:11:18.209795 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 05:11:18.210460 systemd[1]: Reached target basic.target - Basic System.
Mar 12 05:11:18.211388 systemd[1]: System is tainted: cgroupsv1
Mar 12 05:11:18.211442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 05:11:18.211493 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 05:11:18.211633 systemd-networkd[1260]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8b22:24:19ff:fee6:2c8a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8b22:24:19ff:fee6:2c8a/64 assigned by NDisc.
Mar 12 05:11:18.211640 systemd-networkd[1260]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 12 05:11:18.218645 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 05:11:18.222447 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 05:11:18.226747 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 05:11:18.233852 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 05:11:18.242978 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 05:11:18.244214 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 05:11:18.254643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:11:18.268766 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 05:11:18.272203 jq[1584]: false
Mar 12 05:11:18.278772 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 05:11:18.290463 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 05:11:18.299015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 05:11:18.308854 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 05:11:18.308527 dbus-daemon[1583]: [system] SELinux support is enabled
Mar 12 05:11:18.327795 dbus-daemon[1583]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1260 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 12 05:11:18.334748 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found loop4
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found loop5
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found loop6
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found loop7
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda1
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda2
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda3
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found usr
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda4
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda6
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda7
Mar 12 05:11:18.339528 extend-filesystems[1586]: Found vda9
Mar 12 05:11:18.339528 extend-filesystems[1586]: Checking size of /dev/vda9
Mar 12 05:11:18.391499 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 12 05:11:18.339262 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 05:11:18.391776 extend-filesystems[1586]: Resized partition /dev/vda9
Mar 12 05:11:18.352701 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 05:11:18.398899 extend-filesystems[1614]: resize2fs 1.47.1 (20-May-2024)
Mar 12 05:11:18.382700 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 05:11:18.384902 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 05:11:18.416113 jq[1617]: true
Mar 12 05:11:18.404266 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 05:11:18.404675 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 05:11:18.413137 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 05:11:18.414847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 05:11:18.419974 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 05:11:18.420299 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 05:11:18.440367 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 12 05:11:18.465710 update_engine[1610]: I20260312 05:11:18.465106 1610 main.cc:92] Flatcar Update Engine starting
Mar 12 05:11:18.474756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 05:11:18.474809 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 05:11:18.476630 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 05:11:18.476669 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 05:11:18.480174 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 12 05:11:18.488527 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1270)
Mar 12 05:11:18.493742 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 12 05:11:18.495018 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 05:11:18.498032 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 05:11:18.506401 update_engine[1610]: I20260312 05:11:18.505607 1610 update_check_scheduler.cc:74] Next update check in 11m27s
Mar 12 05:11:18.499739 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 05:11:18.506936 jq[1626]: true
Mar 12 05:11:18.513205 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 05:11:18.542848 tar[1624]: linux-amd64/LICENSE
Mar 12 05:11:18.542848 tar[1624]: linux-amd64/helm
Mar 12 05:11:18.719408 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 12 05:11:18.756760 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 12 05:11:18.756760 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 12 05:11:18.756760 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 12 05:11:18.762549 extend-filesystems[1586]: Resized filesystem in /dev/vda9
Mar 12 05:11:18.761112 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 05:11:18.761570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 05:11:18.785163 bash[1663]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 05:11:18.818314 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 05:11:18.841825 systemd[1]: Starting sshkeys.service...
Mar 12 05:11:18.878771 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 12 05:11:18.878820 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 05:11:18.881788 systemd-logind[1604]: New seat seat0.
Mar 12 05:11:18.883702 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 05:11:18.915668 containerd[1633]: time="2026-03-12T05:11:18.907578772Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 12 05:11:18.930863 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 12 05:11:18.940899 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 12 05:11:18.942138 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 12 05:11:18.945191 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 12 05:11:18.952344 dbus-daemon[1583]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1640 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 12 05:11:18.965714 containerd[1633]: time="2026-03-12T05:11:18.965619277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.966179 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.975498899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.978814990Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.978865016Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.979161145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.979194977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.979309106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 05:11:18.979721 containerd[1633]: time="2026-03-12T05:11:18.979333957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.980066 containerd[1633]: time="2026-03-12T05:11:18.979777161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 05:11:18.980066 containerd[1633]: time="2026-03-12T05:11:18.979807067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.980066 containerd[1633]: time="2026-03-12T05:11:18.979832142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 05:11:18.980066 containerd[1633]: time="2026-03-12T05:11:18.979853276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.980066 containerd[1633]: time="2026-03-12T05:11:18.980003679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.983088 containerd[1633]: time="2026-03-12T05:11:18.980385736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 12 05:11:18.989811 containerd[1633]: time="2026-03-12T05:11:18.989747138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 05:11:18.994545 containerd[1633]: time="2026-03-12T05:11:18.993611892Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 12 05:11:18.994545 containerd[1633]: time="2026-03-12T05:11:18.993878715Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 12 05:11:18.994545 containerd[1633]: time="2026-03-12T05:11:18.993964211Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.005750402Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.005868605Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.005896538Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.005922546Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.005952083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 12 05:11:19.006189 containerd[1633]: time="2026-03-12T05:11:19.006176685Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 12 05:11:19.006866 containerd[1633]: time="2026-03-12T05:11:19.006536582Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 12 05:11:19.006866 containerd[1633]: time="2026-03-12T05:11:19.006768597Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 12 05:11:19.006866 containerd[1633]: time="2026-03-12T05:11:19.006793445Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 12 05:11:19.006866 containerd[1633]: time="2026-03-12T05:11:19.006826709Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 12 05:11:19.006866 containerd[1633]: time="2026-03-12T05:11:19.006859886Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006879425Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006897563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006916571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006937864Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006957940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.006976012Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.007001919Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.007034826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007097 containerd[1633]: time="2026-03-12T05:11:19.007077300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007098487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007119094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007136519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007154616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007171016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007188056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007205837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007247561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007268328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007285662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007323021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007358277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 12 05:11:19.007397 containerd[1633]: time="2026-03-12T05:11:19.007396294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.010192 containerd[1633]: time="2026-03-12T05:11:19.007418204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.010192 containerd[1633]: time="2026-03-12T05:11:19.007434283Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 12 05:11:19.016111 containerd[1633]: time="2026-03-12T05:11:19.007493707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 12 05:11:19.014593 polkitd[1675]: Started polkitd version 121
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.016850504Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.016900257Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.016922938Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.016942571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.016985704Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.017020189Z" level=info msg="NRI interface is disabled by configuration."
Mar 12 05:11:19.019658 containerd[1633]: time="2026-03-12T05:11:19.017085824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 12 05:11:19.019905 containerd[1633]: time="2026-03-12T05:11:19.017499607Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 12 05:11:19.019905 containerd[1633]: time="2026-03-12T05:11:19.017619338Z" level=info msg="Connect containerd service"
Mar 12 05:11:19.019905 containerd[1633]: time="2026-03-12T05:11:19.017673085Z" level=info msg="using legacy CRI server"
Mar 12 05:11:19.019905 containerd[1633]: time="2026-03-12T05:11:19.017688506Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 05:11:19.019905 containerd[1633]: time="2026-03-12T05:11:19.017838971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.026676706Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.026865709Z" level=info msg="Start subscribing containerd event"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.026965626Z" level=info msg="Start recovering state"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027120781Z" level=info msg="Start event monitor"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027157028Z" level=info msg="Start snapshots syncer"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027180452Z" level=info msg="Start cni network conf syncer for default"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027194375Z" level=info msg="Start streaming server"
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027428889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 05:11:19.030296 containerd[1633]: time="2026-03-12T05:11:19.027537534Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 05:11:19.027788 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 05:11:19.034734 polkitd[1675]: Loading rules from directory /etc/polkit-1/rules.d
Mar 12 05:11:19.034858 polkitd[1675]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 12 05:11:19.035660 containerd[1633]: time="2026-03-12T05:11:19.035628955Z" level=info msg="containerd successfully booted in 0.135647s"
Mar 12 05:11:19.037036 polkitd[1675]: Finished loading, compiling and executing 2 rules
Mar 12 05:11:19.038770 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 12 05:11:19.040500 polkitd[1675]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 12 05:11:19.038969 systemd[1]: Started polkit.service - Authorization Manager.
Mar 12 05:11:19.070809 systemd-hostnamed[1640]: Hostname set to (static)
Mar 12 05:11:19.132958 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 05:11:19.224676 sshd_keygen[1625]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 05:11:19.226636 systemd-timesyncd[1576]: Contacted time server 212.71.233.44:123 (0.flatcar.pool.ntp.org).
Mar 12 05:11:19.227117 systemd-timesyncd[1576]: Initial clock synchronization to Thu 2026-03-12 05:11:19.569215 UTC.
Mar 12 05:11:19.290164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 05:11:19.303706 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 05:11:19.326204 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 05:11:19.327788 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 05:11:19.342240 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 05:11:19.361595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 05:11:19.370532 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 05:11:19.381242 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 12 05:11:19.385038 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 05:11:19.723583 tar[1624]: linux-amd64/README.md
Mar 12 05:11:19.742128 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 05:11:20.007768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:11:20.025292 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:11:20.661222 kubelet[1735]: E0312 05:11:20.661038 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:11:20.663279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:11:20.663611 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:11:23.349875 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 12 05:11:23.356945 systemd[1]: Started sshd@0-10.230.44.138:22-20.161.92.111:47664.service - OpenSSH per-connection server daemon (20.161.92.111:47664).
Mar 12 05:11:23.940227 sshd[1746]: Accepted publickey for core from 20.161.92.111 port 47664 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:23.944180 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:23.963244 systemd-logind[1604]: New session 1 of user core.
Mar 12 05:11:23.966438 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 12 05:11:23.978101 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 12 05:11:24.011997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 12 05:11:24.026063 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 12 05:11:24.035390 (systemd)[1752]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 12 05:11:24.181509 systemd[1752]: Queued start job for default target default.target.
Mar 12 05:11:24.182084 systemd[1752]: Created slice app.slice - User Application Slice.
Mar 12 05:11:24.182122 systemd[1752]: Reached target paths.target - Paths.
Mar 12 05:11:24.182146 systemd[1752]: Reached target timers.target - Timers.
Mar 12 05:11:24.190678 systemd[1752]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 12 05:11:24.199128 systemd[1752]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 12 05:11:24.199822 systemd[1752]: Reached target sockets.target - Sockets.
Mar 12 05:11:24.199855 systemd[1752]: Reached target basic.target - Basic System.
Mar 12 05:11:24.199969 systemd[1752]: Reached target default.target - Main User Target.
Mar 12 05:11:24.200054 systemd[1752]: Startup finished in 154ms.
Mar 12 05:11:24.201192 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 12 05:11:24.208148 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 12 05:11:24.433995 login[1721]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 12 05:11:24.442638 login[1720]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 12 05:11:24.448787 systemd-logind[1604]: New session 2 of user core.
Mar 12 05:11:24.456942 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 12 05:11:24.469600 systemd-logind[1604]: New session 3 of user core.
Mar 12 05:11:24.479696 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 12 05:11:24.627056 systemd[1]: Started sshd@1-10.230.44.138:22-20.161.92.111:47676.service - OpenSSH per-connection server daemon (20.161.92.111:47676).
Mar 12 05:11:25.199560 sshd[1792]: Accepted publickey for core from 20.161.92.111 port 47676 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:25.201725 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:25.208636 systemd-logind[1604]: New session 4 of user core.
Mar 12 05:11:25.218096 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 12 05:11:25.347644 coreos-metadata[1581]: Mar 12 05:11:25.347 WARN failed to locate config-drive, using the metadata service API instead
Mar 12 05:11:25.372319 coreos-metadata[1581]: Mar 12 05:11:25.372 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 12 05:11:25.378662 coreos-metadata[1581]: Mar 12 05:11:25.378 INFO Fetch failed with 404: resource not found
Mar 12 05:11:25.378662 coreos-metadata[1581]: Mar 12 05:11:25.378 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 12 05:11:25.379210 coreos-metadata[1581]: Mar 12 05:11:25.379 INFO Fetch successful
Mar 12 05:11:25.379340 coreos-metadata[1581]: Mar 12 05:11:25.379 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 12 05:11:25.393875 coreos-metadata[1581]: Mar 12 05:11:25.393 INFO Fetch successful
Mar 12 05:11:25.394081 coreos-metadata[1581]: Mar 12 05:11:25.394 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 12 05:11:25.408903 coreos-metadata[1581]: Mar 12 05:11:25.408 INFO Fetch successful
Mar 12 05:11:25.409087 coreos-metadata[1581]: Mar 12 05:11:25.409 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 12 05:11:25.434546 coreos-metadata[1581]: Mar 12 05:11:25.434 INFO Fetch successful
Mar 12 05:11:25.434781 coreos-metadata[1581]: Mar 12 05:11:25.434 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 12 05:11:25.462292 coreos-metadata[1581]: Mar 12 05:11:25.461 INFO Fetch successful
Mar 12 05:11:25.499664 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 12 05:11:25.502429 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 05:11:25.608337 sshd[1792]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:25.613175 systemd[1]: sshd@1-10.230.44.138:22-20.161.92.111:47676.service: Deactivated successfully.
Mar 12 05:11:25.617716 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Mar 12 05:11:25.619266 systemd[1]: session-4.scope: Deactivated successfully.
Mar 12 05:11:25.621454 systemd-logind[1604]: Removed session 4.
Mar 12 05:11:25.702010 systemd[1]: Started sshd@2-10.230.44.138:22-20.161.92.111:47680.service - OpenSSH per-connection server daemon (20.161.92.111:47680).
Mar 12 05:11:26.188597 coreos-metadata[1672]: Mar 12 05:11:26.188 WARN failed to locate config-drive, using the metadata service API instead
Mar 12 05:11:26.211243 coreos-metadata[1672]: Mar 12 05:11:26.210 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 12 05:11:26.233192 coreos-metadata[1672]: Mar 12 05:11:26.233 INFO Fetch successful
Mar 12 05:11:26.233414 coreos-metadata[1672]: Mar 12 05:11:26.233 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 12 05:11:26.276565 sshd[1810]: Accepted publickey for core from 20.161.92.111 port 47680 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:26.278326 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:26.279744 coreos-metadata[1672]: Mar 12 05:11:26.279 INFO Fetch successful
Mar 12 05:11:26.282656 unknown[1672]: wrote ssh authorized keys file for user: core
Mar 12 05:11:26.290600 systemd-logind[1604]: New session 5 of user core.
Mar 12 05:11:26.314748 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 05:11:26.340106 update-ssh-keys[1815]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 05:11:26.339899 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 12 05:11:26.349355 systemd[1]: Finished sshkeys.service.
Mar 12 05:11:26.353476 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 12 05:11:26.353980 systemd[1]: Startup finished in 16.388s (kernel) + 12.749s (userspace) = 29.137s.
Mar 12 05:11:26.681793 sshd[1810]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:26.686723 systemd[1]: sshd@2-10.230.44.138:22-20.161.92.111:47680.service: Deactivated successfully.
Mar 12 05:11:26.688575 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Mar 12 05:11:26.691434 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 05:11:26.693858 systemd-logind[1604]: Removed session 5.
Mar 12 05:11:30.914699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 12 05:11:30.921755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:11:31.109817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:11:31.117999 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:11:31.228568 kubelet[1840]: E0312 05:11:31.227810 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:11:31.232138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:11:31.232510 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:11:36.886929 systemd[1]: Started sshd@3-10.230.44.138:22-20.161.92.111:34830.service - OpenSSH per-connection server daemon (20.161.92.111:34830).
Mar 12 05:11:37.436559 sshd[1847]: Accepted publickey for core from 20.161.92.111 port 34830 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:37.438234 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:37.445530 systemd-logind[1604]: New session 6 of user core.
Mar 12 05:11:37.454115 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 05:11:37.833897 sshd[1847]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:37.838352 systemd[1]: sshd@3-10.230.44.138:22-20.161.92.111:34830.service: Deactivated successfully.
Mar 12 05:11:37.843209 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Mar 12 05:11:37.844356 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 05:11:37.846185 systemd-logind[1604]: Removed session 6.
Mar 12 05:11:37.930884 systemd[1]: Started sshd@4-10.230.44.138:22-20.161.92.111:34832.service - OpenSSH per-connection server daemon (20.161.92.111:34832).
Mar 12 05:11:38.494559 sshd[1855]: Accepted publickey for core from 20.161.92.111 port 34832 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:38.496003 sshd[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:38.502464 systemd-logind[1604]: New session 7 of user core.
Mar 12 05:11:38.509988 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 05:11:38.885784 sshd[1855]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:38.895599 systemd[1]: sshd@4-10.230.44.138:22-20.161.92.111:34832.service: Deactivated successfully.
Mar 12 05:11:38.899077 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 05:11:38.900861 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit.
Mar 12 05:11:38.902405 systemd-logind[1604]: Removed session 7.
Mar 12 05:11:38.982911 systemd[1]: Started sshd@5-10.230.44.138:22-20.161.92.111:34840.service - OpenSSH per-connection server daemon (20.161.92.111:34840).
Mar 12 05:11:39.530545 sshd[1863]: Accepted publickey for core from 20.161.92.111 port 34840 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:39.532932 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:39.540186 systemd-logind[1604]: New session 8 of user core.
Mar 12 05:11:39.546067 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 05:11:39.927883 sshd[1863]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:39.932225 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit.
Mar 12 05:11:39.933346 systemd[1]: sshd@5-10.230.44.138:22-20.161.92.111:34840.service: Deactivated successfully.
Mar 12 05:11:39.938015 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 05:11:39.939811 systemd-logind[1604]: Removed session 8.
Mar 12 05:11:40.028998 systemd[1]: Started sshd@6-10.230.44.138:22-20.161.92.111:34856.service - OpenSSH per-connection server daemon (20.161.92.111:34856).
Mar 12 05:11:40.580672 sshd[1871]: Accepted publickey for core from 20.161.92.111 port 34856 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:40.582182 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:40.590224 systemd-logind[1604]: New session 9 of user core.
Mar 12 05:11:40.605192 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 05:11:40.903460 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 05:11:40.903985 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:11:40.926059 sudo[1875]: pam_unix(sudo:session): session closed for user root
Mar 12 05:11:41.014882 sshd[1871]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:41.019620 systemd[1]: sshd@6-10.230.44.138:22-20.161.92.111:34856.service: Deactivated successfully.
Mar 12 05:11:41.025250 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit.
Mar 12 05:11:41.026782 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 05:11:41.028801 systemd-logind[1604]: Removed session 9.
Mar 12 05:11:41.113969 systemd[1]: Started sshd@7-10.230.44.138:22-20.161.92.111:47552.service - OpenSSH per-connection server daemon (20.161.92.111:47552).
Mar 12 05:11:41.482776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 12 05:11:41.497346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:11:41.669727 sshd[1880]: Accepted publickey for core from 20.161.92.111 port 47552 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:41.668852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:11:41.670924 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:41.681218 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:11:41.687648 systemd-logind[1604]: New session 10 of user core.
Mar 12 05:11:41.689897 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 05:11:41.767668 kubelet[1894]: E0312 05:11:41.766668 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:11:41.769819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:11:41.770228 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:11:41.979437 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 05:11:41.980544 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:11:41.986316 sudo[1905]: pam_unix(sudo:session): session closed for user root
Mar 12 05:11:41.994455 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 12 05:11:41.995455 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:11:42.014373 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 12 05:11:42.030547 auditctl[1908]: No rules
Mar 12 05:11:42.031659 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 05:11:42.032039 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 12 05:11:42.040405 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 05:11:42.084131 augenrules[1927]: No rules
Mar 12 05:11:42.085920 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 05:11:42.087143 sudo[1904]: pam_unix(sudo:session): session closed for user root
Mar 12 05:11:42.175891 sshd[1880]: pam_unix(sshd:session): session closed for user core
Mar 12 05:11:42.181246 systemd[1]: sshd@7-10.230.44.138:22-20.161.92.111:47552.service: Deactivated successfully.
Mar 12 05:11:42.184869 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit.
Mar 12 05:11:42.185949 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 05:11:42.187361 systemd-logind[1604]: Removed session 10.
Mar 12 05:11:42.274070 systemd[1]: Started sshd@8-10.230.44.138:22-20.161.92.111:47554.service - OpenSSH per-connection server daemon (20.161.92.111:47554).
Mar 12 05:11:42.823565 sshd[1936]: Accepted publickey for core from 20.161.92.111 port 47554 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:11:42.825501 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:11:42.832282 systemd-logind[1604]: New session 11 of user core.
Mar 12 05:11:42.839122 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 05:11:43.133505 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 05:11:43.134034 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:11:43.598290 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 05:11:43.599497 (dockerd)[1956]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 05:11:44.057807 dockerd[1956]: time="2026-03-12T05:11:44.056711135Z" level=info msg="Starting up"
Mar 12 05:11:44.170053 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4244738711-merged.mount: Deactivated successfully.
Mar 12 05:11:44.348174 dockerd[1956]: time="2026-03-12T05:11:44.347484381Z" level=info msg="Loading containers: start."
Mar 12 05:11:44.486660 kernel: Initializing XFRM netlink socket
Mar 12 05:11:44.600284 systemd-networkd[1260]: docker0: Link UP
Mar 12 05:11:44.623543 dockerd[1956]: time="2026-03-12T05:11:44.623452114Z" level=info msg="Loading containers: done."
Mar 12 05:11:44.645251 dockerd[1956]: time="2026-03-12T05:11:44.645189475Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 05:11:44.645449 dockerd[1956]: time="2026-03-12T05:11:44.645371750Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 12 05:11:44.645666 dockerd[1956]: time="2026-03-12T05:11:44.645630524Z" level=info msg="Daemon has completed initialization"
Mar 12 05:11:44.683018 dockerd[1956]: time="2026-03-12T05:11:44.682837310Z" level=info msg="API listen on /run/docker.sock"
Mar 12 05:11:44.683255 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 05:11:45.329439 containerd[1633]: time="2026-03-12T05:11:45.328525721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 12 05:11:46.078566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505545188.mount: Deactivated successfully.
Mar 12 05:11:47.976050 containerd[1633]: time="2026-03-12T05:11:47.975902728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:47.979918 containerd[1633]: time="2026-03-12T05:11:47.977873993Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 12 05:11:47.979918 containerd[1633]: time="2026-03-12T05:11:47.979022430Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:47.982972 containerd[1633]: time="2026-03-12T05:11:47.982925359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:47.985259 containerd[1633]: time="2026-03-12T05:11:47.985224625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.656507138s"
Mar 12 05:11:47.986010 containerd[1633]: time="2026-03-12T05:11:47.985980404Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 12 05:11:47.987928 containerd[1633]: time="2026-03-12T05:11:47.987883853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 12 05:11:49.082054 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 12 05:11:49.955684 containerd[1633]: time="2026-03-12T05:11:49.955605568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:49.957296 containerd[1633]: time="2026-03-12T05:11:49.957042540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 12 05:11:49.959526 containerd[1633]: time="2026-03-12T05:11:49.958749916Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:49.962375 containerd[1633]: time="2026-03-12T05:11:49.962335947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:49.964064 containerd[1633]: time="2026-03-12T05:11:49.964029497Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.976093267s"
Mar 12 05:11:49.964199 containerd[1633]: time="2026-03-12T05:11:49.964171986Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 12 05:11:49.965089 containerd[1633]: time="2026-03-12T05:11:49.965061821Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 12 05:11:51.746608 containerd[1633]: time="2026-03-12T05:11:51.744696277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:51.748020 containerd[1633]: time="2026-03-12T05:11:51.747229115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 12 05:11:51.748020 containerd[1633]: time="2026-03-12T05:11:51.747335353Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:51.750778 containerd[1633]: time="2026-03-12T05:11:51.750745527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:51.752392 containerd[1633]: time="2026-03-12T05:11:51.752353999Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.786782463s"
Mar 12 05:11:51.752477 containerd[1633]: time="2026-03-12T05:11:51.752406870Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 12 05:11:51.753622 containerd[1633]: time="2026-03-12T05:11:51.753494678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 12 05:11:52.021190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 05:11:52.032961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:11:52.245750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:11:52.255159 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:11:52.326075 kubelet[2181]: E0312 05:11:52.325853 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:11:52.329028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:11:52.329401 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:11:53.552622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957186620.mount: Deactivated successfully.
Mar 12 05:11:54.270258 containerd[1633]: time="2026-03-12T05:11:54.270141790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:54.271798 containerd[1633]: time="2026-03-12T05:11:54.271524735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 12 05:11:54.273042 containerd[1633]: time="2026-03-12T05:11:54.272729754Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:54.276095 containerd[1633]: time="2026-03-12T05:11:54.276056819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:54.277256 containerd[1633]: time="2026-03-12T05:11:54.277204253Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.523644282s"
Mar 12 05:11:54.277336 containerd[1633]: time="2026-03-12T05:11:54.277284059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 12 05:11:54.279437 containerd[1633]: time="2026-03-12T05:11:54.279218082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 12 05:11:54.843545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131264758.mount: Deactivated successfully.
Mar 12 05:11:56.423321 containerd[1633]: time="2026-03-12T05:11:56.423133552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:56.426101 containerd[1633]: time="2026-03-12T05:11:56.425987836Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 12 05:11:56.428537 containerd[1633]: time="2026-03-12T05:11:56.427134671Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:56.432889 containerd[1633]: time="2026-03-12T05:11:56.432830684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:56.434739 containerd[1633]: time="2026-03-12T05:11:56.434702052Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.155443018s"
Mar 12 05:11:56.434887 containerd[1633]: time="2026-03-12T05:11:56.434860789Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 12 05:11:56.436967 containerd[1633]: time="2026-03-12T05:11:56.436934242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 12 05:11:56.973467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985768961.mount: Deactivated successfully.
Mar 12 05:11:56.987482 containerd[1633]: time="2026-03-12T05:11:56.987429929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:56.988522 containerd[1633]: time="2026-03-12T05:11:56.988464110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 12 05:11:56.989528 containerd[1633]: time="2026-03-12T05:11:56.989324829Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:56.996903 containerd[1633]: time="2026-03-12T05:11:56.996859765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:57.000566 containerd[1633]: time="2026-03-12T05:11:57.000389359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 563.4131ms"
Mar 12 05:11:57.000566 containerd[1633]: time="2026-03-12T05:11:57.000426924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 12 05:11:57.001522 containerd[1633]: time="2026-03-12T05:11:57.001257593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 12 05:11:57.538446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488053157.mount: Deactivated successfully.
Mar 12 05:11:59.071913 containerd[1633]: time="2026-03-12T05:11:59.071800617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:59.073594 containerd[1633]: time="2026-03-12T05:11:59.072566334Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 12 05:11:59.075113 containerd[1633]: time="2026-03-12T05:11:59.075081029Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:59.080345 containerd[1633]: time="2026-03-12T05:11:59.080275067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:11:59.083175 containerd[1633]: time="2026-03-12T05:11:59.081742888Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.080442412s"
Mar 12 05:11:59.083175 containerd[1633]: time="2026-03-12T05:11:59.081801012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 12 05:12:02.416324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 05:12:02.425777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:12:02.632776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:12:02.643255 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:12:02.716532 kubelet[2348]: E0312 05:12:02.713879 2348 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:12:02.718799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:12:02.719118 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:12:03.339434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:12:03.349218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:12:03.392079 systemd[1]: Reloading requested from client PID 2365 ('systemctl') (unit session-11.scope)...
Mar 12 05:12:03.392120 systemd[1]: Reloading...
Mar 12 05:12:03.588692 zram_generator::config[2400]: No configuration found.
Mar 12 05:12:03.773889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 05:12:03.877741 systemd[1]: Reloading finished in 484 ms.
Mar 12 05:12:03.935819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 05:12:03.935969 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 05:12:03.936406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:12:03.945093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:12:04.216730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:12:04.232462 update_engine[1610]: I20260312 05:12:04.232319 1610 update_attempter.cc:509] Updating boot flags...
Mar 12 05:12:04.235248 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 05:12:04.322570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2494)
Mar 12 05:12:04.355821 kubelet[2481]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 05:12:04.355821 kubelet[2481]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 05:12:04.355821 kubelet[2481]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 05:12:04.357167 kubelet[2481]: I0312 05:12:04.357111 2481 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 05:12:04.395542 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2496)
Mar 12 05:12:05.111617 kubelet[2481]: I0312 05:12:05.111389 2481 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 12 05:12:05.111617 kubelet[2481]: I0312 05:12:05.111439 2481 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 05:12:05.111875 kubelet[2481]: I0312 05:12:05.111732 2481 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 05:12:05.150880 kubelet[2481]: E0312 05:12:05.150811 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.44.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 05:12:05.153396 kubelet[2481]: I0312 05:12:05.152887 2481 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 05:12:05.167173 kubelet[2481]: E0312 05:12:05.167089 2481 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 05:12:05.167173 kubelet[2481]: I0312 05:12:05.167154 2481 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 12 05:12:05.179776 kubelet[2481]: I0312 05:12:05.178863 2481 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 12 05:12:05.183367 kubelet[2481]: I0312 05:12:05.182997 2481 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 05:12:05.186722 kubelet[2481]: I0312 05:12:05.183079 2481 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ro1yv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 12 05:12:05.186722 kubelet[2481]: I0312 05:12:05.186112 2481 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 05:12:05.186722 kubelet[2481]: I0312 05:12:05.186130 2481 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 05:12:05.186722 kubelet[2481]: I0312 05:12:05.186398 2481 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 05:12:05.192407 kubelet[2481]: I0312 05:12:05.192145 2481 kubelet.go:480] "Attempting to sync node with API server"
Mar 12 05:12:05.192407 kubelet[2481]: I0312 05:12:05.192185 2481 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 05:12:05.192407 kubelet[2481]: I0312 05:12:05.192292 2481 kubelet.go:386] "Adding apiserver pod source"
Mar 12 05:12:05.194397 kubelet[2481]: I0312 05:12:05.193882 2481 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 05:12:05.198874 kubelet[2481]: E0312 05:12:05.198828 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.44.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ro1yv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 05:12:05.200155 kubelet[2481]: E0312 05:12:05.199566 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.44.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 05:12:05.200155 kubelet[2481]: I0312 05:12:05.199901 2481 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 05:12:05.202526 kubelet[2481]: I0312 05:12:05.200605 2481 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar
12 05:12:05.202526 kubelet[2481]: W0312 05:12:05.201483 2481 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 05:12:05.208964 kubelet[2481]: I0312 05:12:05.208928 2481 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 05:12:05.209082 kubelet[2481]: I0312 05:12:05.208996 2481 server.go:1289] "Started kubelet" Mar 12 05:12:05.211867 kubelet[2481]: I0312 05:12:05.211377 2481 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 05:12:05.212385 kubelet[2481]: I0312 05:12:05.212314 2481 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 05:12:05.212982 kubelet[2481]: I0312 05:12:05.212957 2481 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 05:12:05.213218 kubelet[2481]: I0312 05:12:05.213185 2481 server.go:317] "Adding debug handlers to kubelet server" Mar 12 05:12:05.216110 kubelet[2481]: I0312 05:12:05.216087 2481 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 05:12:05.224397 kubelet[2481]: E0312 05:12:05.221916 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.44.138:6443/api/v1/namespaces/default/events\": dial tcp 10.230.44.138:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ro1yv.gb1.brightbox.com.189bfff97701a2f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ro1yv.gb1.brightbox.com,UID:srv-ro1yv.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ro1yv.gb1.brightbox.com,},FirstTimestamp:2026-03-12 05:12:05.20895768 +0000 UTC m=+0.961643934,LastTimestamp:2026-03-12 05:12:05.20895768 +0000 UTC m=+0.961643934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ro1yv.gb1.brightbox.com,}" Mar 12 05:12:05.225886 kubelet[2481]: I0312 05:12:05.224925 2481 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 05:12:05.232195 kubelet[2481]: I0312 05:12:05.232143 2481 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 05:12:05.233328 kubelet[2481]: E0312 05:12:05.233295 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" Mar 12 05:12:05.234687 kubelet[2481]: I0312 05:12:05.234644 2481 factory.go:223] Registration of the systemd container factory successfully Mar 12 05:12:05.234822 kubelet[2481]: I0312 05:12:05.234803 2481 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 05:12:05.235030 kubelet[2481]: I0312 05:12:05.234801 2481 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 05:12:05.235637 kubelet[2481]: I0312 05:12:05.235602 2481 reconciler.go:26] "Reconciler: start to sync state" Mar 12 05:12:05.236785 kubelet[2481]: E0312 05:12:05.236749 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ro1yv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.138:6443: connect: connection refused" interval="200ms" Mar 12 05:12:05.236994 kubelet[2481]: E0312 05:12:05.236893 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.44.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 05:12:05.238413 kubelet[2481]: E0312 05:12:05.237394 2481 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 05:12:05.238624 kubelet[2481]: I0312 05:12:05.238600 2481 factory.go:223] Registration of the containerd container factory successfully Mar 12 05:12:05.265606 kubelet[2481]: I0312 05:12:05.265544 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 05:12:05.270162 kubelet[2481]: I0312 05:12:05.270132 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 05:12:05.270293 kubelet[2481]: I0312 05:12:05.270258 2481 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 05:12:05.270353 kubelet[2481]: I0312 05:12:05.270305 2481 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 05:12:05.270353 kubelet[2481]: I0312 05:12:05.270327 2481 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 05:12:05.270458 kubelet[2481]: E0312 05:12:05.270398 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 05:12:05.281988 kubelet[2481]: E0312 05:12:05.281946 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.44.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 05:12:05.296111 kubelet[2481]: I0312 05:12:05.296077 2481 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 05:12:05.296358 kubelet[2481]: I0312 05:12:05.296328 2481 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 05:12:05.296540 kubelet[2481]: I0312 05:12:05.296521 2481 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:12:05.298347 kubelet[2481]: I0312 05:12:05.298327 2481 policy_none.go:49] "None policy: Start" Mar 12 05:12:05.298453 kubelet[2481]: I0312 05:12:05.298435 2481 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 05:12:05.298582 kubelet[2481]: I0312 05:12:05.298566 2481 state_mem.go:35] "Initializing new in-memory state store" Mar 12 05:12:05.305554 kubelet[2481]: E0312 05:12:05.305158 2481 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 05:12:05.305554 kubelet[2481]: I0312 05:12:05.305441 2481 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 05:12:05.305554 kubelet[2481]: I0312 05:12:05.305469 2481 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 05:12:05.308078 kubelet[2481]: I0312 
05:12:05.308055 2481 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 05:12:05.314370 kubelet[2481]: E0312 05:12:05.314335 2481 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 05:12:05.314671 kubelet[2481]: E0312 05:12:05.314600 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ro1yv.gb1.brightbox.com\" not found" Mar 12 05:12:05.384159 kubelet[2481]: E0312 05:12:05.382065 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.390786 kubelet[2481]: E0312 05:12:05.390702 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.397191 kubelet[2481]: E0312 05:12:05.397113 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.409074 kubelet[2481]: I0312 05:12:05.408674 2481 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.409334 kubelet[2481]: E0312 05:12:05.409246 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.138:6443/api/v1/nodes\": dial tcp 10.230.44.138:6443: connect: connection refused" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.437477 kubelet[2481]: E0312 05:12:05.437411 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ro1yv.gb1.brightbox.com?timeout=10s\": dial tcp 
10.230.44.138:6443: connect: connection refused" interval="400ms" Mar 12 05:12:05.438117 kubelet[2481]: I0312 05:12:05.437801 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-flexvolume-dir\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438117 kubelet[2481]: I0312 05:12:05.437841 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438117 kubelet[2481]: I0312 05:12:05.437895 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-k8s-certs\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" (UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438117 kubelet[2481]: I0312 05:12:05.437926 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" (UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438117 kubelet[2481]: I0312 05:12:05.437950 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-ca-certs\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438411 kubelet[2481]: I0312 05:12:05.437977 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-k8s-certs\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438411 kubelet[2481]: I0312 05:12:05.438003 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-kubeconfig\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438411 kubelet[2481]: I0312 05:12:05.438031 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392b438c7c852243f21ab621d7f9968e-kubeconfig\") pod \"kube-scheduler-srv-ro1yv.gb1.brightbox.com\" (UID: \"392b438c7c852243f21ab621d7f9968e\") " pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.438411 kubelet[2481]: I0312 05:12:05.438081 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-ca-certs\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" (UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 
05:12:05.612143 kubelet[2481]: I0312 05:12:05.612104 2481 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.612604 kubelet[2481]: E0312 05:12:05.612564 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.138:6443/api/v1/nodes\": dial tcp 10.230.44.138:6443: connect: connection refused" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:05.688094 containerd[1633]: time="2026-03-12T05:12:05.687996896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ro1yv.gb1.brightbox.com,Uid:392b438c7c852243f21ab621d7f9968e,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:05.696161 containerd[1633]: time="2026-03-12T05:12:05.696114196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ro1yv.gb1.brightbox.com,Uid:9caf2c90a2be8ecda43eb2328237dbf9,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:05.698929 containerd[1633]: time="2026-03-12T05:12:05.698584813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ro1yv.gb1.brightbox.com,Uid:1b1d89e99ff416f91f0f724e41e1a7f7,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:05.838858 kubelet[2481]: E0312 05:12:05.838803 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ro1yv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.138:6443: connect: connection refused" interval="800ms" Mar 12 05:12:06.016573 kubelet[2481]: I0312 05:12:06.016047 2481 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:06.017786 kubelet[2481]: E0312 05:12:06.017745 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.138:6443/api/v1/nodes\": dial tcp 10.230.44.138:6443: connect: connection refused" node="srv-ro1yv.gb1.brightbox.com" Mar 12 
05:12:06.112596 kubelet[2481]: E0312 05:12:06.112231 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.44.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 05:12:06.121068 kubelet[2481]: E0312 05:12:06.121018 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.44.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 05:12:06.242125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567451064.mount: Deactivated successfully. Mar 12 05:12:06.249869 containerd[1633]: time="2026-03-12T05:12:06.249731850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:12:06.251016 containerd[1633]: time="2026-03-12T05:12:06.250965298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:12:06.252986 containerd[1633]: time="2026-03-12T05:12:06.252931983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 05:12:06.253877 containerd[1633]: time="2026-03-12T05:12:06.253819042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 12 05:12:06.256536 containerd[1633]: time="2026-03-12T05:12:06.255741587Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:12:06.257237 containerd[1633]: time="2026-03-12T05:12:06.257139104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 05:12:06.261277 containerd[1633]: time="2026-03-12T05:12:06.259883033Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:12:06.262482 containerd[1633]: time="2026-03-12T05:12:06.262444298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:12:06.263982 containerd[1633]: time="2026-03-12T05:12:06.263944649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 575.665103ms" Mar 12 05:12:06.267444 containerd[1633]: time="2026-03-12T05:12:06.267335181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.138404ms" Mar 12 05:12:06.269184 containerd[1633]: time="2026-03-12T05:12:06.269151606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.497843ms" Mar 12 05:12:06.539641 containerd[1633]: time="2026-03-12T05:12:06.538754388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:12:06.539641 containerd[1633]: time="2026-03-12T05:12:06.538861574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:12:06.540489 containerd[1633]: time="2026-03-12T05:12:06.539968364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.540489 containerd[1633]: time="2026-03-12T05:12:06.540154712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.546953 containerd[1633]: time="2026-03-12T05:12:06.546597532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:12:06.546953 containerd[1633]: time="2026-03-12T05:12:06.546683317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:12:06.546953 containerd[1633]: time="2026-03-12T05:12:06.546706372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.546953 containerd[1633]: time="2026-03-12T05:12:06.546832124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.552286 containerd[1633]: time="2026-03-12T05:12:06.552000832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:12:06.552286 containerd[1633]: time="2026-03-12T05:12:06.552241417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:12:06.552430 containerd[1633]: time="2026-03-12T05:12:06.552298807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.552654 containerd[1633]: time="2026-03-12T05:12:06.552606334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:06.553052 kubelet[2481]: E0312 05:12:06.552991 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.44.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 05:12:06.642738 kubelet[2481]: E0312 05:12:06.641393 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ro1yv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.138:6443: connect: connection refused" interval="1.6s" Mar 12 05:12:06.688558 containerd[1633]: time="2026-03-12T05:12:06.688199093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ro1yv.gb1.brightbox.com,Uid:9caf2c90a2be8ecda43eb2328237dbf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"04688fc6beebb43efaa4f40a398911b98108d2340f8851d430a9bdfb868695a4\"" Mar 12 05:12:06.692150 containerd[1633]: time="2026-03-12T05:12:06.691428587Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-ro1yv.gb1.brightbox.com,Uid:1b1d89e99ff416f91f0f724e41e1a7f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8efed44a24e7d6035bd25f3a84beaa75b85a6f33f9bca6381580b60560595ca4\"" Mar 12 05:12:06.704914 containerd[1633]: time="2026-03-12T05:12:06.704832347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ro1yv.gb1.brightbox.com,Uid:392b438c7c852243f21ab621d7f9968e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d062f1b452732dcc75e1ead3d4cac7ffe50755fd24b5f05ecfde6d7d22a7e6c\"" Mar 12 05:12:06.709229 containerd[1633]: time="2026-03-12T05:12:06.709016393Z" level=info msg="CreateContainer within sandbox \"04688fc6beebb43efaa4f40a398911b98108d2340f8851d430a9bdfb868695a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 05:12:06.712084 containerd[1633]: time="2026-03-12T05:12:06.711958980Z" level=info msg="CreateContainer within sandbox \"8efed44a24e7d6035bd25f3a84beaa75b85a6f33f9bca6381580b60560595ca4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 05:12:06.714531 containerd[1633]: time="2026-03-12T05:12:06.714417862Z" level=info msg="CreateContainer within sandbox \"0d062f1b452732dcc75e1ead3d4cac7ffe50755fd24b5f05ecfde6d7d22a7e6c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 05:12:06.729939 containerd[1633]: time="2026-03-12T05:12:06.729865562Z" level=info msg="CreateContainer within sandbox \"8efed44a24e7d6035bd25f3a84beaa75b85a6f33f9bca6381580b60560595ca4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4408c3c257471a9ea262e878a9cadcd09c69805cbd39eb673061e445d1b25f0f\"" Mar 12 05:12:06.730986 containerd[1633]: time="2026-03-12T05:12:06.730736823Z" level=info msg="StartContainer for \"4408c3c257471a9ea262e878a9cadcd09c69805cbd39eb673061e445d1b25f0f\"" Mar 12 05:12:06.737592 containerd[1633]: time="2026-03-12T05:12:06.737383234Z" level=info 
msg="CreateContainer within sandbox \"04688fc6beebb43efaa4f40a398911b98108d2340f8851d430a9bdfb868695a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edab10279a43004c30135bbd8b4751b40f0de2a48baac7b355765f01fd763e75\"" Mar 12 05:12:06.739548 containerd[1633]: time="2026-03-12T05:12:06.738231625Z" level=info msg="StartContainer for \"edab10279a43004c30135bbd8b4751b40f0de2a48baac7b355765f01fd763e75\"" Mar 12 05:12:06.742285 containerd[1633]: time="2026-03-12T05:12:06.742250517Z" level=info msg="CreateContainer within sandbox \"0d062f1b452732dcc75e1ead3d4cac7ffe50755fd24b5f05ecfde6d7d22a7e6c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d2258a8e3fbaf8aa24c88a203bc1635b5b5687b211bb2b38aabe294dd6494db\"" Mar 12 05:12:06.743432 containerd[1633]: time="2026-03-12T05:12:06.743403857Z" level=info msg="StartContainer for \"0d2258a8e3fbaf8aa24c88a203bc1635b5b5687b211bb2b38aabe294dd6494db\"" Mar 12 05:12:06.782403 kubelet[2481]: E0312 05:12:06.782355 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.44.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ro1yv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 05:12:06.825598 kubelet[2481]: I0312 05:12:06.824811 2481 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:06.828609 kubelet[2481]: E0312 05:12:06.828431 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.138:6443/api/v1/nodes\": dial tcp 10.230.44.138:6443: connect: connection refused" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:06.887301 containerd[1633]: time="2026-03-12T05:12:06.885064428Z" level=info msg="StartContainer for 
\"4408c3c257471a9ea262e878a9cadcd09c69805cbd39eb673061e445d1b25f0f\" returns successfully" Mar 12 05:12:06.897733 containerd[1633]: time="2026-03-12T05:12:06.896250880Z" level=info msg="StartContainer for \"edab10279a43004c30135bbd8b4751b40f0de2a48baac7b355765f01fd763e75\" returns successfully" Mar 12 05:12:06.914326 containerd[1633]: time="2026-03-12T05:12:06.914278346Z" level=info msg="StartContainer for \"0d2258a8e3fbaf8aa24c88a203bc1635b5b5687b211bb2b38aabe294dd6494db\" returns successfully" Mar 12 05:12:07.274273 kubelet[2481]: E0312 05:12:07.274192 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.44.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.44.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 05:12:07.306214 kubelet[2481]: E0312 05:12:07.304482 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:07.310622 kubelet[2481]: E0312 05:12:07.309922 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:07.327922 kubelet[2481]: E0312 05:12:07.327665 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:08.324454 kubelet[2481]: E0312 05:12:08.324409 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:08.325125 kubelet[2481]: E0312 05:12:08.324917 2481 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:08.433087 kubelet[2481]: I0312 05:12:08.433048 2481 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:09.325732 kubelet[2481]: E0312 05:12:09.325639 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.377912 kubelet[2481]: E0312 05:12:10.377826 2481 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ro1yv.gb1.brightbox.com\" not found" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.455960 kubelet[2481]: I0312 05:12:10.455884 2481 kubelet_node_status.go:78] "Successfully registered node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.456168 kubelet[2481]: E0312 05:12:10.455970 2481 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-ro1yv.gb1.brightbox.com\": node \"srv-ro1yv.gb1.brightbox.com\" not found" Mar 12 05:12:10.535285 kubelet[2481]: I0312 05:12:10.535116 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.547279 kubelet[2481]: E0312 05:12:10.545771 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.547279 kubelet[2481]: I0312 05:12:10.545822 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.547610 kubelet[2481]: E0312 05:12:10.547572 2481 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ro1yv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.547610 kubelet[2481]: I0312 05:12:10.547606 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:10.549682 kubelet[2481]: E0312 05:12:10.549649 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:11.203213 kubelet[2481]: I0312 05:12:11.203102 2481 apiserver.go:52] "Watching apiserver" Mar 12 05:12:11.235920 kubelet[2481]: I0312 05:12:11.235819 2481 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 05:12:12.186074 kubelet[2481]: I0312 05:12:12.185995 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:12.195490 kubelet[2481]: I0312 05:12:12.195330 2481 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:12.717964 systemd[1]: Reloading requested from client PID 2777 ('systemctl') (unit session-11.scope)... Mar 12 05:12:12.718014 systemd[1]: Reloading... Mar 12 05:12:12.834566 zram_generator::config[2816]: No configuration found. Mar 12 05:12:13.031188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 05:12:13.153108 systemd[1]: Reloading finished in 434 ms. 
Mar 12 05:12:13.205932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 05:12:13.220931 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 05:12:13.221612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 05:12:13.229278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 05:12:13.445736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 05:12:13.460228 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 05:12:13.559747 kubelet[2890]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 05:12:13.561425 kubelet[2890]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 05:12:13.561425 kubelet[2890]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 05:12:13.561425 kubelet[2890]: I0312 05:12:13.560384 2890 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 05:12:13.569366 kubelet[2890]: I0312 05:12:13.569330 2890 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 05:12:13.569557 kubelet[2890]: I0312 05:12:13.569539 2890 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 05:12:13.569927 kubelet[2890]: I0312 05:12:13.569896 2890 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 05:12:13.571761 kubelet[2890]: I0312 05:12:13.571739 2890 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 05:12:13.576481 kubelet[2890]: I0312 05:12:13.576392 2890 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 05:12:13.589351 kubelet[2890]: E0312 05:12:13.589305 2890 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 05:12:13.589631 kubelet[2890]: I0312 05:12:13.589613 2890 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 05:12:13.595341 kubelet[2890]: I0312 05:12:13.595302 2890 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 05:12:13.596330 kubelet[2890]: I0312 05:12:13.596285 2890 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 05:12:13.596712 kubelet[2890]: I0312 05:12:13.596444 2890 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ro1yv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 12 05:12:13.597286 kubelet[2890]: I0312 05:12:13.597036 2890 topology_manager.go:138] "Creating topology manager with none policy" Mar 
12 05:12:13.597286 kubelet[2890]: I0312 05:12:13.597109 2890 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 05:12:13.597286 kubelet[2890]: I0312 05:12:13.597222 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:12:13.597718 kubelet[2890]: I0312 05:12:13.597699 2890 kubelet.go:480] "Attempting to sync node with API server" Mar 12 05:12:13.597994 kubelet[2890]: I0312 05:12:13.597831 2890 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 05:12:13.597994 kubelet[2890]: I0312 05:12:13.597887 2890 kubelet.go:386] "Adding apiserver pod source" Mar 12 05:12:13.597994 kubelet[2890]: I0312 05:12:13.597919 2890 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 05:12:13.604806 kubelet[2890]: I0312 05:12:13.604766 2890 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 05:12:13.606055 kubelet[2890]: I0312 05:12:13.605703 2890 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 05:12:13.615044 kubelet[2890]: I0312 05:12:13.614714 2890 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 05:12:13.615044 kubelet[2890]: I0312 05:12:13.614783 2890 server.go:1289] "Started kubelet" Mar 12 05:12:13.619408 kubelet[2890]: I0312 05:12:13.619384 2890 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 05:12:13.636632 kubelet[2890]: I0312 05:12:13.636484 2890 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 05:12:13.639163 kubelet[2890]: I0312 05:12:13.639141 2890 server.go:317] "Adding debug handlers to kubelet server" Mar 12 05:12:13.644492 kubelet[2890]: I0312 05:12:13.643981 2890 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 05:12:13.644492 kubelet[2890]: I0312 05:12:13.644322 2890 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 05:12:13.645036 kubelet[2890]: I0312 05:12:13.644992 2890 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 05:12:13.649018 kubelet[2890]: I0312 05:12:13.648998 2890 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 05:12:13.654328 kubelet[2890]: I0312 05:12:13.654297 2890 factory.go:223] Registration of the systemd container factory successfully Mar 12 05:12:13.654935 kubelet[2890]: I0312 05:12:13.654907 2890 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 05:12:13.659170 kubelet[2890]: I0312 05:12:13.659144 2890 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 05:12:13.661012 kubelet[2890]: I0312 05:12:13.660754 2890 reconciler.go:26] "Reconciler: start to sync state" Mar 12 05:12:13.661577 kubelet[2890]: E0312 05:12:13.661488 2890 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 05:12:13.669264 kubelet[2890]: I0312 05:12:13.669238 2890 factory.go:223] Registration of the containerd container factory successfully Mar 12 05:12:13.673613 kubelet[2890]: I0312 05:12:13.673546 2890 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 05:12:13.678465 kubelet[2890]: I0312 05:12:13.676859 2890 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 12 05:12:13.678465 kubelet[2890]: I0312 05:12:13.676900 2890 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 05:12:13.678465 kubelet[2890]: I0312 05:12:13.676939 2890 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 05:12:13.678465 kubelet[2890]: I0312 05:12:13.676964 2890 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 05:12:13.678465 kubelet[2890]: E0312 05:12:13.677028 2890 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 05:12:13.768549 kubelet[2890]: I0312 05:12:13.768381 2890 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 05:12:13.768549 kubelet[2890]: I0312 05:12:13.768412 2890 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 05:12:13.768549 kubelet[2890]: I0312 05:12:13.768448 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:12:13.768790 kubelet[2890]: I0312 05:12:13.768749 2890 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 05:12:13.768790 kubelet[2890]: I0312 05:12:13.768769 2890 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 05:12:13.768892 kubelet[2890]: I0312 05:12:13.768803 2890 policy_none.go:49] "None policy: Start" Mar 12 05:12:13.768892 kubelet[2890]: I0312 05:12:13.768834 2890 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 05:12:13.768892 kubelet[2890]: I0312 05:12:13.768860 2890 state_mem.go:35] "Initializing new in-memory state store" Mar 12 05:12:13.769082 kubelet[2890]: I0312 05:12:13.769065 2890 state_mem.go:75] "Updated machine memory state" Mar 12 05:12:13.775548 kubelet[2890]: E0312 05:12:13.772331 2890 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 05:12:13.775548 kubelet[2890]: I0312 
05:12:13.773580 2890 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 05:12:13.775548 kubelet[2890]: I0312 05:12:13.773621 2890 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 05:12:13.779697 kubelet[2890]: I0312 05:12:13.779669 2890 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 05:12:13.782089 kubelet[2890]: I0312 05:12:13.782055 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.784017 kubelet[2890]: I0312 05:12:13.783993 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.784572 kubelet[2890]: I0312 05:12:13.784552 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.790821 kubelet[2890]: E0312 05:12:13.790787 2890 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 05:12:13.803602 kubelet[2890]: I0312 05:12:13.803561 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:13.819730 kubelet[2890]: I0312 05:12:13.819677 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:13.820199 kubelet[2890]: I0312 05:12:13.820179 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:13.820427 kubelet[2890]: E0312 05:12:13.820400 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.907645 kubelet[2890]: I0312 05:12:13.907576 2890 kubelet_node_status.go:75] "Attempting to register node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.920603 kubelet[2890]: I0312 05:12:13.919472 2890 kubelet_node_status.go:124] "Node was previously registered" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.922053 kubelet[2890]: I0312 05:12:13.920662 2890 kubelet_node_status.go:78] "Successfully registered node" node="srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.962787 kubelet[2890]: I0312 05:12:13.962742 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-ca-certs\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963264 kubelet[2890]: I0312 05:12:13.963070 2890 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-k8s-certs\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963264 kubelet[2890]: I0312 05:12:13.963134 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-kubeconfig\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963264 kubelet[2890]: I0312 05:12:13.963168 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963264 kubelet[2890]: I0312 05:12:13.963216 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-ca-certs\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" (UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963698 kubelet[2890]: I0312 05:12:13.963244 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-k8s-certs\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" 
(UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963698 kubelet[2890]: I0312 05:12:13.963553 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9caf2c90a2be8ecda43eb2328237dbf9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" (UID: \"9caf2c90a2be8ecda43eb2328237dbf9\") " pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963698 kubelet[2890]: I0312 05:12:13.963604 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b1d89e99ff416f91f0f724e41e1a7f7-flexvolume-dir\") pod \"kube-controller-manager-srv-ro1yv.gb1.brightbox.com\" (UID: \"1b1d89e99ff416f91f0f724e41e1a7f7\") " pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:13.963698 kubelet[2890]: I0312 05:12:13.963633 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392b438c7c852243f21ab621d7f9968e-kubeconfig\") pod \"kube-scheduler-srv-ro1yv.gb1.brightbox.com\" (UID: \"392b438c7c852243f21ab621d7f9968e\") " pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:14.600157 kubelet[2890]: I0312 05:12:14.600108 2890 apiserver.go:52] "Watching apiserver" Mar 12 05:12:14.661267 kubelet[2890]: I0312 05:12:14.661180 2890 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 05:12:14.724123 kubelet[2890]: I0312 05:12:14.723095 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:14.724946 kubelet[2890]: I0312 05:12:14.724925 2890 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:14.734708 kubelet[2890]: I0312 05:12:14.734617 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:14.735239 kubelet[2890]: E0312 05:12:14.734884 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ro1yv.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:14.752213 kubelet[2890]: I0312 05:12:14.752169 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:12:14.753691 kubelet[2890]: E0312 05:12:14.752903 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ro1yv.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" Mar 12 05:12:14.772091 kubelet[2890]: I0312 05:12:14.771988 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ro1yv.gb1.brightbox.com" podStartSLOduration=1.7719323839999999 podStartE2EDuration="1.771932384s" podCreationTimestamp="2026-03-12 05:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:12:14.771709726 +0000 UTC m=+1.281432955" watchObservedRunningTime="2026-03-12 05:12:14.771932384 +0000 UTC m=+1.281655597" Mar 12 05:12:14.795375 kubelet[2890]: I0312 05:12:14.795152 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ro1yv.gb1.brightbox.com" podStartSLOduration=2.79513022 podStartE2EDuration="2.79513022s" podCreationTimestamp="2026-03-12 05:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:12:14.78175228 +0000 UTC m=+1.291475520" watchObservedRunningTime="2026-03-12 05:12:14.79513022 +0000 UTC m=+1.304853437" Mar 12 05:12:14.807932 kubelet[2890]: I0312 05:12:14.807749 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ro1yv.gb1.brightbox.com" podStartSLOduration=1.8077319219999999 podStartE2EDuration="1.807731922s" podCreationTimestamp="2026-03-12 05:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:12:14.79585039 +0000 UTC m=+1.305573624" watchObservedRunningTime="2026-03-12 05:12:14.807731922 +0000 UTC m=+1.317455142" Mar 12 05:12:16.215020 systemd[1]: Started sshd@9-10.230.44.138:22-187.85.155.182:46646.service - OpenSSH per-connection server daemon (187.85.155.182:46646). Mar 12 05:12:17.354185 sshd[2944]: Invalid user mob from 187.85.155.182 port 46646 Mar 12 05:12:17.564975 sshd[2944]: Received disconnect from 187.85.155.182 port 46646:11: Bye Bye [preauth] Mar 12 05:12:17.564975 sshd[2944]: Disconnected from invalid user mob 187.85.155.182 port 46646 [preauth] Mar 12 05:12:17.569493 systemd[1]: sshd@9-10.230.44.138:22-187.85.155.182:46646.service: Deactivated successfully. Mar 12 05:12:19.405131 kubelet[2890]: I0312 05:12:19.404877 2890 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 05:12:19.406663 kubelet[2890]: I0312 05:12:19.405880 2890 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 05:12:19.406740 containerd[1633]: time="2026-03-12T05:12:19.405543110Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 12 05:12:20.506910 kubelet[2890]: I0312 05:12:20.506694 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad4f2a04-a640-4886-b780-42b6210b0c85-kube-proxy\") pod \"kube-proxy-8mrmt\" (UID: \"ad4f2a04-a640-4886-b780-42b6210b0c85\") " pod="kube-system/kube-proxy-8mrmt" Mar 12 05:12:20.506910 kubelet[2890]: I0312 05:12:20.506751 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncppk\" (UniqueName: \"kubernetes.io/projected/ad4f2a04-a640-4886-b780-42b6210b0c85-kube-api-access-ncppk\") pod \"kube-proxy-8mrmt\" (UID: \"ad4f2a04-a640-4886-b780-42b6210b0c85\") " pod="kube-system/kube-proxy-8mrmt" Mar 12 05:12:20.506910 kubelet[2890]: I0312 05:12:20.506785 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad4f2a04-a640-4886-b780-42b6210b0c85-xtables-lock\") pod \"kube-proxy-8mrmt\" (UID: \"ad4f2a04-a640-4886-b780-42b6210b0c85\") " pod="kube-system/kube-proxy-8mrmt" Mar 12 05:12:20.506910 kubelet[2890]: I0312 05:12:20.506814 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad4f2a04-a640-4886-b780-42b6210b0c85-lib-modules\") pod \"kube-proxy-8mrmt\" (UID: \"ad4f2a04-a640-4886-b780-42b6210b0c85\") " pod="kube-system/kube-proxy-8mrmt" Mar 12 05:12:20.604197 kubelet[2890]: I0312 05:12:20.603698 2890 status_manager.go:895] "Failed to get status for pod" podUID="0c46435f-5ec4-4d2f-9780-7fa34fd4064f" pod="tigera-operator/tigera-operator-6bf85f8dd-2gjq2" err="pods \"tigera-operator-6bf85f8dd-2gjq2\" is forbidden: User \"system:node:srv-ro1yv.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 
'srv-ro1yv.gb1.brightbox.com' and this object" Mar 12 05:12:20.607046 kubelet[2890]: I0312 05:12:20.606987 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9j7v\" (UniqueName: \"kubernetes.io/projected/0c46435f-5ec4-4d2f-9780-7fa34fd4064f-kube-api-access-f9j7v\") pod \"tigera-operator-6bf85f8dd-2gjq2\" (UID: \"0c46435f-5ec4-4d2f-9780-7fa34fd4064f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-2gjq2" Mar 12 05:12:20.607874 kubelet[2890]: I0312 05:12:20.607112 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c46435f-5ec4-4d2f-9780-7fa34fd4064f-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-2gjq2\" (UID: \"0c46435f-5ec4-4d2f-9780-7fa34fd4064f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-2gjq2" Mar 12 05:12:20.775692 containerd[1633]: time="2026-03-12T05:12:20.775564457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mrmt,Uid:ad4f2a04-a640-4886-b780-42b6210b0c85,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:20.818690 containerd[1633]: time="2026-03-12T05:12:20.818111448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:12:20.818690 containerd[1633]: time="2026-03-12T05:12:20.818248630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:12:20.818690 containerd[1633]: time="2026-03-12T05:12:20.818315834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:20.818690 containerd[1633]: time="2026-03-12T05:12:20.818529227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:12:20.879049 containerd[1633]: time="2026-03-12T05:12:20.878915100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mrmt,Uid:ad4f2a04-a640-4886-b780-42b6210b0c85,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e45734cc20839767feed70ab1dc62b1a691f7c497538bae52c7efa6e9194c31\"" Mar 12 05:12:20.887874 containerd[1633]: time="2026-03-12T05:12:20.887684199Z" level=info msg="CreateContainer within sandbox \"0e45734cc20839767feed70ab1dc62b1a691f7c497538bae52c7efa6e9194c31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 05:12:20.901936 containerd[1633]: time="2026-03-12T05:12:20.901874435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-2gjq2,Uid:0c46435f-5ec4-4d2f-9780-7fa34fd4064f,Namespace:tigera-operator,Attempt:0,}" Mar 12 05:12:20.908076 containerd[1633]: time="2026-03-12T05:12:20.907820364Z" level=info msg="CreateContainer within sandbox \"0e45734cc20839767feed70ab1dc62b1a691f7c497538bae52c7efa6e9194c31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"45c930cc454c105f08ad3fe5a8998de003b5419bafbc585696dc4f6e31840c37\"" Mar 12 05:12:20.911420 containerd[1633]: time="2026-03-12T05:12:20.911325516Z" level=info msg="StartContainer for \"45c930cc454c105f08ad3fe5a8998de003b5419bafbc585696dc4f6e31840c37\"" Mar 12 05:12:20.966744 containerd[1633]: time="2026-03-12T05:12:20.966565281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:12:20.968789 containerd[1633]: time="2026-03-12T05:12:20.968099619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:12:20.968789 containerd[1633]: time="2026-03-12T05:12:20.968190196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:20.968789 containerd[1633]: time="2026-03-12T05:12:20.968687328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:21.007324 containerd[1633]: time="2026-03-12T05:12:21.007173560Z" level=info msg="StartContainer for \"45c930cc454c105f08ad3fe5a8998de003b5419bafbc585696dc4f6e31840c37\" returns successfully"
Mar 12 05:12:21.077977 containerd[1633]: time="2026-03-12T05:12:21.076667324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-2gjq2,Uid:0c46435f-5ec4-4d2f-9780-7fa34fd4064f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"050b36e64652f1e4d77ec669af0a7dbc065ff073823d7e9696699f3246c1645a\""
Mar 12 05:12:21.081217 containerd[1633]: time="2026-03-12T05:12:21.080853704Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 12 05:12:21.629279 systemd[1]: run-containerd-runc-k8s.io-0e45734cc20839767feed70ab1dc62b1a691f7c497538bae52c7efa6e9194c31-runc.wcVnml.mount: Deactivated successfully.
Mar 12 05:12:21.757843 kubelet[2890]: I0312 05:12:21.757727 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mrmt" podStartSLOduration=1.7576853190000001 podStartE2EDuration="1.757685319s" podCreationTimestamp="2026-03-12 05:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:12:21.755322528 +0000 UTC m=+8.265045755" watchObservedRunningTime="2026-03-12 05:12:21.757685319 +0000 UTC m=+8.267408540"
Mar 12 05:12:22.629017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531002014.mount: Deactivated successfully.
Mar 12 05:12:24.402231 containerd[1633]: time="2026-03-12T05:12:24.402079598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:24.404461 containerd[1633]: time="2026-03-12T05:12:24.404362790Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 12 05:12:24.405695 containerd[1633]: time="2026-03-12T05:12:24.405652972Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:24.409240 containerd[1633]: time="2026-03-12T05:12:24.409152779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:24.411556 containerd[1633]: time="2026-03-12T05:12:24.411482950Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.330544121s"
Mar 12 05:12:24.411632 containerd[1633]: time="2026-03-12T05:12:24.411558934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 12 05:12:24.417794 containerd[1633]: time="2026-03-12T05:12:24.417735236Z" level=info msg="CreateContainer within sandbox \"050b36e64652f1e4d77ec669af0a7dbc065ff073823d7e9696699f3246c1645a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 12 05:12:24.432628 containerd[1633]: time="2026-03-12T05:12:24.432577300Z" level=info msg="CreateContainer within sandbox \"050b36e64652f1e4d77ec669af0a7dbc065ff073823d7e9696699f3246c1645a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"199ecd4bf005c252d817db2f03bab6ea6e2185d65fcdec979dfaaeb583d48020\""
Mar 12 05:12:24.433448 containerd[1633]: time="2026-03-12T05:12:24.433398556Z" level=info msg="StartContainer for \"199ecd4bf005c252d817db2f03bab6ea6e2185d65fcdec979dfaaeb583d48020\""
Mar 12 05:12:24.527830 containerd[1633]: time="2026-03-12T05:12:24.527740717Z" level=info msg="StartContainer for \"199ecd4bf005c252d817db2f03bab6ea6e2185d65fcdec979dfaaeb583d48020\" returns successfully"
Mar 12 05:12:24.767271 kubelet[2890]: I0312 05:12:24.767112 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-2gjq2" podStartSLOduration=1.434645871 podStartE2EDuration="4.767056207s" podCreationTimestamp="2026-03-12 05:12:20 +0000 UTC" firstStartedPulling="2026-03-12 05:12:21.080080855 +0000 UTC m=+7.589804069" lastFinishedPulling="2026-03-12 05:12:24.412491192 +0000 UTC m=+10.922214405" observedRunningTime="2026-03-12 05:12:24.766769845 +0000 UTC m=+11.276493075" watchObservedRunningTime="2026-03-12 05:12:24.767056207 +0000 UTC m=+11.276779427"
Mar 12 05:12:32.091967 sudo[1940]: pam_unix(sudo:session): session closed for user root
Mar 12 05:12:32.197743 sshd[1936]: pam_unix(sshd:session): session closed for user core
Mar 12 05:12:32.213817 systemd[1]: sshd@8-10.230.44.138:22-20.161.92.111:47554.service: Deactivated successfully.
Mar 12 05:12:32.235353 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit.
Mar 12 05:12:32.236751 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 05:12:32.245855 systemd-logind[1604]: Removed session 11.
Mar 12 05:12:33.216793 kubelet[2890]: I0312 05:12:33.213824 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b328216d-c3d9-4a22-9944-47a3ed2b43b0-tigera-ca-bundle\") pod \"calico-typha-6f8d7b6cc4-vz4v5\" (UID: \"b328216d-c3d9-4a22-9944-47a3ed2b43b0\") " pod="calico-system/calico-typha-6f8d7b6cc4-vz4v5"
Mar 12 05:12:33.216793 kubelet[2890]: I0312 05:12:33.213982 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qknt4\" (UniqueName: \"kubernetes.io/projected/b328216d-c3d9-4a22-9944-47a3ed2b43b0-kube-api-access-qknt4\") pod \"calico-typha-6f8d7b6cc4-vz4v5\" (UID: \"b328216d-c3d9-4a22-9944-47a3ed2b43b0\") " pod="calico-system/calico-typha-6f8d7b6cc4-vz4v5"
Mar 12 05:12:33.216793 kubelet[2890]: I0312 05:12:33.214034 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b328216d-c3d9-4a22-9944-47a3ed2b43b0-typha-certs\") pod \"calico-typha-6f8d7b6cc4-vz4v5\" (UID: \"b328216d-c3d9-4a22-9944-47a3ed2b43b0\") " pod="calico-system/calico-typha-6f8d7b6cc4-vz4v5"
Mar 12 05:12:33.416574 kubelet[2890]: I0312 05:12:33.416523 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-lib-modules\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.416773 kubelet[2890]: I0312 05:12:33.416598 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-policysync\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.416773 kubelet[2890]: I0312 05:12:33.416643 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-xtables-lock\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.416773 kubelet[2890]: I0312 05:12:33.416678 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-bpffs\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.416773 kubelet[2890]: I0312 05:12:33.416705 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4964ad11-2459-4286-97ae-b15454ce7f1b-node-certs\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.416773 kubelet[2890]: I0312 05:12:33.416733 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4964ad11-2459-4286-97ae-b15454ce7f1b-tigera-ca-bundle\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417057 kubelet[2890]: I0312 05:12:33.416760 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-cni-bin-dir\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417057 kubelet[2890]: I0312 05:12:33.416786 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-nodeproc\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417057 kubelet[2890]: I0312 05:12:33.416811 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-var-lib-calico\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417057 kubelet[2890]: I0312 05:12:33.416835 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-var-run-calico\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417057 kubelet[2890]: I0312 05:12:33.416861 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-cni-net-dir\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417342 kubelet[2890]: I0312 05:12:33.416917 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-sys-fs\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417342 kubelet[2890]: I0312 05:12:33.416949 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-flexvol-driver-host\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417342 kubelet[2890]: I0312 05:12:33.416978 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4964ad11-2459-4286-97ae-b15454ce7f1b-cni-log-dir\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.417342 kubelet[2890]: I0312 05:12:33.417004 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmklv\" (UniqueName: \"kubernetes.io/projected/4964ad11-2459-4286-97ae-b15454ce7f1b-kube-api-access-mmklv\") pod \"calico-node-9gbjs\" (UID: \"4964ad11-2459-4286-97ae-b15454ce7f1b\") " pod="calico-system/calico-node-9gbjs"
Mar 12 05:12:33.430609 kubelet[2890]: E0312 05:12:33.429690 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3"
Mar 12 05:12:33.508122 containerd[1633]: time="2026-03-12T05:12:33.507413433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8d7b6cc4-vz4v5,Uid:b328216d-c3d9-4a22-9944-47a3ed2b43b0,Namespace:calico-system,Attempt:0,}"
Mar 12 05:12:33.521445 kubelet[2890]: I0312 05:12:33.517810 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e037c65-1cc8-4c93-b094-f3ed5dbdccf3-socket-dir\") pod \"csi-node-driver-5bhzw\" (UID: \"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3\") " pod="calico-system/csi-node-driver-5bhzw"
Mar 12 05:12:33.521445 kubelet[2890]: I0312 05:12:33.517889 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e037c65-1cc8-4c93-b094-f3ed5dbdccf3-kubelet-dir\") pod \"csi-node-driver-5bhzw\" (UID: \"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3\") " pod="calico-system/csi-node-driver-5bhzw"
Mar 12 05:12:33.521445 kubelet[2890]: I0312 05:12:33.517918 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e037c65-1cc8-4c93-b094-f3ed5dbdccf3-registration-dir\") pod \"csi-node-driver-5bhzw\" (UID: \"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3\") " pod="calico-system/csi-node-driver-5bhzw"
Mar 12 05:12:33.521445 kubelet[2890]: I0312 05:12:33.518041 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1e037c65-1cc8-4c93-b094-f3ed5dbdccf3-varrun\") pod \"csi-node-driver-5bhzw\" (UID: \"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3\") " pod="calico-system/csi-node-driver-5bhzw"
Mar 12 05:12:33.521445 kubelet[2890]: I0312 05:12:33.518080 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dqs\" (UniqueName: \"kubernetes.io/projected/1e037c65-1cc8-4c93-b094-f3ed5dbdccf3-kube-api-access-q4dqs\") pod \"csi-node-driver-5bhzw\" (UID: \"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3\") " pod="calico-system/csi-node-driver-5bhzw"
Mar 12 05:12:33.574284 kubelet[2890]: E0312 05:12:33.573285 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.574284 kubelet[2890]: W0312 05:12:33.573328 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.575320 kubelet[2890]: E0312 05:12:33.575241 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.591556 kubelet[2890]: E0312 05:12:33.591477 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.591556 kubelet[2890]: W0312 05:12:33.591549 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.591823 kubelet[2890]: E0312 05:12:33.591589 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.620664 kubelet[2890]: E0312 05:12:33.619916 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.620664 kubelet[2890]: W0312 05:12:33.619950 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.620664 kubelet[2890]: E0312 05:12:33.619989 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.620664 kubelet[2890]: E0312 05:12:33.620260 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.620664 kubelet[2890]: W0312 05:12:33.620273 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.620664 kubelet[2890]: E0312 05:12:33.620288 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.621377 kubelet[2890]: E0312 05:12:33.621349 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.621377 kubelet[2890]: W0312 05:12:33.621373 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.621537 kubelet[2890]: E0312 05:12:33.621389 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.623260 kubelet[2890]: E0312 05:12:33.622732 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.623260 kubelet[2890]: W0312 05:12:33.622754 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.623260 kubelet[2890]: E0312 05:12:33.622781 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.623680 kubelet[2890]: E0312 05:12:33.623574 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.623680 kubelet[2890]: W0312 05:12:33.623588 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.623680 kubelet[2890]: E0312 05:12:33.623601 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.624615 kubelet[2890]: E0312 05:12:33.624307 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.624615 kubelet[2890]: W0312 05:12:33.624325 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.624615 kubelet[2890]: E0312 05:12:33.624341 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.625612 kubelet[2890]: E0312 05:12:33.625048 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.625612 kubelet[2890]: W0312 05:12:33.625061 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.625612 kubelet[2890]: E0312 05:12:33.625075 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.625612 kubelet[2890]: E0312 05:12:33.625370 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.625612 kubelet[2890]: W0312 05:12:33.625382 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.625612 kubelet[2890]: E0312 05:12:33.625396 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.625707 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.626688 kubelet[2890]: W0312 05:12:33.625721 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.625734 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.625988 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.626688 kubelet[2890]: W0312 05:12:33.626002 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.626015 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.626259 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.626688 kubelet[2890]: W0312 05:12:33.626272 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.626286 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.626688 kubelet[2890]: E0312 05:12:33.626600 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.628474 kubelet[2890]: W0312 05:12:33.626613 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.626627 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.626854 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.628474 kubelet[2890]: W0312 05:12:33.626866 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.626880 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.627106 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.628474 kubelet[2890]: W0312 05:12:33.627119 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.627131 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.628474 kubelet[2890]: E0312 05:12:33.627402 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.628474 kubelet[2890]: W0312 05:12:33.627418 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.627431 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.627706 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.629705 kubelet[2890]: W0312 05:12:33.627718 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.627732 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.629093 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.629705 kubelet[2890]: W0312 05:12:33.629107 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.629122 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.629577 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.629705 kubelet[2890]: W0312 05:12:33.629592 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.629705 kubelet[2890]: E0312 05:12:33.629606 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.630330 kubelet[2890]: E0312 05:12:33.630216 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.630330 kubelet[2890]: W0312 05:12:33.630240 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.630330 kubelet[2890]: E0312 05:12:33.630255 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.631070 kubelet[2890]: E0312 05:12:33.630751 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.631070 kubelet[2890]: W0312 05:12:33.630793 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.631070 kubelet[2890]: E0312 05:12:33.630807 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.631354 kubelet[2890]: E0312 05:12:33.631330 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.631558 kubelet[2890]: W0312 05:12:33.631382 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.631558 kubelet[2890]: E0312 05:12:33.631403 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.632050 kubelet[2890]: E0312 05:12:33.631909 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.632050 kubelet[2890]: W0312 05:12:33.631931 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.632050 kubelet[2890]: E0312 05:12:33.631947 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.632587 kubelet[2890]: E0312 05:12:33.632426 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.632587 kubelet[2890]: W0312 05:12:33.632448 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.632587 kubelet[2890]: E0312 05:12:33.632464 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.635108 kubelet[2890]: E0312 05:12:33.633007 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.635108 kubelet[2890]: W0312 05:12:33.633057 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.635108 kubelet[2890]: E0312 05:12:33.633074 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.635108 kubelet[2890]: E0312 05:12:33.633594 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.635108 kubelet[2890]: W0312 05:12:33.633664 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.635108 kubelet[2890]: E0312 05:12:33.633685 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.641936 containerd[1633]: time="2026-03-12T05:12:33.641664233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 05:12:33.641936 containerd[1633]: time="2026-03-12T05:12:33.641801889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 05:12:33.641936 containerd[1633]: time="2026-03-12T05:12:33.641825360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:33.642734 containerd[1633]: time="2026-03-12T05:12:33.642492385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:33.654691 kubelet[2890]: E0312 05:12:33.654591 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 05:12:33.654691 kubelet[2890]: W0312 05:12:33.654618 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 05:12:33.654691 kubelet[2890]: E0312 05:12:33.654643 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 05:12:33.672357 containerd[1633]: time="2026-03-12T05:12:33.671868732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gbjs,Uid:4964ad11-2459-4286-97ae-b15454ce7f1b,Namespace:calico-system,Attempt:0,}"
Mar 12 05:12:33.750568 containerd[1633]: time="2026-03-12T05:12:33.745476630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 05:12:33.750568 containerd[1633]: time="2026-03-12T05:12:33.748212774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 05:12:33.750568 containerd[1633]: time="2026-03-12T05:12:33.748397629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:33.750568 containerd[1633]: time="2026-03-12T05:12:33.748798176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:12:33.789801 containerd[1633]: time="2026-03-12T05:12:33.788274425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8d7b6cc4-vz4v5,Uid:b328216d-c3d9-4a22-9944-47a3ed2b43b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"477337ac9db08f1f61f36ec9c36f48fff3043b9e1ef2ccdf3bbc1fcf419944e7\""
Mar 12 05:12:33.805732 containerd[1633]: time="2026-03-12T05:12:33.805638037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 12 05:12:33.833039 containerd[1633]: time="2026-03-12T05:12:33.832973199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gbjs,Uid:4964ad11-2459-4286-97ae-b15454ce7f1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\""
Mar 12 05:12:34.678096 kubelet[2890]: E0312 05:12:34.677462 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3"
Mar 12 05:12:35.410237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785300136.mount: Deactivated successfully.
Mar 12 05:12:36.678370 kubelet[2890]: E0312 05:12:36.678117 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3"
Mar 12 05:12:37.182109 containerd[1633]: time="2026-03-12T05:12:37.181935437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:37.183781 containerd[1633]: time="2026-03-12T05:12:37.183680549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 12 05:12:37.183781 containerd[1633]: time="2026-03-12T05:12:37.183714364Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:37.189946 containerd[1633]: time="2026-03-12T05:12:37.189873684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:12:37.190972 containerd[1633]: time="2026-03-12T05:12:37.190938410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.38523584s"
Mar 12 05:12:37.191232 containerd[1633]: time="2026-03-12T05:12:37.191086206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 05:12:37.192663 containerd[1633]: time="2026-03-12T05:12:37.192426530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 05:12:37.243972 containerd[1633]: time="2026-03-12T05:12:37.243854095Z" level=info msg="CreateContainer within sandbox \"477337ac9db08f1f61f36ec9c36f48fff3043b9e1ef2ccdf3bbc1fcf419944e7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 05:12:37.273750 containerd[1633]: time="2026-03-12T05:12:37.273693695Z" level=info msg="CreateContainer within sandbox \"477337ac9db08f1f61f36ec9c36f48fff3043b9e1ef2ccdf3bbc1fcf419944e7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c7e5e9bf3694617dd499a970429c5a28ce23f931449057fd2a727fca0b82cbf\"" Mar 12 05:12:37.275600 containerd[1633]: time="2026-03-12T05:12:37.275559144Z" level=info msg="StartContainer for \"3c7e5e9bf3694617dd499a970429c5a28ce23f931449057fd2a727fca0b82cbf\"" Mar 12 05:12:37.388243 containerd[1633]: time="2026-03-12T05:12:37.388189915Z" level=info msg="StartContainer for \"3c7e5e9bf3694617dd499a970429c5a28ce23f931449057fd2a727fca0b82cbf\" returns successfully" Mar 12 05:12:37.882471 kubelet[2890]: I0312 05:12:37.880702 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f8d7b6cc4-vz4v5" podStartSLOduration=1.49258622 podStartE2EDuration="4.880660785s" podCreationTimestamp="2026-03-12 05:12:33 +0000 UTC" firstStartedPulling="2026-03-12 05:12:33.804194967 +0000 UTC m=+20.313918175" lastFinishedPulling="2026-03-12 05:12:37.192269521 +0000 UTC m=+23.701992740" observedRunningTime="2026-03-12 05:12:37.87981866 +0000 UTC m=+24.389541894" watchObservedRunningTime="2026-03-12 05:12:37.880660785 +0000 UTC m=+24.390384006" Mar 12 05:12:37.931236 kubelet[2890]: E0312 05:12:37.931177 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Mar 12 05:12:37.931635 kubelet[2890]: W0312 05:12:37.931293 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.931635 kubelet[2890]: E0312 05:12:37.931331 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.932752 kubelet[2890]: E0312 05:12:37.932733 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.933064 kubelet[2890]: W0312 05:12:37.932817 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.933064 kubelet[2890]: E0312 05:12:37.932844 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.933940 kubelet[2890]: E0312 05:12:37.933922 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.934080 kubelet[2890]: W0312 05:12:37.934060 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.934249 kubelet[2890]: E0312 05:12:37.934153 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.934783 kubelet[2890]: E0312 05:12:37.934662 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.934783 kubelet[2890]: W0312 05:12:37.934680 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.934783 kubelet[2890]: E0312 05:12:37.934714 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.935784 kubelet[2890]: E0312 05:12:37.935598 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.935784 kubelet[2890]: W0312 05:12:37.935621 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.935784 kubelet[2890]: E0312 05:12:37.935637 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.936179 kubelet[2890]: E0312 05:12:37.936040 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.936364 kubelet[2890]: W0312 05:12:37.936302 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.936364 kubelet[2890]: E0312 05:12:37.936327 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.939034 kubelet[2890]: E0312 05:12:37.938982 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.939034 kubelet[2890]: W0312 05:12:37.939018 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.939677 kubelet[2890]: E0312 05:12:37.939038 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.939677 kubelet[2890]: E0312 05:12:37.939424 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.939677 kubelet[2890]: W0312 05:12:37.939438 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.939677 kubelet[2890]: E0312 05:12:37.939452 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.940115 kubelet[2890]: E0312 05:12:37.940093 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.940181 kubelet[2890]: W0312 05:12:37.940132 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.940181 kubelet[2890]: E0312 05:12:37.940151 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.940462 kubelet[2890]: E0312 05:12:37.940444 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.940560 kubelet[2890]: W0312 05:12:37.940462 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.940560 kubelet[2890]: E0312 05:12:37.940495 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.940822 kubelet[2890]: E0312 05:12:37.940805 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.940887 kubelet[2890]: W0312 05:12:37.940822 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.940887 kubelet[2890]: E0312 05:12:37.940855 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.941199 kubelet[2890]: E0312 05:12:37.941180 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.941267 kubelet[2890]: W0312 05:12:37.941208 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.941267 kubelet[2890]: E0312 05:12:37.941224 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.941512 kubelet[2890]: E0312 05:12:37.941483 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.941584 kubelet[2890]: W0312 05:12:37.941501 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.941584 kubelet[2890]: E0312 05:12:37.941545 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.941857 kubelet[2890]: E0312 05:12:37.941824 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.941928 kubelet[2890]: W0312 05:12:37.941857 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.941928 kubelet[2890]: E0312 05:12:37.941873 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.942181 kubelet[2890]: E0312 05:12:37.942164 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.942246 kubelet[2890]: W0312 05:12:37.942181 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.942246 kubelet[2890]: E0312 05:12:37.942213 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.959113 kubelet[2890]: E0312 05:12:37.958927 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.959113 kubelet[2890]: W0312 05:12:37.958962 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.959113 kubelet[2890]: E0312 05:12:37.958990 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.959640 kubelet[2890]: E0312 05:12:37.959301 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.959640 kubelet[2890]: W0312 05:12:37.959315 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.959640 kubelet[2890]: E0312 05:12:37.959329 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.960031 kubelet[2890]: E0312 05:12:37.959866 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.960031 kubelet[2890]: W0312 05:12:37.959888 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.960031 kubelet[2890]: E0312 05:12:37.959904 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.960472 kubelet[2890]: E0312 05:12:37.960454 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.960716 kubelet[2890]: W0312 05:12:37.960565 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.960716 kubelet[2890]: E0312 05:12:37.960589 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.960954 kubelet[2890]: E0312 05:12:37.960937 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.961155 kubelet[2890]: W0312 05:12:37.961059 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.961155 kubelet[2890]: E0312 05:12:37.961084 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.961668 kubelet[2890]: E0312 05:12:37.961488 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.961668 kubelet[2890]: W0312 05:12:37.961521 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.961668 kubelet[2890]: E0312 05:12:37.961538 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.962109 kubelet[2890]: E0312 05:12:37.961912 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.962109 kubelet[2890]: W0312 05:12:37.961928 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.962109 kubelet[2890]: E0312 05:12:37.961943 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.962985 kubelet[2890]: E0312 05:12:37.962633 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.962985 kubelet[2890]: W0312 05:12:37.962653 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.962985 kubelet[2890]: E0312 05:12:37.962678 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.963228 kubelet[2890]: E0312 05:12:37.963213 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.963295 kubelet[2890]: W0312 05:12:37.963228 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.963295 kubelet[2890]: E0312 05:12:37.963244 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.963664 kubelet[2890]: E0312 05:12:37.963640 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.963737 kubelet[2890]: W0312 05:12:37.963663 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.963737 kubelet[2890]: E0312 05:12:37.963680 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.964025 kubelet[2890]: E0312 05:12:37.963994 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.964025 kubelet[2890]: W0312 05:12:37.964023 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.964135 kubelet[2890]: E0312 05:12:37.964040 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.964344 kubelet[2890]: E0312 05:12:37.964326 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.964407 kubelet[2890]: W0312 05:12:37.964345 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.964407 kubelet[2890]: E0312 05:12:37.964359 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.965084 kubelet[2890]: E0312 05:12:37.964914 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.965084 kubelet[2890]: W0312 05:12:37.964933 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.965084 kubelet[2890]: E0312 05:12:37.964948 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.965367 kubelet[2890]: E0312 05:12:37.965314 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.965367 kubelet[2890]: W0312 05:12:37.965331 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.965367 kubelet[2890]: E0312 05:12:37.965346 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.966144 kubelet[2890]: E0312 05:12:37.965839 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.966144 kubelet[2890]: W0312 05:12:37.965874 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.966144 kubelet[2890]: E0312 05:12:37.965891 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.966428 kubelet[2890]: E0312 05:12:37.966409 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.966573 kubelet[2890]: W0312 05:12:37.966552 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.966692 kubelet[2890]: E0312 05:12:37.966672 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:37.967181 kubelet[2890]: E0312 05:12:37.967160 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.967181 kubelet[2890]: W0312 05:12:37.967180 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.967305 kubelet[2890]: E0312 05:12:37.967196 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:37.967546 kubelet[2890]: E0312 05:12:37.967498 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:37.967609 kubelet[2890]: W0312 05:12:37.967548 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:37.967609 kubelet[2890]: E0312 05:12:37.967564 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.678427 kubelet[2890]: E0312 05:12:38.678303 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:38.808605 containerd[1633]: time="2026-03-12T05:12:38.807498777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:38.813133 containerd[1633]: time="2026-03-12T05:12:38.812762987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 12 05:12:38.819567 containerd[1633]: time="2026-03-12T05:12:38.819303220Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:38.823535 containerd[1633]: time="2026-03-12T05:12:38.823005579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:38.824208 containerd[1633]: time="2026-03-12T05:12:38.824168732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.63168107s" Mar 12 05:12:38.824289 containerd[1633]: time="2026-03-12T05:12:38.824213418Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 05:12:38.832723 containerd[1633]: time="2026-03-12T05:12:38.832386903Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 05:12:38.838617 kubelet[2890]: I0312 05:12:38.838392 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 05:12:38.851121 kubelet[2890]: E0312 05:12:38.849920 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.851121 kubelet[2890]: W0312 05:12:38.849955 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.851121 kubelet[2890]: E0312 05:12:38.849990 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.851403 kubelet[2890]: E0312 05:12:38.851165 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.851403 kubelet[2890]: W0312 05:12:38.851182 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.851403 kubelet[2890]: E0312 05:12:38.851198 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852007 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.852895 kubelet[2890]: W0312 05:12:38.852035 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852052 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852359 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.852895 kubelet[2890]: W0312 05:12:38.852373 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852406 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852775 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.852895 kubelet[2890]: W0312 05:12:38.852827 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.852895 kubelet[2890]: E0312 05:12:38.852855 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.853988 kubelet[2890]: E0312 05:12:38.853601 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.853988 kubelet[2890]: W0312 05:12:38.853618 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.853988 kubelet[2890]: E0312 05:12:38.853644 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.853988 kubelet[2890]: E0312 05:12:38.853951 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.854365 kubelet[2890]: W0312 05:12:38.853970 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.854365 kubelet[2890]: E0312 05:12:38.854230 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.854819 kubelet[2890]: E0312 05:12:38.854648 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.854819 kubelet[2890]: W0312 05:12:38.854665 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.854819 kubelet[2890]: E0312 05:12:38.854679 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.855290 kubelet[2890]: E0312 05:12:38.855170 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.855290 kubelet[2890]: W0312 05:12:38.855187 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.855290 kubelet[2890]: E0312 05:12:38.855219 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.855794 kubelet[2890]: E0312 05:12:38.855674 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.855794 kubelet[2890]: W0312 05:12:38.855690 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.855794 kubelet[2890]: E0312 05:12:38.855722 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.856407 kubelet[2890]: E0312 05:12:38.856211 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.856407 kubelet[2890]: W0312 05:12:38.856227 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.856407 kubelet[2890]: E0312 05:12:38.856261 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.856913 kubelet[2890]: E0312 05:12:38.856765 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.856913 kubelet[2890]: W0312 05:12:38.856782 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.856913 kubelet[2890]: E0312 05:12:38.856806 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.857380 kubelet[2890]: E0312 05:12:38.857237 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.857380 kubelet[2890]: W0312 05:12:38.857254 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.857380 kubelet[2890]: E0312 05:12:38.857286 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.857880 kubelet[2890]: E0312 05:12:38.857738 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.857880 kubelet[2890]: W0312 05:12:38.857755 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.857880 kubelet[2890]: E0312 05:12:38.857770 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.858362 kubelet[2890]: E0312 05:12:38.858261 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.858362 kubelet[2890]: W0312 05:12:38.858279 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.858362 kubelet[2890]: E0312 05:12:38.858293 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.870861 kubelet[2890]: E0312 05:12:38.870819 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.870861 kubelet[2890]: W0312 05:12:38.870852 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.871125 kubelet[2890]: E0312 05:12:38.870888 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.872467 kubelet[2890]: E0312 05:12:38.871768 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.872467 kubelet[2890]: W0312 05:12:38.871850 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.872467 kubelet[2890]: E0312 05:12:38.871870 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.873436 kubelet[2890]: E0312 05:12:38.873385 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.873436 kubelet[2890]: W0312 05:12:38.873410 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.873436 kubelet[2890]: E0312 05:12:38.873428 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.873840 kubelet[2890]: E0312 05:12:38.873809 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.873840 kubelet[2890]: W0312 05:12:38.873832 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.873946 kubelet[2890]: E0312 05:12:38.873848 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.874184 kubelet[2890]: E0312 05:12:38.874153 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.874184 kubelet[2890]: W0312 05:12:38.874176 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.874298 kubelet[2890]: E0312 05:12:38.874191 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.874556 kubelet[2890]: E0312 05:12:38.874495 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.874556 kubelet[2890]: W0312 05:12:38.874537 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.874669 kubelet[2890]: E0312 05:12:38.874561 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.874905 kubelet[2890]: E0312 05:12:38.874879 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.874905 kubelet[2890]: W0312 05:12:38.874898 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.875167 kubelet[2890]: E0312 05:12:38.874913 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.875297 kubelet[2890]: E0312 05:12:38.875277 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.875297 kubelet[2890]: W0312 05:12:38.875296 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.875420 kubelet[2890]: E0312 05:12:38.875310 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.875673 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.893080 kubelet[2890]: W0312 05:12:38.875687 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.875704 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.876221 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.893080 kubelet[2890]: W0312 05:12:38.876237 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.876252 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.876565 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.893080 kubelet[2890]: W0312 05:12:38.876578 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.876592 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.893080 kubelet[2890]: E0312 05:12:38.876851 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894190 kubelet[2890]: W0312 05:12:38.876865 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.876878 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.877142 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894190 kubelet[2890]: W0312 05:12:38.877155 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.877168 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.877445 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894190 kubelet[2890]: W0312 05:12:38.877458 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.877479 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.894190 kubelet[2890]: E0312 05:12:38.878258 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894190 kubelet[2890]: W0312 05:12:38.878298 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.878315 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.878631 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894901 kubelet[2890]: W0312 05:12:38.878645 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.878659 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.879063 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894901 kubelet[2890]: W0312 05:12:38.879078 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.879093 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.880782 2890 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 05:12:38.894901 kubelet[2890]: W0312 05:12:38.880796 2890 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 05:12:38.894901 kubelet[2890]: E0312 05:12:38.880814 2890 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 05:12:38.932232 containerd[1633]: time="2026-03-12T05:12:38.932031135Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536\"" Mar 12 05:12:38.935283 containerd[1633]: time="2026-03-12T05:12:38.935144025Z" level=info msg="StartContainer for \"6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536\"" Mar 12 05:12:39.051165 containerd[1633]: time="2026-03-12T05:12:39.051032730Z" level=info msg="StartContainer for \"6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536\" returns successfully" Mar 12 05:12:39.171886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536-rootfs.mount: Deactivated successfully. 
Mar 12 05:12:39.203000 containerd[1633]: time="2026-03-12T05:12:39.177064152Z" level=info msg="shim disconnected" id=6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536 namespace=k8s.io Mar 12 05:12:39.203000 containerd[1633]: time="2026-03-12T05:12:39.200686453Z" level=warning msg="cleaning up after shim disconnected" id=6dd990553821c85b8b1d03150a371abb4ba5cbf93d85d09a949f424551d1d536 namespace=k8s.io Mar 12 05:12:39.203000 containerd[1633]: time="2026-03-12T05:12:39.200726445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:12:39.844663 containerd[1633]: time="2026-03-12T05:12:39.844604405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 05:12:40.677943 kubelet[2890]: E0312 05:12:40.677584 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:42.681584 kubelet[2890]: E0312 05:12:42.680920 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:44.678064 kubelet[2890]: E0312 05:12:44.677997 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:46.678060 kubelet[2890]: E0312 05:12:46.677941 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:48.679012 kubelet[2890]: E0312 05:12:48.678438 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:50.016696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131682968.mount: Deactivated successfully. Mar 12 05:12:50.073445 containerd[1633]: time="2026-03-12T05:12:50.070857035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 05:12:50.074695 containerd[1633]: time="2026-03-12T05:12:50.065347040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:50.075859 containerd[1633]: time="2026-03-12T05:12:50.075825685Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:50.081236 containerd[1633]: time="2026-03-12T05:12:50.081141422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:50.082248 containerd[1633]: time="2026-03-12T05:12:50.082065535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.237384976s" Mar 12 05:12:50.082248 containerd[1633]: time="2026-03-12T05:12:50.082121161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 05:12:50.133689 containerd[1633]: time="2026-03-12T05:12:50.133521676Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 05:12:50.171066 containerd[1633]: time="2026-03-12T05:12:50.170881027Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72\"" Mar 12 05:12:50.171896 containerd[1633]: time="2026-03-12T05:12:50.171828374Z" level=info msg="StartContainer for \"91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72\"" Mar 12 05:12:50.307749 containerd[1633]: time="2026-03-12T05:12:50.307304303Z" level=info msg="StartContainer for \"91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72\" returns successfully" Mar 12 05:12:50.525486 containerd[1633]: time="2026-03-12T05:12:50.525384647Z" level=info msg="shim disconnected" id=91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72 namespace=k8s.io Mar 12 05:12:50.526097 containerd[1633]: time="2026-03-12T05:12:50.525816787Z" level=warning msg="cleaning up after shim disconnected" id=91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72 namespace=k8s.io Mar 12 05:12:50.526097 containerd[1633]: time="2026-03-12T05:12:50.525845711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:12:50.543938 containerd[1633]: 
time="2026-03-12T05:12:50.543820813Z" level=warning msg="cleanup warnings time=\"2026-03-12T05:12:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 12 05:12:50.678753 kubelet[2890]: E0312 05:12:50.678069 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:50.904354 containerd[1633]: time="2026-03-12T05:12:50.904202038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 05:12:51.015955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91e2c092822dcf74874a737956bbb277f7093f117388ae913c8f547e1bd8fb72-rootfs.mount: Deactivated successfully. Mar 12 05:12:51.357193 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:12:51.360202 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:12:51.357289 systemd-resolved[1514]: Flushed all caches. 
Mar 12 05:12:52.361799 kubelet[2890]: I0312 05:12:52.361175 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 05:12:52.678532 kubelet[2890]: E0312 05:12:52.677960 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:54.678183 kubelet[2890]: E0312 05:12:54.678106 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:55.680101 containerd[1633]: time="2026-03-12T05:12:55.679973930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:55.683590 containerd[1633]: time="2026-03-12T05:12:55.683545449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 05:12:55.684689 containerd[1633]: time="2026-03-12T05:12:55.684573020Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:55.690522 containerd[1633]: time="2026-03-12T05:12:55.689461665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:12:55.690670 containerd[1633]: time="2026-03-12T05:12:55.690585053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.786333532s" Mar 12 05:12:55.690670 containerd[1633]: time="2026-03-12T05:12:55.690624803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 05:12:55.698125 containerd[1633]: time="2026-03-12T05:12:55.698083995Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 05:12:55.753689 containerd[1633]: time="2026-03-12T05:12:55.753596152Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade\"" Mar 12 05:12:55.759881 containerd[1633]: time="2026-03-12T05:12:55.755385694Z" level=info msg="StartContainer for \"739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade\"" Mar 12 05:12:55.866532 containerd[1633]: time="2026-03-12T05:12:55.866409186Z" level=info msg="StartContainer for \"739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade\" returns successfully" Mar 12 05:12:56.678702 kubelet[2890]: E0312 05:12:56.678173 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhzw" podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:57.077333 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade-rootfs.mount: Deactivated successfully. Mar 12 05:12:57.097654 containerd[1633]: time="2026-03-12T05:12:57.095839069Z" level=info msg="shim disconnected" id=739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade namespace=k8s.io Mar 12 05:12:57.097654 containerd[1633]: time="2026-03-12T05:12:57.096148398Z" level=warning msg="cleaning up after shim disconnected" id=739843a56ee80d7b29a0f3059381b0b8fbe5e2386e9008c3e251129f44556ade namespace=k8s.io Mar 12 05:12:57.097654 containerd[1633]: time="2026-03-12T05:12:57.096199883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:12:57.113262 kubelet[2890]: I0312 05:12:57.110901 2890 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 05:12:57.375433 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:12:57.372691 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:12:57.372745 systemd-resolved[1514]: Flushed all caches. 
Mar 12 05:12:57.431462 kubelet[2890]: I0312 05:12:57.430833 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cv5z\" (UniqueName: \"kubernetes.io/projected/eee9e053-fb8a-4138-a624-424d81f26460-kube-api-access-4cv5z\") pod \"coredns-674b8bbfcf-k855v\" (UID: \"eee9e053-fb8a-4138-a624-424d81f26460\") " pod="kube-system/coredns-674b8bbfcf-k855v" Mar 12 05:12:57.431462 kubelet[2890]: I0312 05:12:57.430923 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9fp\" (UniqueName: \"kubernetes.io/projected/78d0621d-bd85-4055-838b-0df73189181d-kube-api-access-rr9fp\") pod \"whisker-f6dd97758-ssf6z\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:57.431462 kubelet[2890]: I0312 05:12:57.430958 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4pp\" (UniqueName: \"kubernetes.io/projected/e3733251-e6b4-40f8-bb40-330bae9490b4-kube-api-access-zw4pp\") pod \"goldmane-5b85766d88-wp76k\" (UID: \"e3733251-e6b4-40f8-bb40-330bae9490b4\") " pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:57.431462 kubelet[2890]: I0312 05:12:57.430998 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04ffee6a-8064-4cfa-b109-0aa108677fba-calico-apiserver-certs\") pod \"calico-apiserver-d9476cfd5-4m7gh\" (UID: \"04ffee6a-8064-4cfa-b109-0aa108677fba\") " pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" Mar 12 05:12:57.431462 kubelet[2890]: I0312 05:12:57.431022 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfbr\" (UniqueName: \"kubernetes.io/projected/bc68d85e-8701-44b5-913d-95c4533a5538-kube-api-access-zlfbr\") pod \"coredns-674b8bbfcf-ckbxf\" (UID: 
\"bc68d85e-8701-44b5-913d-95c4533a5538\") " pod="kube-system/coredns-674b8bbfcf-ckbxf" Mar 12 05:12:57.431867 kubelet[2890]: I0312 05:12:57.431071 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78d0621d-bd85-4055-838b-0df73189181d-whisker-backend-key-pair\") pod \"whisker-f6dd97758-ssf6z\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:57.431867 kubelet[2890]: I0312 05:12:57.431098 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-whisker-ca-bundle\") pod \"whisker-f6dd97758-ssf6z\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:57.431867 kubelet[2890]: I0312 05:12:57.431153 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee9e053-fb8a-4138-a624-424d81f26460-config-volume\") pod \"coredns-674b8bbfcf-k855v\" (UID: \"eee9e053-fb8a-4138-a624-424d81f26460\") " pod="kube-system/coredns-674b8bbfcf-k855v" Mar 12 05:12:57.431867 kubelet[2890]: I0312 05:12:57.431208 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxlt\" (UniqueName: \"kubernetes.io/projected/04ffee6a-8064-4cfa-b109-0aa108677fba-kube-api-access-flxlt\") pod \"calico-apiserver-d9476cfd5-4m7gh\" (UID: \"04ffee6a-8064-4cfa-b109-0aa108677fba\") " pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" Mar 12 05:12:57.431867 kubelet[2890]: I0312 05:12:57.431237 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bc68d85e-8701-44b5-913d-95c4533a5538-config-volume\") pod \"coredns-674b8bbfcf-ckbxf\" (UID: \"bc68d85e-8701-44b5-913d-95c4533a5538\") " pod="kube-system/coredns-674b8bbfcf-ckbxf" Mar 12 05:12:57.432141 kubelet[2890]: I0312 05:12:57.431276 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3733251-e6b4-40f8-bb40-330bae9490b4-config\") pod \"goldmane-5b85766d88-wp76k\" (UID: \"e3733251-e6b4-40f8-bb40-330bae9490b4\") " pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:57.432141 kubelet[2890]: I0312 05:12:57.431312 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e3733251-e6b4-40f8-bb40-330bae9490b4-goldmane-key-pair\") pod \"goldmane-5b85766d88-wp76k\" (UID: \"e3733251-e6b4-40f8-bb40-330bae9490b4\") " pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:57.432141 kubelet[2890]: I0312 05:12:57.431343 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-nginx-config\") pod \"whisker-f6dd97758-ssf6z\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:57.432141 kubelet[2890]: I0312 05:12:57.431378 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3733251-e6b4-40f8-bb40-330bae9490b4-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-wp76k\" (UID: \"e3733251-e6b4-40f8-bb40-330bae9490b4\") " pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:57.532689 kubelet[2890]: I0312 05:12:57.532623 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zcg\" 
(UniqueName: \"kubernetes.io/projected/2bd6d5d1-effa-4aa5-8222-37da12221ee2-kube-api-access-g5zcg\") pod \"calico-kube-controllers-74f6db979d-5bgk7\" (UID: \"2bd6d5d1-effa-4aa5-8222-37da12221ee2\") " pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" Mar 12 05:12:57.533097 kubelet[2890]: I0312 05:12:57.533068 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd6d5d1-effa-4aa5-8222-37da12221ee2-tigera-ca-bundle\") pod \"calico-kube-controllers-74f6db979d-5bgk7\" (UID: \"2bd6d5d1-effa-4aa5-8222-37da12221ee2\") " pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" Mar 12 05:12:57.533360 kubelet[2890]: I0312 05:12:57.533336 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14e6b323-cc02-482c-813c-8ef2159483f9-calico-apiserver-certs\") pod \"calico-apiserver-d9476cfd5-frq46\" (UID: \"14e6b323-cc02-482c-813c-8ef2159483f9\") " pod="calico-system/calico-apiserver-d9476cfd5-frq46" Mar 12 05:12:57.533558 kubelet[2890]: I0312 05:12:57.533524 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmhx\" (UniqueName: \"kubernetes.io/projected/14e6b323-cc02-482c-813c-8ef2159483f9-kube-api-access-5bmhx\") pod \"calico-apiserver-d9476cfd5-frq46\" (UID: \"14e6b323-cc02-482c-813c-8ef2159483f9\") " pod="calico-system/calico-apiserver-d9476cfd5-frq46" Mar 12 05:12:57.632066 containerd[1633]: time="2026-03-12T05:12:57.630977245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wp76k,Uid:e3733251-e6b4-40f8-bb40-330bae9490b4,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:57.632855 containerd[1633]: time="2026-03-12T05:12:57.631877957Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-f6dd97758-ssf6z,Uid:78d0621d-bd85-4055-838b-0df73189181d,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:57.641714 containerd[1633]: time="2026-03-12T05:12:57.639375158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-4m7gh,Uid:04ffee6a-8064-4cfa-b109-0aa108677fba,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:57.662595 containerd[1633]: time="2026-03-12T05:12:57.662284760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckbxf,Uid:bc68d85e-8701-44b5-913d-95c4533a5538,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:57.894496 containerd[1633]: time="2026-03-12T05:12:57.893969057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k855v,Uid:eee9e053-fb8a-4138-a624-424d81f26460,Namespace:kube-system,Attempt:0,}" Mar 12 05:12:57.953568 containerd[1633]: time="2026-03-12T05:12:57.953219693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-frq46,Uid:14e6b323-cc02-482c-813c-8ef2159483f9,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:57.978602 containerd[1633]: time="2026-03-12T05:12:57.978147194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f6db979d-5bgk7,Uid:2bd6d5d1-effa-4aa5-8222-37da12221ee2,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:57.985169 containerd[1633]: time="2026-03-12T05:12:57.984557733Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 05:12:58.062540 containerd[1633]: time="2026-03-12T05:12:58.062419570Z" level=info msg="CreateContainer within sandbox \"020133a675b5a2980992e6795ceaecdac8b1938f581e5fabeadd436cf2d05e1a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"64ec4f253ac53ed636a4f0f16b09453535b83b1623d14b3c84fcf249801df4e2\"" Mar 12 05:12:58.065975 containerd[1633]: 
time="2026-03-12T05:12:58.065652501Z" level=info msg="StartContainer for \"64ec4f253ac53ed636a4f0f16b09453535b83b1623d14b3c84fcf249801df4e2\"" Mar 12 05:12:58.361401 containerd[1633]: time="2026-03-12T05:12:58.360133576Z" level=error msg="Failed to destroy network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.365542 containerd[1633]: time="2026-03-12T05:12:58.365079563Z" level=error msg="encountered an error cleaning up failed sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.367278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a-shm.mount: Deactivated successfully. 
Mar 12 05:12:58.401428 containerd[1633]: time="2026-03-12T05:12:58.401351541Z" level=error msg="Failed to destroy network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.405527 containerd[1633]: time="2026-03-12T05:12:58.404914026Z" level=error msg="encountered an error cleaning up failed sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.409707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787-shm.mount: Deactivated successfully. 
Mar 12 05:12:58.411420 containerd[1633]: time="2026-03-12T05:12:58.409686907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wp76k,Uid:e3733251-e6b4-40f8-bb40-330bae9490b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.432396 containerd[1633]: time="2026-03-12T05:12:58.432325015Z" level=error msg="Failed to destroy network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.434127 containerd[1633]: time="2026-03-12T05:12:58.434080370Z" level=error msg="encountered an error cleaning up failed sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.434218 containerd[1633]: time="2026-03-12T05:12:58.434155917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-4m7gh,Uid:04ffee6a-8064-4cfa-b109-0aa108677fba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.437390 containerd[1633]: 
time="2026-03-12T05:12:58.437345148Z" level=error msg="Failed to destroy network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.437875 containerd[1633]: time="2026-03-12T05:12:58.437839495Z" level=error msg="encountered an error cleaning up failed sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.437964 containerd[1633]: time="2026-03-12T05:12:58.437894171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f6dd97758-ssf6z,Uid:78d0621d-bd85-4055-838b-0df73189181d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.438190 containerd[1633]: time="2026-03-12T05:12:58.437964478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckbxf,Uid:bc68d85e-8701-44b5-913d-95c4533a5538,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.439220 kubelet[2890]: E0312 05:12:58.439148 2890 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.440708 kubelet[2890]: E0312 05:12:58.439753 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.440708 kubelet[2890]: E0312 05:12:58.440156 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:58.441002 kubelet[2890]: E0312 05:12:58.440714 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ckbxf" Mar 12 05:12:58.445539 kubelet[2890]: E0312 05:12:58.444589 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ckbxf" Mar 12 05:12:58.445539 kubelet[2890]: E0312 05:12:58.444759 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ckbxf_kube-system(bc68d85e-8701-44b5-913d-95c4533a5538)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ckbxf_kube-system(bc68d85e-8701-44b5-913d-95c4533a5538)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ckbxf" podUID="bc68d85e-8701-44b5-913d-95c4533a5538" Mar 12 05:12:58.445539 kubelet[2890]: E0312 05:12:58.445000 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.445865 kubelet[2890]: E0312 05:12:58.445048 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" Mar 12 05:12:58.445865 kubelet[2890]: E0312 05:12:58.445087 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" Mar 12 05:12:58.445865 kubelet[2890]: E0312 05:12:58.445129 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9476cfd5-4m7gh_calico-system(04ffee6a-8064-4cfa-b109-0aa108677fba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9476cfd5-4m7gh_calico-system(04ffee6a-8064-4cfa-b109-0aa108677fba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" podUID="04ffee6a-8064-4cfa-b109-0aa108677fba" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.445654691Z" level=error msg="Failed to destroy network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.446832456Z" level=error msg="encountered an error cleaning up failed sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.446916454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f6db979d-5bgk7,Uid:2bd6d5d1-effa-4aa5-8222-37da12221ee2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.447424552Z" level=error msg="Failed to destroy network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.448186808Z" level=error msg="encountered an error cleaning up failed sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449533 containerd[1633]: time="2026-03-12T05:12:58.448294892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k855v,Uid:eee9e053-fb8a-4138-a624-424d81f26460,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449906 kubelet[2890]: E0312 05:12:58.445187 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.449906 kubelet[2890]: E0312 05:12:58.445219 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:58.449906 kubelet[2890]: E0312 05:12:58.445241 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f6dd97758-ssf6z" Mar 12 05:12:58.450060 kubelet[2890]: E0312 05:12:58.445290 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f6dd97758-ssf6z_calico-system(78d0621d-bd85-4055-838b-0df73189181d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f6dd97758-ssf6z_calico-system(78d0621d-bd85-4055-838b-0df73189181d)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f6dd97758-ssf6z" podUID="78d0621d-bd85-4055-838b-0df73189181d" Mar 12 05:12:58.450060 kubelet[2890]: E0312 05:12:58.445805 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-wp76k" Mar 12 05:12:58.450060 kubelet[2890]: E0312 05:12:58.445866 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-wp76k_calico-system(e3733251-e6b4-40f8-bb40-330bae9490b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-wp76k_calico-system(e3733251-e6b4-40f8-bb40-330bae9490b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-wp76k" podUID="e3733251-e6b4-40f8-bb40-330bae9490b4" Mar 12 05:12:58.450300 kubelet[2890]: E0312 05:12:58.447603 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.450300 kubelet[2890]: E0312 05:12:58.447649 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" Mar 12 05:12:58.450300 kubelet[2890]: E0312 05:12:58.447673 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" Mar 12 05:12:58.452119 kubelet[2890]: E0312 05:12:58.447730 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74f6db979d-5bgk7_calico-system(2bd6d5d1-effa-4aa5-8222-37da12221ee2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74f6db979d-5bgk7_calico-system(2bd6d5d1-effa-4aa5-8222-37da12221ee2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" 
podUID="2bd6d5d1-effa-4aa5-8222-37da12221ee2" Mar 12 05:12:58.452119 kubelet[2890]: E0312 05:12:58.448560 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.452119 kubelet[2890]: E0312 05:12:58.449561 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k855v" Mar 12 05:12:58.452655 kubelet[2890]: E0312 05:12:58.449596 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k855v" Mar 12 05:12:58.452655 kubelet[2890]: E0312 05:12:58.449679 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k855v_kube-system(eee9e053-fb8a-4138-a624-424d81f26460)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k855v_kube-system(eee9e053-fb8a-4138-a624-424d81f26460)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k855v" podUID="eee9e053-fb8a-4138-a624-424d81f26460" Mar 12 05:12:58.461531 containerd[1633]: time="2026-03-12T05:12:58.461444754Z" level=error msg="Failed to destroy network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.461937 containerd[1633]: time="2026-03-12T05:12:58.461903906Z" level=error msg="encountered an error cleaning up failed sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.462016 containerd[1633]: time="2026-03-12T05:12:58.461967376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-frq46,Uid:14e6b323-cc02-482c-813c-8ef2159483f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.462278 kubelet[2890]: E0312 05:12:58.462208 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.462360 kubelet[2890]: E0312 05:12:58.462300 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d9476cfd5-frq46" Mar 12 05:12:58.462360 kubelet[2890]: E0312 05:12:58.462339 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d9476cfd5-frq46" Mar 12 05:12:58.462474 kubelet[2890]: E0312 05:12:58.462415 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9476cfd5-frq46_calico-system(14e6b323-cc02-482c-813c-8ef2159483f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9476cfd5-frq46_calico-system(14e6b323-cc02-482c-813c-8ef2159483f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d9476cfd5-frq46" podUID="14e6b323-cc02-482c-813c-8ef2159483f9" Mar 12 05:12:58.474804 containerd[1633]: 
time="2026-03-12T05:12:58.474607934Z" level=info msg="StartContainer for \"64ec4f253ac53ed636a4f0f16b09453535b83b1623d14b3c84fcf249801df4e2\" returns successfully" Mar 12 05:12:58.684567 containerd[1633]: time="2026-03-12T05:12:58.682691871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhzw,Uid:1e037c65-1cc8-4c93-b094-f3ed5dbdccf3,Namespace:calico-system,Attempt:0,}" Mar 12 05:12:58.788320 containerd[1633]: time="2026-03-12T05:12:58.788260056Z" level=error msg="Failed to destroy network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.789077 containerd[1633]: time="2026-03-12T05:12:58.788991915Z" level=error msg="encountered an error cleaning up failed sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.789077 containerd[1633]: time="2026-03-12T05:12:58.789061632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhzw,Uid:1e037c65-1cc8-4c93-b094-f3ed5dbdccf3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.789832 kubelet[2890]: E0312 05:12:58.789487 2890 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 05:12:58.790582 kubelet[2890]: E0312 05:12:58.789995 2890 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bhzw" Mar 12 05:12:58.790582 kubelet[2890]: E0312 05:12:58.790048 2890 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bhzw" Mar 12 05:12:58.790582 kubelet[2890]: E0312 05:12:58.790133 2890 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5bhzw_calico-system(1e037c65-1cc8-4c93-b094-f3ed5dbdccf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5bhzw_calico-system(1e037c65-1cc8-4c93-b094-f3ed5dbdccf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5bhzw" 
podUID="1e037c65-1cc8-4c93-b094-f3ed5dbdccf3" Mar 12 05:12:58.938685 kubelet[2890]: I0312 05:12:58.936979 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:12:58.946824 kubelet[2890]: I0312 05:12:58.946737 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:12:59.000177 kubelet[2890]: I0312 05:12:58.999715 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:12:59.060001 containerd[1633]: time="2026-03-12T05:12:59.059585464Z" level=info msg="StopPodSandbox for \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\"" Mar 12 05:12:59.062583 containerd[1633]: time="2026-03-12T05:12:59.062531299Z" level=info msg="Ensure that sandbox ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d in task-service has been cleanup successfully" Mar 12 05:12:59.063055 containerd[1633]: time="2026-03-12T05:12:59.062816491Z" level=info msg="StopPodSandbox for \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\"" Mar 12 05:12:59.063055 containerd[1633]: time="2026-03-12T05:12:59.063028139Z" level=info msg="Ensure that sandbox 46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef in task-service has been cleanup successfully" Mar 12 05:12:59.065110 containerd[1633]: time="2026-03-12T05:12:59.063265922Z" level=info msg="StopPodSandbox for \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\"" Mar 12 05:12:59.082357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef-shm.mount: Deactivated successfully. 
Mar 12 05:12:59.089585 containerd[1633]: time="2026-03-12T05:12:59.087051628Z" level=info msg="Ensure that sandbox c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787 in task-service has been cleanup successfully" Mar 12 05:12:59.082860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5-shm.mount: Deactivated successfully. Mar 12 05:12:59.083285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e-shm.mount: Deactivated successfully. Mar 12 05:12:59.083653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94-shm.mount: Deactivated successfully. Mar 12 05:12:59.083867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d-shm.mount: Deactivated successfully. Mar 12 05:12:59.114538 kubelet[2890]: I0312 05:12:59.113218 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:12:59.132287 containerd[1633]: time="2026-03-12T05:12:59.127852860Z" level=info msg="StopPodSandbox for \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\"" Mar 12 05:12:59.132287 containerd[1633]: time="2026-03-12T05:12:59.128106947Z" level=info msg="Ensure that sandbox ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851 in task-service has been cleanup successfully" Mar 12 05:12:59.144496 kubelet[2890]: I0312 05:12:59.144459 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:12:59.147962 containerd[1633]: time="2026-03-12T05:12:59.147922339Z" level=info msg="StopPodSandbox for 
\"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\"" Mar 12 05:12:59.148530 containerd[1633]: time="2026-03-12T05:12:59.148484923Z" level=info msg="Ensure that sandbox 7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5 in task-service has been cleanup successfully" Mar 12 05:12:59.165900 kubelet[2890]: I0312 05:12:59.165856 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:12:59.167329 containerd[1633]: time="2026-03-12T05:12:59.167265958Z" level=info msg="StopPodSandbox for \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\"" Mar 12 05:12:59.167850 containerd[1633]: time="2026-03-12T05:12:59.167819134Z" level=info msg="Ensure that sandbox 954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e in task-service has been cleanup successfully" Mar 12 05:12:59.174146 kubelet[2890]: I0312 05:12:59.174110 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:12:59.176433 kubelet[2890]: I0312 05:12:59.176346 2890 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:12:59.177083 containerd[1633]: time="2026-03-12T05:12:59.176550381Z" level=info msg="StopPodSandbox for \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\"" Mar 12 05:12:59.177498 containerd[1633]: time="2026-03-12T05:12:59.177465923Z" level=info msg="Ensure that sandbox 52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a in task-service has been cleanup successfully" Mar 12 05:12:59.193698 containerd[1633]: time="2026-03-12T05:12:59.193449873Z" level=info msg="StopPodSandbox for \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\"" Mar 12 05:12:59.196124 containerd[1633]: 
time="2026-03-12T05:12:59.196094085Z" level=info msg="Ensure that sandbox a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94 in task-service has been cleanup successfully" Mar 12 05:12:59.258767 kubelet[2890]: I0312 05:12:59.255004 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9gbjs" podStartSLOduration=4.39792605 podStartE2EDuration="26.254956484s" podCreationTimestamp="2026-03-12 05:12:33 +0000 UTC" firstStartedPulling="2026-03-12 05:12:33.834903183 +0000 UTC m=+20.344626396" lastFinishedPulling="2026-03-12 05:12:55.691933616 +0000 UTC m=+42.201656830" observedRunningTime="2026-03-12 05:12:59.249879999 +0000 UTC m=+45.759603229" watchObservedRunningTime="2026-03-12 05:12:59.254956484 +0000 UTC m=+45.764679704" Mar 12 05:12:59.429576 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:12:59.427258 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:12:59.427272 systemd-resolved[1514]: Flushed all caches. Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.849 [INFO][4057] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.852 [INFO][4057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" iface="eth0" netns="/var/run/netns/cni-8edbf5ae-2168-727d-1ec3-afe875c9d1d1" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.863 [INFO][4057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" iface="eth0" netns="/var/run/netns/cni-8edbf5ae-2168-727d-1ec3-afe875c9d1d1" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.868 [INFO][4057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" iface="eth0" netns="/var/run/netns/cni-8edbf5ae-2168-727d-1ec3-afe875c9d1d1" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.869 [INFO][4057] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:12:59.869 [INFO][4057] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.495 [INFO][4168] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.499 [INFO][4168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.500 [INFO][4168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.538 [WARNING][4168] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.538 [INFO][4168] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.542 [INFO][4168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.558461 containerd[1633]: 2026-03-12 05:13:00.548 [INFO][4057] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:00.558461 containerd[1633]: time="2026-03-12T05:13:00.555648158Z" level=info msg="TearDown network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" successfully" Mar 12 05:13:00.558461 containerd[1633]: time="2026-03-12T05:13:00.555688824Z" level=info msg="StopPodSandbox for \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" returns successfully" Mar 12 05:13:00.565807 systemd[1]: run-netns-cni\x2d8edbf5ae\x2d2168\x2d727d\x2d1ec3\x2dafe875c9d1d1.mount: Deactivated successfully. 
Mar 12 05:13:00.588966 containerd[1633]: time="2026-03-12T05:13:00.588801766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-4m7gh,Uid:04ffee6a-8064-4cfa-b109-0aa108677fba,Namespace:calico-system,Attempt:1,}" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.731 [INFO][4109] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.734 [INFO][4109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" iface="eth0" netns="/var/run/netns/cni-977b5a1c-d056-d777-0988-b04fa98d64ad" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.741 [INFO][4109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" iface="eth0" netns="/var/run/netns/cni-977b5a1c-d056-d777-0988-b04fa98d64ad" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.743 [INFO][4109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" iface="eth0" netns="/var/run/netns/cni-977b5a1c-d056-d777-0988-b04fa98d64ad" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.745 [INFO][4109] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:12:59.748 [INFO][4109] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.493 [INFO][4144] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.504 [INFO][4144] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.542 [INFO][4144] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.562 [WARNING][4144] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.562 [INFO][4144] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.570 [INFO][4144] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.593284 containerd[1633]: 2026-03-12 05:13:00.581 [INFO][4109] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:00.596262 containerd[1633]: time="2026-03-12T05:13:00.596223199Z" level=info msg="TearDown network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" successfully" Mar 12 05:13:00.596767 containerd[1633]: time="2026-03-12T05:13:00.596738733Z" level=info msg="StopPodSandbox for \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" returns successfully" Mar 12 05:13:00.600377 systemd[1]: run-netns-cni\x2d977b5a1c\x2dd056\x2dd777\x2d0988\x2db04fa98d64ad.mount: Deactivated successfully. Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.717 [INFO][4034] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.720 [INFO][4034] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" iface="eth0" netns="/var/run/netns/cni-9ac2ab3f-82ea-9b82-19d3-81c94d6f5be4" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.741 [INFO][4034] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" iface="eth0" netns="/var/run/netns/cni-9ac2ab3f-82ea-9b82-19d3-81c94d6f5be4" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.760 [INFO][4034] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" iface="eth0" netns="/var/run/netns/cni-9ac2ab3f-82ea-9b82-19d3-81c94d6f5be4" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.760 [INFO][4034] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:12:59.760 [INFO][4034] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.495 [INFO][4143] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.505 [INFO][4143] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.575 [INFO][4143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.619 [WARNING][4143] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.619 [INFO][4143] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.625 [INFO][4143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.646916 containerd[1633]: 2026-03-12 05:13:00.641 [INFO][4034] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:00.652333 containerd[1633]: time="2026-03-12T05:13:00.649283523Z" level=info msg="TearDown network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" successfully" Mar 12 05:13:00.652333 containerd[1633]: time="2026-03-12T05:13:00.649320332Z" level=info msg="StopPodSandbox for \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" returns successfully" Mar 12 05:13:00.655131 systemd[1]: run-netns-cni\x2d9ac2ab3f\x2d82ea\x2d9b82\x2d19d3\x2d81c94d6f5be4.mount: Deactivated successfully. 
Mar 12 05:13:00.660627 containerd[1633]: time="2026-03-12T05:13:00.659323186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f6db979d-5bgk7,Uid:2bd6d5d1-effa-4aa5-8222-37da12221ee2,Namespace:calico-system,Attempt:1,}" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.692 [INFO][4087] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.693 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" iface="eth0" netns="/var/run/netns/cni-b0d029f7-715c-c662-51d3-d468d2d2cc83" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.693 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" iface="eth0" netns="/var/run/netns/cni-b0d029f7-715c-c662-51d3-d468d2d2cc83" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.696 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" iface="eth0" netns="/var/run/netns/cni-b0d029f7-715c-c662-51d3-d468d2d2cc83" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.696 [INFO][4087] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:12:59.697 [INFO][4087] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.491 [INFO][4137] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.517 [INFO][4137] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.623 [INFO][4137] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.639 [WARNING][4137] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.640 [INFO][4137] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.647 [INFO][4137] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.679717 containerd[1633]: 2026-03-12 05:13:00.664 [INFO][4087] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:00.687668 containerd[1633]: time="2026-03-12T05:13:00.682413454Z" level=info msg="TearDown network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" successfully" Mar 12 05:13:00.687668 containerd[1633]: time="2026-03-12T05:13:00.682459627Z" level=info msg="StopPodSandbox for \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" returns successfully" Mar 12 05:13:00.687668 containerd[1633]: time="2026-03-12T05:13:00.683968840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-frq46,Uid:14e6b323-cc02-482c-813c-8ef2159483f9,Namespace:calico-system,Attempt:1,}" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.846 [INFO][4056] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.846 [INFO][4056] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" iface="eth0" netns="/var/run/netns/cni-c622b426-67c8-bcf9-dd3a-f01c75ecfc47" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.848 [INFO][4056] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" iface="eth0" netns="/var/run/netns/cni-c622b426-67c8-bcf9-dd3a-f01c75ecfc47" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.867 [INFO][4056] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" iface="eth0" netns="/var/run/netns/cni-c622b426-67c8-bcf9-dd3a-f01c75ecfc47" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.867 [INFO][4056] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:12:59.868 [INFO][4056] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.509 [INFO][4169] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.517 [INFO][4169] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.648 [INFO][4169] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.677 [WARNING][4169] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.677 [INFO][4169] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.683 [INFO][4169] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.743844 containerd[1633]: 2026-03-12 05:13:00.706 [INFO][4056] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:00.749945 containerd[1633]: time="2026-03-12T05:13:00.744614837Z" level=info msg="TearDown network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" successfully" Mar 12 05:13:00.749945 containerd[1633]: time="2026-03-12T05:13:00.744653392Z" level=info msg="StopPodSandbox for \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" returns successfully" Mar 12 05:13:00.749945 containerd[1633]: time="2026-03-12T05:13:00.746842350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhzw,Uid:1e037c65-1cc8-4c93-b094-f3ed5dbdccf3,Namespace:calico-system,Attempt:1,}" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.790 [INFO][4054] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.790 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" iface="eth0" netns="/var/run/netns/cni-52d15452-2e51-c0bb-c3dd-ed00dd6041f7" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.791 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" iface="eth0" netns="/var/run/netns/cni-52d15452-2e51-c0bb-c3dd-ed00dd6041f7" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.804 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" iface="eth0" netns="/var/run/netns/cni-52d15452-2e51-c0bb-c3dd-ed00dd6041f7" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.804 [INFO][4054] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:12:59.804 [INFO][4054] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.494 [INFO][4155] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.517 [INFO][4155] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.684 [INFO][4155] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.704 [WARNING][4155] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.705 [INFO][4155] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.710 [INFO][4155] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.795186 containerd[1633]: 2026-03-12 05:13:00.742 [INFO][4054] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:00.806217 containerd[1633]: time="2026-03-12T05:13:00.805144698Z" level=info msg="TearDown network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" successfully" Mar 12 05:13:00.806217 containerd[1633]: time="2026-03-12T05:13:00.805206819Z" level=info msg="StopPodSandbox for \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" returns successfully" Mar 12 05:13:00.811195 kubelet[2890]: I0312 05:13:00.810222 2890 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-nginx-config\") pod \"78d0621d-bd85-4055-838b-0df73189181d\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " Mar 12 05:13:00.811195 kubelet[2890]: I0312 05:13:00.810346 2890 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr9fp\" (UniqueName: 
\"kubernetes.io/projected/78d0621d-bd85-4055-838b-0df73189181d-kube-api-access-rr9fp\") pod \"78d0621d-bd85-4055-838b-0df73189181d\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " Mar 12 05:13:00.811195 kubelet[2890]: I0312 05:13:00.810396 2890 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78d0621d-bd85-4055-838b-0df73189181d-whisker-backend-key-pair\") pod \"78d0621d-bd85-4055-838b-0df73189181d\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " Mar 12 05:13:00.811195 kubelet[2890]: I0312 05:13:00.810425 2890 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-whisker-ca-bundle\") pod \"78d0621d-bd85-4055-838b-0df73189181d\" (UID: \"78d0621d-bd85-4055-838b-0df73189181d\") " Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.798 [INFO][4101] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.803 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" iface="eth0" netns="/var/run/netns/cni-6ff67237-c742-99db-576b-0add304401c4" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.803 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" iface="eth0" netns="/var/run/netns/cni-6ff67237-c742-99db-576b-0add304401c4" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.806 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" iface="eth0" netns="/var/run/netns/cni-6ff67237-c742-99db-576b-0add304401c4" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.806 [INFO][4101] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:12:59.806 [INFO][4101] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.492 [INFO][4158] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.499 [INFO][4158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.710 [INFO][4158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.740 [WARNING][4158] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.740 [INFO][4158] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.750 [INFO][4158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.828146 containerd[1633]: 2026-03-12 05:13:00.781 [INFO][4101] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:00.836876 containerd[1633]: time="2026-03-12T05:13:00.828810666Z" level=info msg="TearDown network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" successfully" Mar 12 05:13:00.836876 containerd[1633]: time="2026-03-12T05:13:00.828852838Z" level=info msg="StopPodSandbox for \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" returns successfully" Mar 12 05:13:00.863544 kubelet[2890]: I0312 05:13:00.862322 2890 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "78d0621d-bd85-4055-838b-0df73189181d" (UID: "78d0621d-bd85-4055-838b-0df73189181d"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 05:13:00.890897 kubelet[2890]: I0312 05:13:00.826615 2890 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "78d0621d-bd85-4055-838b-0df73189181d" (UID: "78d0621d-bd85-4055-838b-0df73189181d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 05:13:00.899764 containerd[1633]: time="2026-03-12T05:13:00.897678060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wp76k,Uid:e3733251-e6b4-40f8-bb40-330bae9490b4,Namespace:calico-system,Attempt:1,}" Mar 12 05:13:00.899764 containerd[1633]: time="2026-03-12T05:13:00.898089323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k855v,Uid:eee9e053-fb8a-4138-a624-424d81f26460,Namespace:kube-system,Attempt:1,}" Mar 12 05:13:00.902354 kubelet[2890]: I0312 05:13:00.901820 2890 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d0621d-bd85-4055-838b-0df73189181d-kube-api-access-rr9fp" (OuterVolumeSpecName: "kube-api-access-rr9fp") pod "78d0621d-bd85-4055-838b-0df73189181d" (UID: "78d0621d-bd85-4055-838b-0df73189181d"). InnerVolumeSpecName "kube-api-access-rr9fp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.862 [INFO][4100] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.872 [INFO][4100] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" iface="eth0" netns="/var/run/netns/cni-a30bab3a-3063-7f67-553c-0411228c9e1e" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.873 [INFO][4100] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" iface="eth0" netns="/var/run/netns/cni-a30bab3a-3063-7f67-553c-0411228c9e1e" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.879 [INFO][4100] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" iface="eth0" netns="/var/run/netns/cni-a30bab3a-3063-7f67-553c-0411228c9e1e" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.879 [INFO][4100] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:12:59.879 [INFO][4100] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.513 [INFO][4170] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.518 [INFO][4170] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.750 [INFO][4170] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.799 [WARNING][4170] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.799 [INFO][4170] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.829 [INFO][4170] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:00.906222 containerd[1633]: 2026-03-12 05:13:00.883 [INFO][4100] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:00.909548 containerd[1633]: time="2026-03-12T05:13:00.909473085Z" level=info msg="TearDown network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" successfully" Mar 12 05:13:00.909791 containerd[1633]: time="2026-03-12T05:13:00.909763867Z" level=info msg="StopPodSandbox for \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" returns successfully" Mar 12 05:13:00.921902 kubelet[2890]: I0312 05:13:00.921853 2890 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d0621d-bd85-4055-838b-0df73189181d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "78d0621d-bd85-4055-838b-0df73189181d" (UID: "78d0621d-bd85-4055-838b-0df73189181d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 05:13:00.923014 containerd[1633]: time="2026-03-12T05:13:00.922974885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckbxf,Uid:bc68d85e-8701-44b5-913d-95c4533a5538,Namespace:kube-system,Attempt:1,}" Mar 12 05:13:00.956348 kubelet[2890]: I0312 05:13:00.956298 2890 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rr9fp\" (UniqueName: \"kubernetes.io/projected/78d0621d-bd85-4055-838b-0df73189181d-kube-api-access-rr9fp\") on node \"srv-ro1yv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:13:00.956500 kubelet[2890]: I0312 05:13:00.956376 2890 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-whisker-ca-bundle\") on node \"srv-ro1yv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:13:00.956500 kubelet[2890]: I0312 05:13:00.956401 2890 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/78d0621d-bd85-4055-838b-0df73189181d-nginx-config\") on node \"srv-ro1yv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:13:01.058534 kubelet[2890]: I0312 05:13:01.056994 2890 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78d0621d-bd85-4055-838b-0df73189181d-whisker-backend-key-pair\") on node \"srv-ro1yv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:13:01.145140 systemd[1]: run-netns-cni\x2dc622b426\x2d67c8\x2dbcf9\x2ddd3a\x2df01c75ecfc47.mount: Deactivated successfully. Mar 12 05:13:01.147862 systemd[1]: run-netns-cni\x2db0d029f7\x2d715c\x2dc662\x2d51d3\x2dd468d2d2cc83.mount: Deactivated successfully. Mar 12 05:13:01.148090 systemd[1]: run-netns-cni\x2d6ff67237\x2dc742\x2d99db\x2d576b\x2d0add304401c4.mount: Deactivated successfully. 
Mar 12 05:13:01.148271 systemd[1]: run-netns-cni\x2d52d15452\x2d2e51\x2dc0bb\x2dc3dd\x2ded00dd6041f7.mount: Deactivated successfully. Mar 12 05:13:01.148444 systemd[1]: run-netns-cni\x2da30bab3a\x2d3063\x2d7f67\x2d553c\x2d0411228c9e1e.mount: Deactivated successfully. Mar 12 05:13:01.150705 systemd[1]: var-lib-kubelet-pods-78d0621d\x2dbd85\x2d4055\x2d838b\x2d0df73189181d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drr9fp.mount: Deactivated successfully. Mar 12 05:13:01.150905 systemd[1]: var-lib-kubelet-pods-78d0621d\x2dbd85\x2d4055\x2d838b\x2d0df73189181d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 05:13:01.493350 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:01.468646 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:01.468662 systemd-resolved[1514]: Flushed all caches. Mar 12 05:13:01.566337 kubelet[2890]: I0312 05:13:01.563749 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/515520a2-7949-4d9f-8ba6-efe0d38937c4-whisker-backend-key-pair\") pod \"whisker-f9c5dd5f-h6rph\" (UID: \"515520a2-7949-4d9f-8ba6-efe0d38937c4\") " pod="calico-system/whisker-f9c5dd5f-h6rph" Mar 12 05:13:01.566337 kubelet[2890]: I0312 05:13:01.563834 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/515520a2-7949-4d9f-8ba6-efe0d38937c4-nginx-config\") pod \"whisker-f9c5dd5f-h6rph\" (UID: \"515520a2-7949-4d9f-8ba6-efe0d38937c4\") " pod="calico-system/whisker-f9c5dd5f-h6rph" Mar 12 05:13:01.566337 kubelet[2890]: I0312 05:13:01.563871 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7fh\" (UniqueName: 
\"kubernetes.io/projected/515520a2-7949-4d9f-8ba6-efe0d38937c4-kube-api-access-4g7fh\") pod \"whisker-f9c5dd5f-h6rph\" (UID: \"515520a2-7949-4d9f-8ba6-efe0d38937c4\") " pod="calico-system/whisker-f9c5dd5f-h6rph" Mar 12 05:13:01.566337 kubelet[2890]: I0312 05:13:01.563909 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/515520a2-7949-4d9f-8ba6-efe0d38937c4-whisker-ca-bundle\") pod \"whisker-f9c5dd5f-h6rph\" (UID: \"515520a2-7949-4d9f-8ba6-efe0d38937c4\") " pod="calico-system/whisker-f9c5dd5f-h6rph" Mar 12 05:13:01.709989 kubelet[2890]: I0312 05:13:01.706296 2890 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d0621d-bd85-4055-838b-0df73189181d" path="/var/lib/kubelet/pods/78d0621d-bd85-4055-838b-0df73189181d/volumes" Mar 12 05:13:01.786543 kernel: calico-node[4273]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 05:13:01.876010 containerd[1633]: time="2026-03-12T05:13:01.874704526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9c5dd5f-h6rph,Uid:515520a2-7949-4d9f-8ba6-efe0d38937c4,Namespace:calico-system,Attempt:0,}" Mar 12 05:13:01.881835 systemd-networkd[1260]: cali5261df2ca9b: Link UP Mar 12 05:13:01.901033 systemd-networkd[1260]: cali5261df2ca9b: Gained carrier Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:00.764 [ERROR][4315] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:00.859 [INFO][4315] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0 calico-apiserver-d9476cfd5- calico-system 04ffee6a-8064-4cfa-b109-0aa108677fba 943 0 2026-03-12 05:12:32 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9476cfd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com calico-apiserver-d9476cfd5-4m7gh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5261df2ca9b [] [] }} ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:00.859 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.346 [INFO][4364] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" HandleID="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.492 [INFO][4364] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" HandleID="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ea990), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", 
"pod":"calico-apiserver-d9476cfd5-4m7gh", "timestamp":"2026-03-12 05:13:01.346137674 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00019cf20)} Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.495 [INFO][4364] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.495 [INFO][4364] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.513 [INFO][4364] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.574 [INFO][4364] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.608 [INFO][4364] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.633 [INFO][4364] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.656 [INFO][4364] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.660 [INFO][4364] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.660 [INFO][4364] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 
handle="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.662 [INFO][4364] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4 Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.673 [INFO][4364] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.702 [INFO][4364] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.1/26] block=192.168.29.0/26 handle="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.718 [INFO][4364] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.1/26] handle="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.719 [INFO][4364] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 05:13:02.014996 containerd[1633]: 2026-03-12 05:13:01.719 [INFO][4364] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.1/26] IPv6=[] ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" HandleID="k8s-pod-network.fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.779 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"04ffee6a-8064-4cfa-b109-0aa108677fba", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-d9476cfd5-4m7gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.1/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5261df2ca9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.812 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.1/32] ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.812 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5261df2ca9b ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.899 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.900 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"04ffee6a-8064-4cfa-b109-0aa108677fba", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4", Pod:"calico-apiserver-d9476cfd5-4m7gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5261df2ca9b", MAC:"46:7a:bf:ff:ae:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.031589 containerd[1633]: 2026-03-12 05:13:01.942 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-4m7gh" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:02.124172 systemd[1]: run-containerd-runc-k8s.io-64ec4f253ac53ed636a4f0f16b09453535b83b1623d14b3c84fcf249801df4e2-runc.bBk5Bh.mount: 
Deactivated successfully. Mar 12 05:13:02.488393 systemd-networkd[1260]: calib26cf8949ec: Link UP Mar 12 05:13:02.489487 systemd-networkd[1260]: calib26cf8949ec: Gained carrier Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.021 [ERROR][4353] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.159 [INFO][4353] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0 calico-apiserver-d9476cfd5- calico-system 14e6b323-cc02-482c-813c-8ef2159483f9 939 0 2026-03-12 05:12:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9476cfd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com calico-apiserver-d9476cfd5-frq46 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib26cf8949ec [] [] }} ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.172 [INFO][4353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.457 [INFO][4416] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" HandleID="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.552 [INFO][4416] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" HandleID="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdeb0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"calico-apiserver-d9476cfd5-frq46", "timestamp":"2026-03-12 05:13:01.457419968 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ccdc0)} Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.553 [INFO][4416] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.720 [INFO][4416] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.721 [INFO][4416] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.735 [INFO][4416] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:01.809 [INFO][4416] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.009 [INFO][4416] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.062 [INFO][4416] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.076 [INFO][4416] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.090 [INFO][4416] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.149 [INFO][4416] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610 Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.283 [INFO][4416] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.359 [INFO][4416] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.2/26] block=192.168.29.0/26 handle="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.359 [INFO][4416] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.2/26] handle="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.359 [INFO][4416] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:02.707978 containerd[1633]: 2026-03-12 05:13:02.359 [INFO][4416] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.2/26] IPv6=[] ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" HandleID="k8s-pod-network.6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.415 [INFO][4353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"14e6b323-cc02-482c-813c-8ef2159483f9", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-d9476cfd5-frq46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib26cf8949ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.416 [INFO][4353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.2/32] ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.416 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib26cf8949ec ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.491 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" 
Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.536 [INFO][4353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"14e6b323-cc02-482c-813c-8ef2159483f9", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610", Pod:"calico-apiserver-d9476cfd5-frq46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib26cf8949ec", 
MAC:"52:1e:70:b7:bc:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.713869 containerd[1633]: 2026-03-12 05:13:02.662 [INFO][4353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610" Namespace="calico-system" Pod="calico-apiserver-d9476cfd5-frq46" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:02.756537 systemd-networkd[1260]: cali6c0aa17a69a: Link UP Mar 12 05:13:02.775810 systemd-networkd[1260]: cali6c0aa17a69a: Gained carrier Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:00.873 [ERROR][4332] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:00.909 [INFO][4332] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0 calico-kube-controllers-74f6db979d- calico-system 2bd6d5d1-effa-4aa5-8222-37da12221ee2 938 0 2026-03-12 05:12:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74f6db979d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com calico-kube-controllers-74f6db979d-5bgk7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c0aa17a69a [] [] }} ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" 
WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:00.910 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:01.756 [INFO][4352] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" HandleID="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:01.782 [INFO][4352] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" HandleID="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043a7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"calico-kube-controllers-74f6db979d-5bgk7", "timestamp":"2026-03-12 05:13:01.756362274 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f2160)} Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:01.782 [INFO][4352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.375 [INFO][4352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.375 [INFO][4352] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.481 [INFO][4352] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.566 [INFO][4352] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.577 [INFO][4352] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.601 [INFO][4352] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.655 [INFO][4352] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.655 [INFO][4352] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.660 [INFO][4352] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84 Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.670 [INFO][4352] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.687 [INFO][4352] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.3/26] block=192.168.29.0/26 handle="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.687 [INFO][4352] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.3/26] handle="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.687 [INFO][4352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:02.904674 containerd[1633]: 2026-03-12 05:13:02.687 [INFO][4352] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.3/26] IPv6=[] ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" HandleID="k8s-pod-network.a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.704 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0", GenerateName:"calico-kube-controllers-74f6db979d-", Namespace:"calico-system", SelfLink:"", UID:"2bd6d5d1-effa-4aa5-8222-37da12221ee2", 
ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f6db979d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-74f6db979d-5bgk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c0aa17a69a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.705 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.3/32] ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.705 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c0aa17a69a ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" 
Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.747 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.766 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0", GenerateName:"calico-kube-controllers-74f6db979d-", Namespace:"calico-system", SelfLink:"", UID:"2bd6d5d1-effa-4aa5-8222-37da12221ee2", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f6db979d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84", 
Pod:"calico-kube-controllers-74f6db979d-5bgk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c0aa17a69a", MAC:"82:94:02:4b:ae:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:02.930600 containerd[1633]: 2026-03-12 05:13:02.786 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84" Namespace="calico-system" Pod="calico-kube-controllers-74f6db979d-5bgk7" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:03.150206 systemd-networkd[1260]: cali998b2968d08: Link UP Mar 12 05:13:03.152129 systemd-networkd[1260]: cali998b2968d08: Gained carrier Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:01.840 [INFO][4387] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0 coredns-674b8bbfcf- kube-system eee9e053-fb8a-4138-a624-424d81f26460 942 0 2026-03-12 05:12:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com coredns-674b8bbfcf-k855v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali998b2968d08 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-" Mar 12 05:13:03.325570 
containerd[1633]: 2026-03-12 05:13:01.845 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:01.982 [INFO][4486] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" HandleID="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.019 [INFO][4486] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" HandleID="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7860), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-k855v", "timestamp":"2026-03-12 05:13:01.982232922 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002711e0)} Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.019 [INFO][4486] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.692 [INFO][4486] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.692 [INFO][4486] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.697 [INFO][4486] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.799 [INFO][4486] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.925 [INFO][4486] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.948 [INFO][4486] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.965 [INFO][4486] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.965 [INFO][4486] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:02.971 [INFO][4486] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561 Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:03.010 [INFO][4486] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:03.055 [INFO][4486] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.4/26] block=192.168.29.0/26 handle="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:03.055 [INFO][4486] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.4/26] handle="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:03.079 [INFO][4486] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:03.325570 containerd[1633]: 2026-03-12 05:13:03.080 [INFO][4486] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.4/26] IPv6=[] ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" HandleID="k8s-pod-network.00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.128 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eee9e053-fb8a-4138-a624-424d81f26460", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-k855v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali998b2968d08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.129 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.4/32] ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.132 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali998b2968d08 ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.154 [INFO][4387] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.155 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eee9e053-fb8a-4138-a624-424d81f26460", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561", Pod:"coredns-674b8bbfcf-k855v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali998b2968d08", 
MAC:"0e:a6:43:8b:68:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.334627 containerd[1633]: 2026-03-12 05:13:03.274 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561" Namespace="kube-system" Pod="coredns-674b8bbfcf-k855v" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:03.403322 systemd-networkd[1260]: cali12aada52f75: Link UP Mar 12 05:13:03.404025 systemd-networkd[1260]: cali12aada52f75: Gained carrier Mar 12 05:13:03.422276 containerd[1633]: time="2026-03-12T05:13:03.387246053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:03.422276 containerd[1633]: time="2026-03-12T05:13:03.404969094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:03.422276 containerd[1633]: time="2026-03-12T05:13:03.405010160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.438055 containerd[1633]: time="2026-03-12T05:13:03.437162379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.452911 systemd-networkd[1260]: cali5261df2ca9b: Gained IPv6LL Mar 12 05:13:03.527736 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:03.519246 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:03.519438 systemd-resolved[1514]: Flushed all caches. Mar 12 05:13:03.582934 systemd-networkd[1260]: calif75c59934fd: Link UP Mar 12 05:13:03.584024 systemd-networkd[1260]: calif75c59934fd: Gained carrier Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:01.918 [INFO][4403] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0 goldmane-5b85766d88- calico-system e3733251-e6b4-40f8-bb40-330bae9490b4 941 0 2026-03-12 05:12:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com goldmane-5b85766d88-wp76k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali12aada52f75 [] [] }} ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:01.919 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:02.369 [INFO][4501] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" HandleID="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:02.424 [INFO][4501] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" HandleID="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003801b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"goldmane-5b85766d88-wp76k", "timestamp":"2026-03-12 05:13:02.369153316 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00044a840)} Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:02.424 [INFO][4501] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.080 [INFO][4501] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.080 [INFO][4501] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.096 [INFO][4501] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.115 [INFO][4501] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.125 [INFO][4501] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.128 [INFO][4501] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.141 [INFO][4501] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.141 [INFO][4501] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.251 [INFO][4501] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58 Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.270 [INFO][4501] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.284 [INFO][4501] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.5/26] block=192.168.29.0/26 handle="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.284 [INFO][4501] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.5/26] handle="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.285 [INFO][4501] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:03.616955 containerd[1633]: 2026-03-12 05:13:03.285 [INFO][4501] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.5/26] IPv6=[] ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" HandleID="k8s-pod-network.95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.322 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e3733251-e6b4-40f8-bb40-330bae9490b4", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-5b85766d88-wp76k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12aada52f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.322 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.5/32] ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.322 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12aada52f75 ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.486 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.488 [INFO][4403] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e3733251-e6b4-40f8-bb40-330bae9490b4", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58", Pod:"goldmane-5b85766d88-wp76k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12aada52f75", MAC:"a2:a2:75:49:44:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.626734 containerd[1633]: 2026-03-12 05:13:03.555 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58" Namespace="calico-system" Pod="goldmane-5b85766d88-wp76k" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:03.675598 containerd[1633]: time="2026-03-12T05:13:03.675257528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:03.675598 containerd[1633]: time="2026-03-12T05:13:03.675342002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:03.676402 containerd[1633]: time="2026-03-12T05:13:03.675386532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.676402 containerd[1633]: time="2026-03-12T05:13:03.675533323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:01.669 [INFO][4367] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0 csi-node-driver- calico-system 1e037c65-1cc8-4c93-b094-f3ed5dbdccf3 945 0 2026-03-12 05:12:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com csi-node-driver-5bhzw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif75c59934fd [] [] }} ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" 
WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:01.693 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:02.465 [INFO][4470] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" HandleID="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:02.485 [INFO][4470] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" HandleID="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efda0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"csi-node-driver-5bhzw", "timestamp":"2026-03-12 05:13:02.465866344 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000555080)} Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:02.527 [INFO][4470] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.336 [INFO][4470] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.337 [INFO][4470] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.379 [INFO][4470] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.395 [INFO][4470] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.418 [INFO][4470] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.421 [INFO][4470] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.427 [INFO][4470] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.427 [INFO][4470] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.432 [INFO][4470] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.480 [INFO][4470] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.493 [INFO][4470] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.6/26] block=192.168.29.0/26 handle="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.493 [INFO][4470] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.6/26] handle="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.496 [INFO][4470] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:03.737302 containerd[1633]: 2026-03-12 05:13:03.496 [INFO][4470] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.6/26] IPv6=[] ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" HandleID="k8s-pod-network.a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.560 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-5bhzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif75c59934fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.560 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.6/32] ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.560 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif75c59934fd ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.592 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.595 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c", Pod:"csi-node-driver-5bhzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif75c59934fd", MAC:"5e:1a:30:ef:00:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:03.744611 containerd[1633]: 2026-03-12 05:13:03.682 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c" Namespace="calico-system" Pod="csi-node-driver-5bhzw" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:03.860015 containerd[1633]: time="2026-03-12T05:13:03.827818424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:03.860015 containerd[1633]: time="2026-03-12T05:13:03.827928118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:03.860015 containerd[1633]: time="2026-03-12T05:13:03.827950688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.860015 containerd[1633]: time="2026-03-12T05:13:03.828125272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.874813 containerd[1633]: time="2026-03-12T05:13:03.866979578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:03.874813 containerd[1633]: time="2026-03-12T05:13:03.867092076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:03.874813 containerd[1633]: time="2026-03-12T05:13:03.867116509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:03.874813 containerd[1633]: time="2026-03-12T05:13:03.867249701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:04.072718 containerd[1633]: time="2026-03-12T05:13:04.068954508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:04.072718 containerd[1633]: time="2026-03-12T05:13:04.071274358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:04.072718 containerd[1633]: time="2026-03-12T05:13:04.071873198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:04.079788 containerd[1633]: time="2026-03-12T05:13:04.079263517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:04.116418 systemd-networkd[1260]: cali76f956c57bb: Link UP Mar 12 05:13:04.120791 systemd-networkd[1260]: cali76f956c57bb: Gained carrier Mar 12 05:13:04.137856 systemd-networkd[1260]: cali3c227c8a893: Link UP Mar 12 05:13:04.144140 systemd-networkd[1260]: cali3c227c8a893: Gained carrier Mar 12 05:13:04.224027 systemd-networkd[1260]: calib26cf8949ec: Gained IPv6LL Mar 12 05:13:04.224469 systemd-networkd[1260]: cali6c0aa17a69a: Gained IPv6LL Mar 12 05:13:04.422682 systemd-networkd[1260]: cali998b2968d08: Gained IPv6LL Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:01.924 [INFO][4379] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0 coredns-674b8bbfcf- kube-system bc68d85e-8701-44b5-913d-95c4533a5538 944 0 2026-03-12 05:12:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com coredns-674b8bbfcf-ckbxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c227c8a893 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:01.925 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:02.517 [INFO][4511] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" HandleID="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:02.669 [INFO][4511] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" HandleID="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003539c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-ckbxf", "timestamp":"2026-03-12 05:13:02.517789962 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00053c2c0)} Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:02.670 [INFO][4511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.497 [INFO][4511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.497 [INFO][4511] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.506 [INFO][4511] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.538 [INFO][4511] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.586 [INFO][4511] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.591 [INFO][4511] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.729 [INFO][4511] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.733 [INFO][4511] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.760 [INFO][4511] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6 Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.823 [INFO][4511] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.851 [INFO][4511] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.7/26] block=192.168.29.0/26 handle="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.851 [INFO][4511] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.7/26] handle="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.852 [INFO][4511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:04.564802 containerd[1633]: 2026-03-12 05:13:03.852 [INFO][4511] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.7/26] IPv6=[] ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" HandleID="k8s-pod-network.503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:03.962 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc68d85e-8701-44b5-913d-95c4533a5538", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-ckbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c227c8a893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:03.966 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.7/32] ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:03.966 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c227c8a893 ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:04.147 [INFO][4379] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:04.151 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc68d85e-8701-44b5-913d-95c4533a5538", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6", Pod:"coredns-674b8bbfcf-ckbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c227c8a893", 
MAC:"06:47:a5:16:c6:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:04.569480 containerd[1633]: 2026-03-12 05:13:04.351 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ckbxf" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:04.583771 systemd-networkd[1260]: vxlan.calico: Link UP Mar 12 05:13:04.583777 systemd-networkd[1260]: vxlan.calico: Gained carrier Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:02.730 [INFO][4529] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0 whisker-f9c5dd5f- calico-system 515520a2-7949-4d9f-8ba6-efe0d38937c4 963 0 2026-03-12 05:13:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f9c5dd5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-ro1yv.gb1.brightbox.com whisker-f9c5dd5f-h6rph eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali76f956c57bb [] [] }} ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-" Mar 12 
05:13:04.610126 containerd[1633]: 2026-03-12 05:13:02.730 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.037 [INFO][4562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" HandleID="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.227 [INFO][4562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" HandleID="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006f5260), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ro1yv.gb1.brightbox.com", "pod":"whisker-f9c5dd5f-h6rph", "timestamp":"2026-03-12 05:13:03.037400634 +0000 UTC"}, Hostname:"srv-ro1yv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a2160)} Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.238 [INFO][4562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.852 [INFO][4562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.852 [INFO][4562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ro1yv.gb1.brightbox.com' Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.856 [INFO][4562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.872 [INFO][4562] ipam/ipam.go 409: Looking up existing affinities for host host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.886 [INFO][4562] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.895 [INFO][4562] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.901 [INFO][4562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.901 [INFO][4562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.906 [INFO][4562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.916 [INFO][4562] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.934 [INFO][4562] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.8/26] block=192.168.29.0/26 handle="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.936 [INFO][4562] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.8/26] handle="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" host="srv-ro1yv.gb1.brightbox.com" Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.936 [INFO][4562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:04.610126 containerd[1633]: 2026-03-12 05:13:03.936 [INFO][4562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.8/26] IPv6=[] ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" HandleID="k8s-pod-network.5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:03.943 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0", GenerateName:"whisker-f9c5dd5f-", Namespace:"calico-system", SelfLink:"", UID:"515520a2-7949-4d9f-8ba6-efe0d38937c4", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f9c5dd5f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"", Pod:"whisker-f9c5dd5f-h6rph", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f956c57bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:03.944 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.8/32] ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:03.944 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76f956c57bb ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:04.105 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:04.119 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0", GenerateName:"whisker-f9c5dd5f-", Namespace:"calico-system", SelfLink:"", UID:"515520a2-7949-4d9f-8ba6-efe0d38937c4", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f9c5dd5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c", Pod:"whisker-f9c5dd5f-h6rph", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f956c57bb", MAC:"26:b9:59:49:89:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:04.625176 containerd[1633]: 2026-03-12 05:13:04.271 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c" 
Namespace="calico-system" Pod="whisker-f9c5dd5f-h6rph" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f9c5dd5f--h6rph-eth0" Mar 12 05:13:04.670059 systemd-networkd[1260]: calif75c59934fd: Gained IPv6LL Mar 12 05:13:04.714439 containerd[1633]: time="2026-03-12T05:13:04.674975190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:04.714439 containerd[1633]: time="2026-03-12T05:13:04.675386257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:04.714439 containerd[1633]: time="2026-03-12T05:13:04.675416160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:04.714439 containerd[1633]: time="2026-03-12T05:13:04.675774259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:04.744384 systemd-networkd[1260]: cali12aada52f75: Gained IPv6LL Mar 12 05:13:05.206383 containerd[1633]: time="2026-03-12T05:13:05.206037450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k855v,Uid:eee9e053-fb8a-4138-a624-424d81f26460,Namespace:kube-system,Attempt:1,} returns sandbox id \"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561\"" Mar 12 05:13:05.233109 containerd[1633]: time="2026-03-12T05:13:05.232876056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-4m7gh,Uid:04ffee6a-8064-4cfa-b109-0aa108677fba,Namespace:calico-system,Attempt:1,} returns sandbox id \"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4\"" Mar 12 05:13:05.233109 containerd[1633]: time="2026-03-12T05:13:05.233067015Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-d9476cfd5-frq46,Uid:14e6b323-cc02-482c-813c-8ef2159483f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610\"" Mar 12 05:13:05.245485 systemd-networkd[1260]: cali76f956c57bb: Gained IPv6LL Mar 12 05:13:05.296060 containerd[1633]: time="2026-03-12T05:13:05.295589349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhzw,Uid:1e037c65-1cc8-4c93-b094-f3ed5dbdccf3,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c\"" Mar 12 05:13:05.313409 containerd[1633]: time="2026-03-12T05:13:05.309027021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f6db979d-5bgk7,Uid:2bd6d5d1-effa-4aa5-8222-37da12221ee2,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84\"" Mar 12 05:13:05.373682 systemd-networkd[1260]: cali3c227c8a893: Gained IPv6LL Mar 12 05:13:05.399632 containerd[1633]: time="2026-03-12T05:13:05.379620895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:05.399632 containerd[1633]: time="2026-03-12T05:13:05.379709181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:05.399632 containerd[1633]: time="2026-03-12T05:13:05.379740708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:05.399632 containerd[1633]: time="2026-03-12T05:13:05.379886788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:05.434077 containerd[1633]: time="2026-03-12T05:13:05.433649407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wp76k,Uid:e3733251-e6b4-40f8-bb40-330bae9490b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58\"" Mar 12 05:13:05.557787 containerd[1633]: time="2026-03-12T05:13:05.550834998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:13:05.557787 containerd[1633]: time="2026-03-12T05:13:05.550944342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:13:05.557787 containerd[1633]: time="2026-03-12T05:13:05.550970509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:05.557787 containerd[1633]: time="2026-03-12T05:13:05.551102478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:13:05.571523 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:05.564597 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:05.564606 systemd-resolved[1514]: Flushed all caches. 
Mar 12 05:13:05.581291 kubelet[2890]: E0312 05:13:05.581131 2890 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.884s" Mar 12 05:13:05.602209 containerd[1633]: time="2026-03-12T05:13:05.602144461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 05:13:05.723800 containerd[1633]: time="2026-03-12T05:13:05.716731008Z" level=info msg="CreateContainer within sandbox \"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 05:13:05.758035 systemd-networkd[1260]: vxlan.calico: Gained IPv6LL Mar 12 05:13:05.892572 containerd[1633]: time="2026-03-12T05:13:05.892494155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ckbxf,Uid:bc68d85e-8701-44b5-913d-95c4533a5538,Namespace:kube-system,Attempt:1,} returns sandbox id \"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6\"" Mar 12 05:13:05.911638 containerd[1633]: time="2026-03-12T05:13:05.911591560Z" level=info msg="CreateContainer within sandbox \"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 05:13:05.986208 containerd[1633]: time="2026-03-12T05:13:05.985829115Z" level=info msg="CreateContainer within sandbox \"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd2726ff7b8a025aae9f3e7a433eb2243c8b094128379e1a902fddfadca998c3\"" Mar 12 05:13:05.992016 containerd[1633]: time="2026-03-12T05:13:05.991970598Z" level=info msg="StartContainer for \"dd2726ff7b8a025aae9f3e7a433eb2243c8b094128379e1a902fddfadca998c3\"" Mar 12 05:13:05.997164 containerd[1633]: time="2026-03-12T05:13:05.996643204Z" level=info msg="CreateContainer within sandbox \"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"458b7e045752474e5c8455713ee937202dca9489d4c770f522d31e5ed4d5630f\"" Mar 12 05:13:05.997164 containerd[1633]: time="2026-03-12T05:13:05.997061871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f9c5dd5f-h6rph,Uid:515520a2-7949-4d9f-8ba6-efe0d38937c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c\"" Mar 12 05:13:05.999011 containerd[1633]: time="2026-03-12T05:13:05.998122338Z" level=info msg="StartContainer for \"458b7e045752474e5c8455713ee937202dca9489d4c770f522d31e5ed4d5630f\"" Mar 12 05:13:06.128470 containerd[1633]: time="2026-03-12T05:13:06.128423080Z" level=info msg="StartContainer for \"458b7e045752474e5c8455713ee937202dca9489d4c770f522d31e5ed4d5630f\" returns successfully" Mar 12 05:13:06.146011 containerd[1633]: time="2026-03-12T05:13:06.145452098Z" level=info msg="StartContainer for \"dd2726ff7b8a025aae9f3e7a433eb2243c8b094128379e1a902fddfadca998c3\" returns successfully" Mar 12 05:13:06.471748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206578906.mount: Deactivated successfully. 
Mar 12 05:13:06.844638 kubelet[2890]: I0312 05:13:06.822596 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k855v" podStartSLOduration=46.807942853 podStartE2EDuration="46.807942853s" podCreationTimestamp="2026-03-12 05:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:13:06.807586906 +0000 UTC m=+53.317310143" watchObservedRunningTime="2026-03-12 05:13:06.807942853 +0000 UTC m=+53.317666060" Mar 12 05:13:06.893619 kubelet[2890]: I0312 05:13:06.891966 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ckbxf" podStartSLOduration=46.891943166 podStartE2EDuration="46.891943166s" podCreationTimestamp="2026-03-12 05:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:13:06.866734378 +0000 UTC m=+53.376457607" watchObservedRunningTime="2026-03-12 05:13:06.891943166 +0000 UTC m=+53.401666385" Mar 12 05:13:09.683254 containerd[1633]: time="2026-03-12T05:13:09.682636689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 05:13:09.711239 containerd[1633]: time="2026-03-12T05:13:09.710637428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.083651763s" Mar 12 05:13:09.713082 containerd[1633]: time="2026-03-12T05:13:09.712107512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 05:13:09.717981 containerd[1633]: time="2026-03-12T05:13:09.717280353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 05:13:09.731461 containerd[1633]: time="2026-03-12T05:13:09.730636997Z" level=info msg="CreateContainer within sandbox \"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 05:13:09.738822 containerd[1633]: time="2026-03-12T05:13:09.738779133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:09.761175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167171645.mount: Deactivated successfully. Mar 12 05:13:09.766590 containerd[1633]: time="2026-03-12T05:13:09.766488160Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:09.768243 containerd[1633]: time="2026-03-12T05:13:09.767906751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:09.780124 containerd[1633]: time="2026-03-12T05:13:09.780068654Z" level=info msg="CreateContainer within sandbox \"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1606bcf1a9505be6ea76ea7d04ab17fd9de0e3e3d8083d7a7f9c174ea2fb4d30\"" Mar 12 05:13:09.784464 containerd[1633]: time="2026-03-12T05:13:09.784429892Z" level=info msg="StartContainer for \"1606bcf1a9505be6ea76ea7d04ab17fd9de0e3e3d8083d7a7f9c174ea2fb4d30\"" Mar 12 05:13:09.990901 containerd[1633]: time="2026-03-12T05:13:09.990696177Z" level=info 
msg="StartContainer for \"1606bcf1a9505be6ea76ea7d04ab17fd9de0e3e3d8083d7a7f9c174ea2fb4d30\" returns successfully" Mar 12 05:13:10.757612 systemd[1]: run-containerd-runc-k8s.io-1606bcf1a9505be6ea76ea7d04ab17fd9de0e3e3d8083d7a7f9c174ea2fb4d30-runc.tWULu8.mount: Deactivated successfully. Mar 12 05:13:11.594368 containerd[1633]: time="2026-03-12T05:13:11.593757330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:11.596109 containerd[1633]: time="2026-03-12T05:13:11.596000290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 05:13:11.598883 containerd[1633]: time="2026-03-12T05:13:11.598747427Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:11.606046 containerd[1633]: time="2026-03-12T05:13:11.605494009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:11.607803 containerd[1633]: time="2026-03-12T05:13:11.607597902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.890270784s" Mar 12 05:13:11.607953 containerd[1633]: time="2026-03-12T05:13:11.607923148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 05:13:11.612305 containerd[1633]: time="2026-03-12T05:13:11.611192485Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 05:13:11.641481 containerd[1633]: time="2026-03-12T05:13:11.640709915Z" level=info msg="CreateContainer within sandbox \"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 05:13:11.705838 containerd[1633]: time="2026-03-12T05:13:11.705729292Z" level=info msg="CreateContainer within sandbox \"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6430a808baac35989ae711827c8221f09da9bd41363ddc07a274f29c59e421f2\"" Mar 12 05:13:11.707075 containerd[1633]: time="2026-03-12T05:13:11.707014924Z" level=info msg="StartContainer for \"6430a808baac35989ae711827c8221f09da9bd41363ddc07a274f29c59e421f2\"" Mar 12 05:13:11.848679 containerd[1633]: time="2026-03-12T05:13:11.848364579Z" level=info msg="StartContainer for \"6430a808baac35989ae711827c8221f09da9bd41363ddc07a274f29c59e421f2\" returns successfully" Mar 12 05:13:11.987026 kubelet[2890]: I0312 05:13:11.982459 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 05:13:13.775844 containerd[1633]: time="2026-03-12T05:13:13.775534741Z" level=info msg="StopPodSandbox for \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\"" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.383 [WARNING][5235] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"14e6b323-cc02-482c-813c-8ef2159483f9", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610", Pod:"calico-apiserver-d9476cfd5-frq46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib26cf8949ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.389 [INFO][5235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.389 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" iface="eth0" netns="" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.389 [INFO][5235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.389 [INFO][5235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.646 [INFO][5242] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.647 [INFO][5242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.648 [INFO][5242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.668 [WARNING][5242] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.668 [INFO][5242] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.671 [INFO][5242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:14.685266 containerd[1633]: 2026-03-12 05:13:14.679 [INFO][5235] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:14.699569 containerd[1633]: time="2026-03-12T05:13:14.699251448Z" level=info msg="TearDown network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" successfully" Mar 12 05:13:14.699569 containerd[1633]: time="2026-03-12T05:13:14.699329787Z" level=info msg="StopPodSandbox for \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" returns successfully" Mar 12 05:13:14.852180 containerd[1633]: time="2026-03-12T05:13:14.852024328Z" level=info msg="RemovePodSandbox for \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\"" Mar 12 05:13:14.852180 containerd[1633]: time="2026-03-12T05:13:14.852182150Z" level=info msg="Forcibly stopping sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\"" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.068 [WARNING][5256] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"14e6b323-cc02-482c-813c-8ef2159483f9", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610", Pod:"calico-apiserver-d9476cfd5-frq46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib26cf8949ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.074 [INFO][5256] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.075 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" iface="eth0" netns="" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.075 [INFO][5256] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.075 [INFO][5256] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.184 [INFO][5264] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.185 [INFO][5264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.185 [INFO][5264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.201 [WARNING][5264] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.202 [INFO][5264] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" HandleID="k8s-pod-network.7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--frq46-eth0" Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.206 [INFO][5264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:15.214766 containerd[1633]: 2026-03-12 05:13:15.210 [INFO][5256] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5" Mar 12 05:13:15.216225 containerd[1633]: time="2026-03-12T05:13:15.214795730Z" level=info msg="TearDown network for sandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" successfully" Mar 12 05:13:15.255526 containerd[1633]: time="2026-03-12T05:13:15.254960777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:15.303226 containerd[1633]: time="2026-03-12T05:13:15.301671641Z" level=info msg="RemovePodSandbox \"7d47517c9c36b5871b06d00f2329fa0b0f5ffd56a0d54967da4cb059c007ded5\" returns successfully" Mar 12 05:13:15.325494 containerd[1633]: time="2026-03-12T05:13:15.325446153Z" level=info msg="StopPodSandbox for \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\"" Mar 12 05:13:15.491753 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:15.488578 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:15.488746 systemd-resolved[1514]: Flushed all caches. Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.461 [WARNING][5279] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e3733251-e6b4-40f8-bb40-330bae9490b4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58", 
Pod:"goldmane-5b85766d88-wp76k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12aada52f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.463 [INFO][5279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.463 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" iface="eth0" netns="" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.463 [INFO][5279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.463 [INFO][5279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.566 [INFO][5286] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.566 [INFO][5286] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.567 [INFO][5286] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.585 [WARNING][5286] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.585 [INFO][5286] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.588 [INFO][5286] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:15.597562 containerd[1633]: 2026-03-12 05:13:15.592 [INFO][5279] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.599714 containerd[1633]: time="2026-03-12T05:13:15.597593694Z" level=info msg="TearDown network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" successfully" Mar 12 05:13:15.599714 containerd[1633]: time="2026-03-12T05:13:15.597629341Z" level=info msg="StopPodSandbox for \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" returns successfully" Mar 12 05:13:15.599714 containerd[1633]: time="2026-03-12T05:13:15.598443929Z" level=info msg="RemovePodSandbox for \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\"" Mar 12 05:13:15.599714 containerd[1633]: time="2026-03-12T05:13:15.598480281Z" level=info msg="Forcibly stopping sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\"" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.720 [WARNING][5301] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e3733251-e6b4-40f8-bb40-330bae9490b4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58", Pod:"goldmane-5b85766d88-wp76k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12aada52f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.720 [INFO][5301] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.720 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" iface="eth0" netns="" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.720 [INFO][5301] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.720 [INFO][5301] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.895 [INFO][5316] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.895 [INFO][5316] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.895 [INFO][5316] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.916 [WARNING][5316] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.916 [INFO][5316] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" HandleID="k8s-pod-network.c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Workload="srv--ro1yv.gb1.brightbox.com-k8s-goldmane--5b85766d88--wp76k-eth0" Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.921 [INFO][5316] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:15.933430 containerd[1633]: 2026-03-12 05:13:15.925 [INFO][5301] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787" Mar 12 05:13:15.933430 containerd[1633]: time="2026-03-12T05:13:15.931981285Z" level=info msg="TearDown network for sandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" successfully" Mar 12 05:13:15.938849 containerd[1633]: time="2026-03-12T05:13:15.938801461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:15.939064 containerd[1633]: time="2026-03-12T05:13:15.939036916Z" level=info msg="RemovePodSandbox \"c045a31ceeb28d6ac3706563ce74f57861573f16c56211ae12ae10e143e54787\" returns successfully" Mar 12 05:13:15.940093 containerd[1633]: time="2026-03-12T05:13:15.940063781Z" level=info msg="StopPodSandbox for \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\"" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.141 [WARNING][5334] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"04ffee6a-8064-4cfa-b109-0aa108677fba", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4", Pod:"calico-apiserver-d9476cfd5-4m7gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5261df2ca9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.142 [INFO][5334] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.142 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" iface="eth0" netns="" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.142 [INFO][5334] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.142 [INFO][5334] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.299 [INFO][5347] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.300 [INFO][5347] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.300 [INFO][5347] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.319 [WARNING][5347] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.319 [INFO][5347] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.327 [INFO][5347] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:16.350268 containerd[1633]: 2026-03-12 05:13:16.339 [INFO][5334] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.351908 containerd[1633]: time="2026-03-12T05:13:16.351207339Z" level=info msg="TearDown network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" successfully" Mar 12 05:13:16.351908 containerd[1633]: time="2026-03-12T05:13:16.351247475Z" level=info msg="StopPodSandbox for \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" returns successfully" Mar 12 05:13:16.411718 containerd[1633]: time="2026-03-12T05:13:16.411486707Z" level=info msg="RemovePodSandbox for \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\"" Mar 12 05:13:16.412064 containerd[1633]: time="2026-03-12T05:13:16.411976543Z" level=info msg="Forcibly stopping sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\"" Mar 12 05:13:16.678186 containerd[1633]: time="2026-03-12T05:13:16.676478174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:16.682569 containerd[1633]: time="2026-03-12T05:13:16.682470459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 05:13:16.686542 containerd[1633]: time="2026-03-12T05:13:16.685814975Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:16.692094 containerd[1633]: time="2026-03-12T05:13:16.692052440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.532 [WARNING][5363] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0", GenerateName:"calico-apiserver-d9476cfd5-", Namespace:"calico-system", SelfLink:"", UID:"04ffee6a-8064-4cfa-b109-0aa108677fba", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9476cfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"fcfb8784f21f7dd2408f8f93d44f798cd9ea48facb275e114b2ede10849300d4", Pod:"calico-apiserver-d9476cfd5-4m7gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5261df2ca9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.535 [INFO][5363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.535 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" iface="eth0" netns="" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.535 [INFO][5363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.535 [INFO][5363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.698 [INFO][5370] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.698 [INFO][5370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.698 [INFO][5370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.709 [WARNING][5370] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.709 [INFO][5370] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" HandleID="k8s-pod-network.ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--apiserver--d9476cfd5--4m7gh-eth0" Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.712 [INFO][5370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:16.718031 containerd[1633]: 2026-03-12 05:13:16.715 [INFO][5363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d" Mar 12 05:13:16.723715 containerd[1633]: time="2026-03-12T05:13:16.723643764Z" level=info msg="TearDown network for sandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" successfully" Mar 12 05:13:16.728127 containerd[1633]: time="2026-03-12T05:13:16.726881316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.114838856s" Mar 12 05:13:16.728127 containerd[1633]: time="2026-03-12T05:13:16.726930772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 05:13:16.772602 containerd[1633]: 
time="2026-03-12T05:13:16.772526083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 05:13:16.772851 containerd[1633]: time="2026-03-12T05:13:16.772626249Z" level=info msg="RemovePodSandbox \"ce17336375ddd7f66131fb5830c9197899e70062b7337e1e560e1f3acd5c113d\" returns successfully" Mar 12 05:13:16.801585 containerd[1633]: time="2026-03-12T05:13:16.801160722Z" level=info msg="StopPodSandbox for \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\"" Mar 12 05:13:16.833944 containerd[1633]: time="2026-03-12T05:13:16.833601180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:16.937 [WARNING][5385] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c", Pod:"csi-node-driver-5bhzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif75c59934fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:16.938 [INFO][5385] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:16.938 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" iface="eth0" netns="" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:16.938 [INFO][5385] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:16.938 [INFO][5385] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.025 [INFO][5393] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.025 [INFO][5393] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.025 [INFO][5393] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.035 [WARNING][5393] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.035 [INFO][5393] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.040 [INFO][5393] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:17.048020 containerd[1633]: 2026-03-12 05:13:17.043 [INFO][5385] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.048020 containerd[1633]: time="2026-03-12T05:13:17.047124820Z" level=info msg="TearDown network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" successfully" Mar 12 05:13:17.048020 containerd[1633]: time="2026-03-12T05:13:17.047161713Z" level=info msg="StopPodSandbox for \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" returns successfully" Mar 12 05:13:17.059847 containerd[1633]: time="2026-03-12T05:13:17.059271974Z" level=info msg="RemovePodSandbox for \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\"" Mar 12 05:13:17.059847 containerd[1633]: time="2026-03-12T05:13:17.059320963Z" level=info msg="Forcibly stopping sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\"" Mar 12 05:13:17.074704 containerd[1633]: time="2026-03-12T05:13:17.074415789Z" level=info msg="CreateContainer within sandbox \"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 05:13:17.121171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589122364.mount: Deactivated successfully. Mar 12 05:13:17.128526 containerd[1633]: time="2026-03-12T05:13:17.128355071Z" level=info msg="CreateContainer within sandbox \"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dbca8874371d00114c5320ba91d8023bf1dc37e446b01920d8b82afeba36bfa4\"" Mar 12 05:13:17.131908 containerd[1633]: time="2026-03-12T05:13:17.131756135Z" level=info msg="StartContainer for \"dbca8874371d00114c5320ba91d8023bf1dc37e446b01920d8b82afeba36bfa4\"" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.163 [WARNING][5409] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e037c65-1cc8-4c93-b094-f3ed5dbdccf3", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c", Pod:"csi-node-driver-5bhzw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif75c59934fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.164 [INFO][5409] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.165 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" iface="eth0" netns="" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.165 [INFO][5409] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.165 [INFO][5409] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.215 [INFO][5417] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.215 [INFO][5417] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.215 [INFO][5417] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.241 [WARNING][5417] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.241 [INFO][5417] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" HandleID="k8s-pod-network.ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Workload="srv--ro1yv.gb1.brightbox.com-k8s-csi--node--driver--5bhzw-eth0" Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.249 [INFO][5417] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:17.257596 containerd[1633]: 2026-03-12 05:13:17.255 [INFO][5409] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851" Mar 12 05:13:17.257596 containerd[1633]: time="2026-03-12T05:13:17.257147240Z" level=info msg="TearDown network for sandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" successfully" Mar 12 05:13:17.318918 containerd[1633]: time="2026-03-12T05:13:17.318003991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:17.320642 containerd[1633]: time="2026-03-12T05:13:17.319032745Z" level=info msg="RemovePodSandbox \"ac052f3be72b1fe69e57253627c0cae615d15021d2447c375d3d0ba91f911851\" returns successfully" Mar 12 05:13:17.323599 containerd[1633]: time="2026-03-12T05:13:17.323135364Z" level=info msg="StopPodSandbox for \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\"" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.425 [WARNING][5434] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eee9e053-fb8a-4138-a624-424d81f26460", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561", Pod:"coredns-674b8bbfcf-k855v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali998b2968d08", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.426 [INFO][5434] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.426 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" iface="eth0" netns="" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.426 [INFO][5434] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.426 [INFO][5434] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.496 [INFO][5446] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.496 [INFO][5446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.496 [INFO][5446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.511 [WARNING][5446] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.511 [INFO][5446] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.514 [INFO][5446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:17.526199 containerd[1633]: 2026-03-12 05:13:17.521 [INFO][5434] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.526199 containerd[1633]: time="2026-03-12T05:13:17.525138396Z" level=info msg="TearDown network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" successfully" Mar 12 05:13:17.526199 containerd[1633]: time="2026-03-12T05:13:17.525176762Z" level=info msg="StopPodSandbox for \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" returns successfully" Mar 12 05:13:17.529015 containerd[1633]: time="2026-03-12T05:13:17.526857151Z" level=info msg="RemovePodSandbox for \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\"" Mar 12 05:13:17.529015 containerd[1633]: time="2026-03-12T05:13:17.526904337Z" level=info msg="Forcibly stopping sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\"" Mar 12 05:13:17.535131 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:17.532721 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:17.532744 systemd-resolved[1514]: Flushed all caches. Mar 12 05:13:17.717833 containerd[1633]: time="2026-03-12T05:13:17.717521392Z" level=info msg="StartContainer for \"dbca8874371d00114c5320ba91d8023bf1dc37e446b01920d8b82afeba36bfa4\" returns successfully" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.623 [WARNING][5460] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eee9e053-fb8a-4138-a624-424d81f26460", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"00d0887ad6d44dd5c76e5e126825cecd7765b6982d04953d208421c38a696561", Pod:"coredns-674b8bbfcf-k855v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali998b2968d08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:17.753530 containerd[1633]: 
2026-03-12 05:13:17.624 [INFO][5460] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.624 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" iface="eth0" netns="" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.625 [INFO][5460] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.625 [INFO][5460] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.720 [INFO][5482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.720 [INFO][5482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.720 [INFO][5482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.741 [WARNING][5482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.741 [INFO][5482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" HandleID="k8s-pod-network.954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k855v-eth0" Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.744 [INFO][5482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:17.753530 containerd[1633]: 2026-03-12 05:13:17.746 [INFO][5460] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e" Mar 12 05:13:17.753530 containerd[1633]: time="2026-03-12T05:13:17.750838474Z" level=info msg="TearDown network for sandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" successfully" Mar 12 05:13:17.763992 containerd[1633]: time="2026-03-12T05:13:17.763737995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:17.763992 containerd[1633]: time="2026-03-12T05:13:17.763853135Z" level=info msg="RemovePodSandbox \"954f645f383fadb150defe1a2a9e2e9adf09be8d0fa6266e39ddd6454c7f281e\" returns successfully" Mar 12 05:13:17.766341 containerd[1633]: time="2026-03-12T05:13:17.765196111Z" level=info msg="StopPodSandbox for \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\"" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.833 [WARNING][5513] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0", GenerateName:"calico-kube-controllers-74f6db979d-", Namespace:"calico-system", SelfLink:"", UID:"2bd6d5d1-effa-4aa5-8222-37da12221ee2", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f6db979d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84", Pod:"calico-kube-controllers-74f6db979d-5bgk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c0aa17a69a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.834 [INFO][5513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.834 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" iface="eth0" netns="" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.834 [INFO][5513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.834 [INFO][5513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.958 [INFO][5522] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.959 [INFO][5522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.960 [INFO][5522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.976 [WARNING][5522] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.976 [INFO][5522] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.982 [INFO][5522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:17.989967 containerd[1633]: 2026-03-12 05:13:17.984 [INFO][5513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:17.993201 containerd[1633]: time="2026-03-12T05:13:17.990632863Z" level=info msg="TearDown network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" successfully" Mar 12 05:13:17.993201 containerd[1633]: time="2026-03-12T05:13:17.990669100Z" level=info msg="StopPodSandbox for \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" returns successfully" Mar 12 05:13:17.994032 containerd[1633]: time="2026-03-12T05:13:17.993943101Z" level=info msg="RemovePodSandbox for \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\"" Mar 12 05:13:17.994032 containerd[1633]: time="2026-03-12T05:13:17.993988860Z" level=info msg="Forcibly stopping sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\"" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.073 [WARNING][5538] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0", GenerateName:"calico-kube-controllers-74f6db979d-", Namespace:"calico-system", SelfLink:"", UID:"2bd6d5d1-effa-4aa5-8222-37da12221ee2", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f6db979d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"a3ac6cd2eacb3297d566ae192825e7bd3746bce01f32ef1ea09b02801a55fc84", Pod:"calico-kube-controllers-74f6db979d-5bgk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c0aa17a69a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.073 [INFO][5538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.073 [INFO][5538] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" iface="eth0" netns="" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.073 [INFO][5538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.073 [INFO][5538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.113 [INFO][5545] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.113 [INFO][5545] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.113 [INFO][5545] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.125 [WARNING][5545] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.125 [INFO][5545] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" HandleID="k8s-pod-network.46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Workload="srv--ro1yv.gb1.brightbox.com-k8s-calico--kube--controllers--74f6db979d--5bgk7-eth0" Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.129 [INFO][5545] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:18.138330 containerd[1633]: 2026-03-12 05:13:18.134 [INFO][5538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef" Mar 12 05:13:18.138330 containerd[1633]: time="2026-03-12T05:13:18.137964927Z" level=info msg="TearDown network for sandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" successfully" Mar 12 05:13:18.144768 containerd[1633]: time="2026-03-12T05:13:18.144500574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:18.144768 containerd[1633]: time="2026-03-12T05:13:18.144653945Z" level=info msg="RemovePodSandbox \"46d91df2651d98e99b44dd6033e8870ead3edb37b3d4211e41e00eba4bd2caef\" returns successfully" Mar 12 05:13:18.146213 containerd[1633]: time="2026-03-12T05:13:18.145833179Z" level=info msg="StopPodSandbox for \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\"" Mar 12 05:13:18.267641 kubelet[2890]: I0312 05:13:18.255871 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d9476cfd5-4m7gh" podStartSLOduration=42.043634882 podStartE2EDuration="46.238712922s" podCreationTimestamp="2026-03-12 05:12:32 +0000 UTC" firstStartedPulling="2026-03-12 05:13:05.521927695 +0000 UTC m=+52.031650902" lastFinishedPulling="2026-03-12 05:13:09.71700573 +0000 UTC m=+56.226728942" observedRunningTime="2026-03-12 05:13:10.981938506 +0000 UTC m=+57.491661735" watchObservedRunningTime="2026-03-12 05:13:18.238712922 +0000 UTC m=+64.748436155" Mar 12 05:13:18.267641 kubelet[2890]: I0312 05:13:18.266902 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74f6db979d-5bgk7" podStartSLOduration=34.072360763 podStartE2EDuration="45.266882507s" podCreationTimestamp="2026-03-12 05:12:33 +0000 UTC" firstStartedPulling="2026-03-12 05:13:05.616714032 +0000 UTC m=+52.126437246" lastFinishedPulling="2026-03-12 05:13:16.811235766 +0000 UTC m=+63.320958990" observedRunningTime="2026-03-12 05:13:18.213024795 +0000 UTC m=+64.722748027" watchObservedRunningTime="2026-03-12 05:13:18.266882507 +0000 UTC m=+64.776605720" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.220 [WARNING][5559] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc68d85e-8701-44b5-913d-95c4533a5538", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6", Pod:"coredns-674b8bbfcf-ckbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c227c8a893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:18.294159 containerd[1633]: 
2026-03-12 05:13:18.222 [INFO][5559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.222 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" iface="eth0" netns="" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.222 [INFO][5559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.222 [INFO][5559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.276 [INFO][5567] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.276 [INFO][5567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.276 [INFO][5567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.287 [WARNING][5567] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.287 [INFO][5567] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.289 [INFO][5567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:18.294159 containerd[1633]: 2026-03-12 05:13:18.291 [INFO][5559] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.294159 containerd[1633]: time="2026-03-12T05:13:18.293983104Z" level=info msg="TearDown network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" successfully" Mar 12 05:13:18.294159 containerd[1633]: time="2026-03-12T05:13:18.294015619Z" level=info msg="StopPodSandbox for \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" returns successfully" Mar 12 05:13:18.297947 containerd[1633]: time="2026-03-12T05:13:18.295228171Z" level=info msg="RemovePodSandbox for \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\"" Mar 12 05:13:18.297947 containerd[1633]: time="2026-03-12T05:13:18.295264841Z" level=info msg="Forcibly stopping sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\"" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.404 [WARNING][5581] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc68d85e-8701-44b5-913d-95c4533a5538", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 5, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ro1yv.gb1.brightbox.com", ContainerID:"503b27eafaa236c1fc91cef36085cc8700e77a7cfe6c0c821e058bc659470df6", Pod:"coredns-674b8bbfcf-ckbxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c227c8a893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 05:13:18.505670 containerd[1633]: 
2026-03-12 05:13:18.405 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.405 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" iface="eth0" netns="" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.405 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.405 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.467 [INFO][5589] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.467 [INFO][5589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.467 [INFO][5589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.493 [WARNING][5589] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.493 [INFO][5589] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" HandleID="k8s-pod-network.52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Workload="srv--ro1yv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ckbxf-eth0" Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.497 [INFO][5589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:18.505670 containerd[1633]: 2026-03-12 05:13:18.501 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a" Mar 12 05:13:18.507756 containerd[1633]: time="2026-03-12T05:13:18.505756333Z" level=info msg="TearDown network for sandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" successfully" Mar 12 05:13:18.510601 containerd[1633]: time="2026-03-12T05:13:18.510566236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 05:13:18.510675 containerd[1633]: time="2026-03-12T05:13:18.510631985Z" level=info msg="RemovePodSandbox \"52603834621bbd63cf07311dfa22624c430a444dc802517f3fb0c560eedf2f4a\" returns successfully" Mar 12 05:13:18.544656 containerd[1633]: time="2026-03-12T05:13:18.544486902Z" level=info msg="StopPodSandbox for \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\"" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.615 [WARNING][5603] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.615 [INFO][5603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.615 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" iface="eth0" netns="" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.615 [INFO][5603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.615 [INFO][5603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.656 [INFO][5610] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.656 [INFO][5610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.656 [INFO][5610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.667 [WARNING][5610] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.667 [INFO][5610] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.669 [INFO][5610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:18.673483 containerd[1633]: 2026-03-12 05:13:18.671 [INFO][5603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.674727 containerd[1633]: time="2026-03-12T05:13:18.673899223Z" level=info msg="TearDown network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" successfully" Mar 12 05:13:18.674727 containerd[1633]: time="2026-03-12T05:13:18.673934267Z" level=info msg="StopPodSandbox for \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" returns successfully" Mar 12 05:13:18.675408 containerd[1633]: time="2026-03-12T05:13:18.675217358Z" level=info msg="RemovePodSandbox for \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\"" Mar 12 05:13:18.675408 containerd[1633]: time="2026-03-12T05:13:18.675256850Z" level=info msg="Forcibly stopping sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\"" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.764 [WARNING][5625] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" WorkloadEndpoint="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.764 [INFO][5625] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.764 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" iface="eth0" netns="" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.764 [INFO][5625] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.764 [INFO][5625] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.950 [INFO][5632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.951 [INFO][5632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.951 [INFO][5632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.974 [WARNING][5632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.974 [INFO][5632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" HandleID="k8s-pod-network.a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Workload="srv--ro1yv.gb1.brightbox.com-k8s-whisker--f6dd97758--ssf6z-eth0" Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.976 [INFO][5632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 05:13:18.988552 containerd[1633]: 2026-03-12 05:13:18.981 [INFO][5625] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94" Mar 12 05:13:18.988552 containerd[1633]: time="2026-03-12T05:13:18.988327071Z" level=info msg="TearDown network for sandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" successfully" Mar 12 05:13:18.995402 containerd[1633]: time="2026-03-12T05:13:18.995328963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 05:13:18.995402 containerd[1633]: time="2026-03-12T05:13:18.995403913Z" level=info msg="RemovePodSandbox \"a8014994eb55fb2eecd3d223655721d3035452ae9376cf85c15128c685188e94\" returns successfully" Mar 12 05:13:19.586782 systemd-journald[1180]: Under memory pressure, flushing caches. Mar 12 05:13:19.581660 systemd-resolved[1514]: Under memory pressure, flushing caches. Mar 12 05:13:19.581703 systemd-resolved[1514]: Flushed all caches. 
Mar 12 05:13:20.384365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739035807.mount: Deactivated successfully. Mar 12 05:13:21.181375 containerd[1633]: time="2026-03-12T05:13:21.181255748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:21.258157 containerd[1633]: time="2026-03-12T05:13:21.182960052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 05:13:21.271753 containerd[1633]: time="2026-03-12T05:13:21.271018921Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:21.275182 containerd[1633]: time="2026-03-12T05:13:21.275083404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:13:21.336043 containerd[1633]: time="2026-03-12T05:13:21.334977833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.448064258s" Mar 12 05:13:21.336043 containerd[1633]: time="2026-03-12T05:13:21.335127169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 05:13:21.405806 containerd[1633]: time="2026-03-12T05:13:21.405723780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 05:13:21.437522 containerd[1633]: 
time="2026-03-12T05:13:21.435837755Z" level=info msg="CreateContainer within sandbox \"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 12 05:13:21.481480 containerd[1633]: time="2026-03-12T05:13:21.481406459Z" level=info msg="CreateContainer within sandbox \"95dc2efa582c9bd021dec79e69d6a6ca9c86c3054af68506d205f3d629f87b58\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0ada49de67a215d3d6b0f27e98f42cba911f71d067e5a33ca539224f55d60dc7\""
Mar 12 05:13:21.483774 containerd[1633]: time="2026-03-12T05:13:21.483733833Z" level=info msg="StartContainer for \"0ada49de67a215d3d6b0f27e98f42cba911f71d067e5a33ca539224f55d60dc7\""
Mar 12 05:13:21.630728 systemd-journald[1180]: Under memory pressure, flushing caches.
Mar 12 05:13:21.628814 systemd-resolved[1514]: Under memory pressure, flushing caches.
Mar 12 05:13:21.628851 systemd-resolved[1514]: Flushed all caches.
Mar 12 05:13:21.687156 containerd[1633]: time="2026-03-12T05:13:21.684844549Z" level=info msg="StartContainer for \"0ada49de67a215d3d6b0f27e98f42cba911f71d067e5a33ca539224f55d60dc7\" returns successfully"
Mar 12 05:13:21.871862 containerd[1633]: time="2026-03-12T05:13:21.871699275Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:21.876548 containerd[1633]: time="2026-03-12T05:13:21.875460115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 12 05:13:21.879940 containerd[1633]: time="2026-03-12T05:13:21.879845906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 472.283337ms"
Mar 12 05:13:21.880172 containerd[1633]: time="2026-03-12T05:13:21.880142946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 12 05:13:21.881951 containerd[1633]: time="2026-03-12T05:13:21.881920751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 12 05:13:21.890433 containerd[1633]: time="2026-03-12T05:13:21.890337393Z" level=info msg="CreateContainer within sandbox \"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 12 05:13:21.909277 containerd[1633]: time="2026-03-12T05:13:21.909228850Z" level=info msg="CreateContainer within sandbox \"6faad9427bab0fcb0b6495469f34d335d8acaa19942876fe4f77bc8d41fe6610\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"684666d3fe3b6600d0197294ed3df1275c258711293176564366e94cfde692ee\""
Mar 12 05:13:21.910281 containerd[1633]: time="2026-03-12T05:13:21.910247850Z" level=info msg="StartContainer for \"684666d3fe3b6600d0197294ed3df1275c258711293176564366e94cfde692ee\""
Mar 12 05:13:22.033550 containerd[1633]: time="2026-03-12T05:13:22.033364254Z" level=info msg="StartContainer for \"684666d3fe3b6600d0197294ed3df1275c258711293176564366e94cfde692ee\" returns successfully"
Mar 12 05:13:22.573787 kubelet[2890]: I0312 05:13:22.573376 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-wp76k" podStartSLOduration=34.802882768 podStartE2EDuration="50.572378492s" podCreationTimestamp="2026-03-12 05:12:32 +0000 UTC" firstStartedPulling="2026-03-12 05:13:05.616753514 +0000 UTC m=+52.126476727" lastFinishedPulling="2026-03-12 05:13:21.386249227 +0000 UTC m=+67.895972451" observedRunningTime="2026-03-12 05:13:22.568634433 +0000 UTC m=+69.078357672" watchObservedRunningTime="2026-03-12 05:13:22.572378492 +0000 UTC m=+69.082101720"
Mar 12 05:13:23.442680 kubelet[2890]: I0312 05:13:23.442029 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 05:13:23.680961 systemd-journald[1180]: Under memory pressure, flushing caches.
Mar 12 05:13:23.679024 systemd-resolved[1514]: Under memory pressure, flushing caches.
Mar 12 05:13:23.679075 systemd-resolved[1514]: Flushed all caches.
Mar 12 05:13:23.784416 containerd[1633]: time="2026-03-12T05:13:23.783349227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:23.788067 containerd[1633]: time="2026-03-12T05:13:23.786633336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 12 05:13:23.788471 containerd[1633]: time="2026-03-12T05:13:23.788264947Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:23.807236 containerd[1633]: time="2026-03-12T05:13:23.806311382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:23.808360 containerd[1633]: time="2026-03-12T05:13:23.808288829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.925725193s"
Mar 12 05:13:23.808687 containerd[1633]: time="2026-03-12T05:13:23.808432384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 12 05:13:23.813589 containerd[1633]: time="2026-03-12T05:13:23.812829530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 12 05:13:23.828429 containerd[1633]: time="2026-03-12T05:13:23.828190008Z" level=info msg="CreateContainer within sandbox \"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 12 05:13:23.854282 containerd[1633]: time="2026-03-12T05:13:23.854123954Z" level=info msg="CreateContainer within sandbox \"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"72634e2f7bdde27bd63dd47c0a843590cf2ba2fb83a0af62c4c90e02eb1935ce\""
Mar 12 05:13:23.874926 containerd[1633]: time="2026-03-12T05:13:23.874876742Z" level=info msg="StartContainer for \"72634e2f7bdde27bd63dd47c0a843590cf2ba2fb83a0af62c4c90e02eb1935ce\""
Mar 12 05:13:24.069066 containerd[1633]: time="2026-03-12T05:13:24.068903367Z" level=info msg="StartContainer for \"72634e2f7bdde27bd63dd47c0a843590cf2ba2fb83a0af62c4c90e02eb1935ce\" returns successfully"
Mar 12 05:13:25.919060 containerd[1633]: time="2026-03-12T05:13:25.918868511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:25.920465 containerd[1633]: time="2026-03-12T05:13:25.920293195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 12 05:13:25.922447 containerd[1633]: time="2026-03-12T05:13:25.922413524Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:25.925552 containerd[1633]: time="2026-03-12T05:13:25.925460364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:25.926701 containerd[1633]: time="2026-03-12T05:13:25.926661358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.113773119s"
Mar 12 05:13:25.927398 containerd[1633]: time="2026-03-12T05:13:25.926706502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 12 05:13:25.929519 containerd[1633]: time="2026-03-12T05:13:25.929142741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 12 05:13:25.941535 containerd[1633]: time="2026-03-12T05:13:25.941462115Z" level=info msg="CreateContainer within sandbox \"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 12 05:13:25.968181 containerd[1633]: time="2026-03-12T05:13:25.967676613Z" level=info msg="CreateContainer within sandbox \"a6c421c9e3efa94295a1903e340390ce54b44a077c1bebef81257238f1a2dd3c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a6f02f59aabb593ac5b016db2a1947d79c7fc04431bd104691743943a3bd508\""
Mar 12 05:13:25.972582 containerd[1633]: time="2026-03-12T05:13:25.972361276Z" level=info msg="StartContainer for \"1a6f02f59aabb593ac5b016db2a1947d79c7fc04431bd104691743943a3bd508\""
Mar 12 05:13:26.133696 containerd[1633]: time="2026-03-12T05:13:26.133640114Z" level=info msg="StartContainer for \"1a6f02f59aabb593ac5b016db2a1947d79c7fc04431bd104691743943a3bd508\" returns successfully"
Mar 12 05:13:26.539387 kubelet[2890]: I0312 05:13:26.539273 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d9476cfd5-frq46" podStartSLOduration=38.274665621 podStartE2EDuration="54.539241273s" podCreationTimestamp="2026-03-12 05:12:32 +0000 UTC" firstStartedPulling="2026-03-12 05:13:05.616824193 +0000 UTC m=+52.126547413" lastFinishedPulling="2026-03-12 05:13:21.881399852 +0000 UTC m=+68.391123065" observedRunningTime="2026-03-12 05:13:22.621936665 +0000 UTC m=+69.131659890" watchObservedRunningTime="2026-03-12 05:13:26.539241273 +0000 UTC m=+73.048964479"
Mar 12 05:13:27.226699 kubelet[2890]: I0312 05:13:27.225604 2890 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 12 05:13:27.229073 kubelet[2890]: I0312 05:13:27.229018 2890 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 12 05:13:28.317054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738475707.mount: Deactivated successfully.
Mar 12 05:13:28.371065 containerd[1633]: time="2026-03-12T05:13:28.370926824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:28.373327 containerd[1633]: time="2026-03-12T05:13:28.373214363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 12 05:13:28.374722 containerd[1633]: time="2026-03-12T05:13:28.374679648Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:28.379031 containerd[1633]: time="2026-03-12T05:13:28.378520508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:13:28.382113 containerd[1633]: time="2026-03-12T05:13:28.381098218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.451908903s"
Mar 12 05:13:28.382113 containerd[1633]: time="2026-03-12T05:13:28.381620485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 12 05:13:28.389010 containerd[1633]: time="2026-03-12T05:13:28.388942123Z" level=info msg="CreateContainer within sandbox \"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 12 05:13:28.423257 containerd[1633]: time="2026-03-12T05:13:28.423198008Z" level=info msg="CreateContainer within sandbox \"5112c1e064c665a885c2a864ac3ad68848eaf7f3fcd932b688c95a5493ec081c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"758c0c805e6b60ee20c2963edfa9f9e683bf1f55bfa28459a474051c95fcd467\""
Mar 12 05:13:28.424538 containerd[1633]: time="2026-03-12T05:13:28.424155107Z" level=info msg="StartContainer for \"758c0c805e6b60ee20c2963edfa9f9e683bf1f55bfa28459a474051c95fcd467\""
Mar 12 05:13:28.634864 containerd[1633]: time="2026-03-12T05:13:28.632824228Z" level=info msg="StartContainer for \"758c0c805e6b60ee20c2963edfa9f9e683bf1f55bfa28459a474051c95fcd467\" returns successfully"
Mar 12 05:13:29.546535 kubelet[2890]: I0312 05:13:29.543166 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5bhzw" podStartSLOduration=36.230444998 podStartE2EDuration="56.543122422s" podCreationTimestamp="2026-03-12 05:12:33 +0000 UTC" firstStartedPulling="2026-03-12 05:13:05.616174221 +0000 UTC m=+52.125897427" lastFinishedPulling="2026-03-12 05:13:25.928851628 +0000 UTC m=+72.438574851" observedRunningTime="2026-03-12 05:13:26.542137664 +0000 UTC m=+73.051860907" watchObservedRunningTime="2026-03-12 05:13:29.543122422 +0000 UTC m=+76.052845681"
Mar 12 05:13:29.902969 systemd[1]: Started sshd@10-10.230.44.138:22-20.161.92.111:48528.service - OpenSSH per-connection server daemon (20.161.92.111:48528).
Mar 12 05:13:30.573563 sshd[5987]: Accepted publickey for core from 20.161.92.111 port 48528 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:30.576001 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:30.602893 systemd-logind[1604]: New session 12 of user core.
Mar 12 05:13:30.617178 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 05:13:31.687316 sshd[5987]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:31.705636 systemd[1]: sshd@10-10.230.44.138:22-20.161.92.111:48528.service: Deactivated successfully.
Mar 12 05:13:31.732624 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 05:13:31.738834 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit.
Mar 12 05:13:31.751650 systemd-logind[1604]: Removed session 12.
Mar 12 05:13:31.880417 kubelet[2890]: I0312 05:13:31.877958 2890 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f9c5dd5f-h6rph" podStartSLOduration=8.498859659 podStartE2EDuration="30.877904146s" podCreationTimestamp="2026-03-12 05:13:01 +0000 UTC" firstStartedPulling="2026-03-12 05:13:06.004778506 +0000 UTC m=+52.514501720" lastFinishedPulling="2026-03-12 05:13:28.383822987 +0000 UTC m=+74.893546207" observedRunningTime="2026-03-12 05:13:29.544910326 +0000 UTC m=+76.054633559" watchObservedRunningTime="2026-03-12 05:13:31.877904146 +0000 UTC m=+78.387627366"
Mar 12 05:13:36.789227 systemd[1]: Started sshd@11-10.230.44.138:22-20.161.92.111:46914.service - OpenSSH per-connection server daemon (20.161.92.111:46914).
Mar 12 05:13:37.374905 systemd-journald[1180]: Under memory pressure, flushing caches.
Mar 12 05:13:37.374410 systemd-resolved[1514]: Under memory pressure, flushing caches.
Mar 12 05:13:37.374446 systemd-resolved[1514]: Flushed all caches.
Mar 12 05:13:37.424089 sshd[6028]: Accepted publickey for core from 20.161.92.111 port 46914 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:37.428115 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:37.436973 systemd-logind[1604]: New session 13 of user core.
Mar 12 05:13:37.441985 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 05:13:38.145415 sshd[6028]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:38.151030 systemd[1]: sshd@11-10.230.44.138:22-20.161.92.111:46914.service: Deactivated successfully.
Mar 12 05:13:38.156177 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit.
Mar 12 05:13:38.157032 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 05:13:38.159759 systemd-logind[1604]: Removed session 13.
Mar 12 05:13:42.167940 kubelet[2890]: I0312 05:13:42.161087 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 05:13:43.247084 systemd[1]: Started sshd@12-10.230.44.138:22-20.161.92.111:53264.service - OpenSSH per-connection server daemon (20.161.92.111:53264).
Mar 12 05:13:43.853993 sshd[6046]: Accepted publickey for core from 20.161.92.111 port 53264 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:43.857540 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:43.868844 systemd-logind[1604]: New session 14 of user core.
Mar 12 05:13:43.877951 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 05:13:44.488595 sshd[6046]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:44.493926 systemd[1]: sshd@12-10.230.44.138:22-20.161.92.111:53264.service: Deactivated successfully.
Mar 12 05:13:44.501201 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit.
Mar 12 05:13:44.501951 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 05:13:44.505321 systemd-logind[1604]: Removed session 14.
Mar 12 05:13:45.392577 kubelet[2890]: I0312 05:13:45.391778 2890 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 05:13:49.580916 systemd[1]: Started sshd@13-10.230.44.138:22-20.161.92.111:53278.service - OpenSSH per-connection server daemon (20.161.92.111:53278).
Mar 12 05:13:50.182646 sshd[6098]: Accepted publickey for core from 20.161.92.111 port 53278 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:50.188099 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:50.197976 systemd-logind[1604]: New session 15 of user core.
Mar 12 05:13:50.204134 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 05:13:50.851721 sshd[6098]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:50.861288 systemd[1]: sshd@13-10.230.44.138:22-20.161.92.111:53278.service: Deactivated successfully.
Mar 12 05:13:50.865714 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit.
Mar 12 05:13:50.866561 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 05:13:50.873032 systemd-logind[1604]: Removed session 15.
Mar 12 05:13:55.951862 systemd[1]: Started sshd@14-10.230.44.138:22-20.161.92.111:47414.service - OpenSSH per-connection server daemon (20.161.92.111:47414).
Mar 12 05:13:56.566003 sshd[6152]: Accepted publickey for core from 20.161.92.111 port 47414 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:56.568852 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:56.581370 systemd-logind[1604]: New session 16 of user core.
Mar 12 05:13:56.586935 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 05:13:57.280177 sshd[6152]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:57.289598 systemd[1]: sshd@14-10.230.44.138:22-20.161.92.111:47414.service: Deactivated successfully.
Mar 12 05:13:57.295035 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 05:13:57.298857 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit.
Mar 12 05:13:57.301246 systemd-logind[1604]: Removed session 16.
Mar 12 05:13:57.381719 systemd[1]: Started sshd@15-10.230.44.138:22-20.161.92.111:47424.service - OpenSSH per-connection server daemon (20.161.92.111:47424).
Mar 12 05:13:57.953643 sshd[6166]: Accepted publickey for core from 20.161.92.111 port 47424 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:57.956700 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:57.963950 systemd-logind[1604]: New session 17 of user core.
Mar 12 05:13:57.970938 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 05:13:58.571271 sshd[6166]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:58.583445 systemd[1]: sshd@15-10.230.44.138:22-20.161.92.111:47424.service: Deactivated successfully.
Mar 12 05:13:58.592214 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit.
Mar 12 05:13:58.595805 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 05:13:58.598177 systemd-logind[1604]: Removed session 17.
Mar 12 05:13:58.665960 systemd[1]: Started sshd@16-10.230.44.138:22-20.161.92.111:47440.service - OpenSSH per-connection server daemon (20.161.92.111:47440).
Mar 12 05:13:59.223894 sshd[6178]: Accepted publickey for core from 20.161.92.111 port 47440 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:13:59.229116 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:13:59.240673 systemd-logind[1604]: New session 18 of user core.
Mar 12 05:13:59.247024 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 05:13:59.893671 sshd[6178]: pam_unix(sshd:session): session closed for user core
Mar 12 05:13:59.900229 systemd[1]: sshd@16-10.230.44.138:22-20.161.92.111:47440.service: Deactivated successfully.
Mar 12 05:13:59.916220 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 05:13:59.917138 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit.
Mar 12 05:13:59.921814 systemd-logind[1604]: Removed session 18.
Mar 12 05:14:04.993970 systemd[1]: Started sshd@17-10.230.44.138:22-20.161.92.111:35468.service - OpenSSH per-connection server daemon (20.161.92.111:35468).
Mar 12 05:14:05.583641 sshd[6231]: Accepted publickey for core from 20.161.92.111 port 35468 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:05.588528 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:05.600409 systemd-logind[1604]: New session 19 of user core.
Mar 12 05:14:05.605970 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 05:14:06.375963 sshd[6231]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:06.385164 systemd[1]: sshd@17-10.230.44.138:22-20.161.92.111:35468.service: Deactivated successfully.
Mar 12 05:14:06.391589 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 05:14:06.393221 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit.
Mar 12 05:14:06.395265 systemd-logind[1604]: Removed session 19.
Mar 12 05:14:06.468847 systemd[1]: Started sshd@18-10.230.44.138:22-20.161.92.111:35476.service - OpenSSH per-connection server daemon (20.161.92.111:35476).
Mar 12 05:14:07.032344 sshd[6245]: Accepted publickey for core from 20.161.92.111 port 35476 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:07.036282 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:07.044175 systemd-logind[1604]: New session 20 of user core.
Mar 12 05:14:07.050212 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 05:14:07.827438 sshd[6245]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:07.833894 systemd[1]: sshd@18-10.230.44.138:22-20.161.92.111:35476.service: Deactivated successfully.
Mar 12 05:14:07.841014 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 05:14:07.841069 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit.
Mar 12 05:14:07.845371 systemd-logind[1604]: Removed session 20.
Mar 12 05:14:07.926378 systemd[1]: Started sshd@19-10.230.44.138:22-20.161.92.111:35480.service - OpenSSH per-connection server daemon (20.161.92.111:35480).
Mar 12 05:14:08.489138 sshd[6257]: Accepted publickey for core from 20.161.92.111 port 35480 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:08.491539 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:08.500037 systemd-logind[1604]: New session 21 of user core.
Mar 12 05:14:08.507119 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 05:14:09.436722 systemd-resolved[1514]: Under memory pressure, flushing caches.
Mar 12 05:14:09.441838 systemd-journald[1180]: Under memory pressure, flushing caches.
Mar 12 05:14:09.436745 systemd-resolved[1514]: Flushed all caches.
Mar 12 05:14:09.924933 sshd[6257]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:09.943416 systemd[1]: sshd@19-10.230.44.138:22-20.161.92.111:35480.service: Deactivated successfully.
Mar 12 05:14:09.960672 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit.
Mar 12 05:14:09.962072 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 05:14:09.966374 systemd-logind[1604]: Removed session 21.
Mar 12 05:14:10.015914 systemd[1]: Started sshd@20-10.230.44.138:22-20.161.92.111:35486.service - OpenSSH per-connection server daemon (20.161.92.111:35486).
Mar 12 05:14:10.604560 sshd[6284]: Accepted publickey for core from 20.161.92.111 port 35486 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:10.606282 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:10.614469 systemd-logind[1604]: New session 22 of user core.
Mar 12 05:14:10.626088 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 05:14:11.486526 systemd-journald[1180]: Under memory pressure, flushing caches.
Mar 12 05:14:11.486572 systemd-resolved[1514]: Under memory pressure, flushing caches.
Mar 12 05:14:11.486583 systemd-resolved[1514]: Flushed all caches.
Mar 12 05:14:11.909189 sshd[6284]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:11.915678 systemd[1]: sshd@20-10.230.44.138:22-20.161.92.111:35486.service: Deactivated successfully.
Mar 12 05:14:11.923528 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 05:14:11.924710 systemd-logind[1604]: Session 22 logged out. Waiting for processes to exit.
Mar 12 05:14:11.928439 systemd-logind[1604]: Removed session 22.
Mar 12 05:14:12.003375 systemd[1]: Started sshd@21-10.230.44.138:22-20.161.92.111:38494.service - OpenSSH per-connection server daemon (20.161.92.111:38494).
Mar 12 05:14:12.577430 sshd[6296]: Accepted publickey for core from 20.161.92.111 port 38494 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:12.580410 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:12.590500 systemd-logind[1604]: New session 23 of user core.
Mar 12 05:14:12.598038 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 05:14:13.162286 sshd[6296]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:13.167636 systemd[1]: sshd@21-10.230.44.138:22-20.161.92.111:38494.service: Deactivated successfully.
Mar 12 05:14:13.174017 systemd-logind[1604]: Session 23 logged out. Waiting for processes to exit.
Mar 12 05:14:13.175246 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 05:14:13.176855 systemd-logind[1604]: Removed session 23.
Mar 12 05:14:18.261862 systemd[1]: Started sshd@22-10.230.44.138:22-20.161.92.111:38500.service - OpenSSH per-connection server daemon (20.161.92.111:38500).
Mar 12 05:14:18.856425 sshd[6333]: Accepted publickey for core from 20.161.92.111 port 38500 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:18.861075 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:18.869205 systemd-logind[1604]: New session 24 of user core.
Mar 12 05:14:18.874915 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 12 05:14:19.535117 sshd[6333]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:19.541703 systemd-logind[1604]: Session 24 logged out. Waiting for processes to exit.
Mar 12 05:14:19.542262 systemd[1]: sshd@22-10.230.44.138:22-20.161.92.111:38500.service: Deactivated successfully.
Mar 12 05:14:19.550424 systemd[1]: session-24.scope: Deactivated successfully.
Mar 12 05:14:19.554727 systemd-logind[1604]: Removed session 24.
Mar 12 05:14:24.631362 systemd[1]: Started sshd@23-10.230.44.138:22-20.161.92.111:51352.service - OpenSSH per-connection server daemon (20.161.92.111:51352).
Mar 12 05:14:25.217993 sshd[6388]: Accepted publickey for core from 20.161.92.111 port 51352 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:14:25.221398 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:14:25.229410 systemd-logind[1604]: New session 25 of user core.
Mar 12 05:14:25.234643 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 05:14:25.986870 sshd[6388]: pam_unix(sshd:session): session closed for user core
Mar 12 05:14:25.991721 systemd[1]: sshd@23-10.230.44.138:22-20.161.92.111:51352.service: Deactivated successfully.
Mar 12 05:14:25.997609 systemd-logind[1604]: Session 25 logged out. Waiting for processes to exit.
Mar 12 05:14:25.998887 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 05:14:26.002962 systemd-logind[1604]: Removed session 25.