Mar 7 01:45:12.063994 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:45:12.064035 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:45:12.064050 kernel: BIOS-provided physical RAM map:
Mar 7 01:45:12.064066 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:45:12.064077 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:45:12.064087 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:45:12.064098 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 7 01:45:12.064109 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 7 01:45:12.064120 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:45:12.064130 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:45:12.064141 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:45:12.064151 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:45:12.064175 kernel: NX (Execute Disable) protection: active
Mar 7 01:45:12.064199 kernel: APIC: Static calls initialized
Mar 7 01:45:12.064213 kernel: SMBIOS 2.8 present.
Mar 7 01:45:12.064230 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 7 01:45:12.064243 kernel: Hypervisor detected: KVM
Mar 7 01:45:12.064261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:45:12.064273 kernel: kvm-clock: using sched offset of 5282212531 cycles
Mar 7 01:45:12.064285 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:45:12.064297 kernel: tsc: Detected 2499.998 MHz processor
Mar 7 01:45:12.064309 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:45:12.064321 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:45:12.064333 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 7 01:45:12.064345 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:45:12.064356 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:45:12.064373 kernel: Using GB pages for direct mapping
Mar 7 01:45:12.064385 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:45:12.064397 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 7 01:45:12.064408 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064420 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064431 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064464 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 7 01:45:12.064478 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064490 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064509 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064521 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:45:12.064533 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 7 01:45:12.064544 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 7 01:45:12.064556 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 7 01:45:12.064575 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 7 01:45:12.064588 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 7 01:45:12.064605 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 7 01:45:12.064618 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 7 01:45:12.064636 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:45:12.064649 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:45:12.064662 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 7 01:45:12.064674 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 7 01:45:12.064685 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 7 01:45:12.064697 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 7 01:45:12.064716 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 7 01:45:12.064728 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 7 01:45:12.064740 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 7 01:45:12.064752 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 7 01:45:12.064764 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 7 01:45:12.064775 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 7 01:45:12.064787 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 7 01:45:12.064799 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 7 01:45:12.064817 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 7 01:45:12.064836 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 7 01:45:12.064848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 7 01:45:12.064860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 7 01:45:12.064873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 7 01:45:12.064885 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 7 01:45:12.064898 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 7 01:45:12.064911 kernel: Zone ranges:
Mar 7 01:45:12.064924 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:45:12.064936 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 7 01:45:12.064953 kernel:   Normal   empty
Mar 7 01:45:12.064965 kernel: Movable zone start for each node
Mar 7 01:45:12.064978 kernel: Early memory node ranges
Mar 7 01:45:12.064990 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:45:12.065002 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 7 01:45:12.065014 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 7 01:45:12.065026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:45:12.065038 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:45:12.065056 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 7 01:45:12.065069 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:45:12.065088 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:45:12.065101 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:45:12.065113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:45:12.065125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:45:12.065138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:45:12.065150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:45:12.065162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:45:12.065174 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:45:12.065198 kernel: TSC deadline timer available
Mar 7 01:45:12.065219 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 7 01:45:12.065232 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:45:12.065244 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:45:12.065256 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:45:12.065268 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:45:12.065281 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 7 01:45:12.065293 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 7 01:45:12.065305 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 7 01:45:12.065317 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 7 01:45:12.065335 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:45:12.065347 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:45:12.065360 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:45:12.065373 kernel: random: crng init done
Mar 7 01:45:12.065385 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:45:12.065398 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:45:12.065410 kernel: Fallback order for Node 0: 0
Mar 7 01:45:12.065422 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 7 01:45:12.065440 kernel: Policy zone: DMA32
Mar 7 01:45:12.067492 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:45:12.067507 kernel: software IO TLB: area num 16.
Mar 7 01:45:12.067521 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194764K reserved, 0K cma-reserved)
Mar 7 01:45:12.067534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 7 01:45:12.067546 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:45:12.067559 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:45:12.067571 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:45:12.067583 kernel: Dynamic Preempt: voluntary
Mar 7 01:45:12.067604 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:45:12.067618 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:45:12.067630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 7 01:45:12.067643 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:45:12.067656 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:45:12.067682 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:45:12.067700 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:45:12.067713 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 7 01:45:12.067726 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 7 01:45:12.067739 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:45:12.067752 kernel: Console: colour VGA+ 80x25
Mar 7 01:45:12.067765 kernel: printk: console [tty0] enabled
Mar 7 01:45:12.067783 kernel: printk: console [ttyS0] enabled
Mar 7 01:45:12.067796 kernel: ACPI: Core revision 20230628
Mar 7 01:45:12.067809 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:45:12.067822 kernel: x2apic enabled
Mar 7 01:45:12.067835 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:45:12.067853 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 7 01:45:12.067872 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 7 01:45:12.067887 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:45:12.067900 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 7 01:45:12.067913 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 7 01:45:12.067925 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:45:12.067938 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:45:12.067951 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:45:12.067964 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 7 01:45:12.067976 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:45:12.067995 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:45:12.068008 kernel: MDS: Mitigation: Clear CPU buffers
Mar 7 01:45:12.068021 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 7 01:45:12.068033 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 7 01:45:12.068046 kernel: active return thunk: its_return_thunk
Mar 7 01:45:12.068059 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:45:12.068072 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:45:12.068084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:45:12.068097 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:45:12.068110 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:45:12.068128 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 7 01:45:12.068146 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:45:12.068160 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:45:12.068173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:45:12.068197 kernel: landlock: Up and running.
Mar 7 01:45:12.068212 kernel: SELinux: Initializing.
Mar 7 01:45:12.068225 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:45:12.068237 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:45:12.068250 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 7 01:45:12.068263 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 7 01:45:12.068276 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 7 01:45:12.068295 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 7 01:45:12.068309 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 7 01:45:12.068322 kernel: signal: max sigframe size: 1776
Mar 7 01:45:12.068335 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:45:12.068348 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:45:12.068361 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:45:12.068374 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:45:12.068387 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:45:12.068400 kernel: .... node #0, CPUs: #1
Mar 7 01:45:12.068418 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 7 01:45:12.068431 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:45:12.068460 kernel: smpboot: Max logical packages: 16
Mar 7 01:45:12.068476 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 7 01:45:12.068489 kernel: devtmpfs: initialized
Mar 7 01:45:12.068502 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:45:12.068515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:45:12.068528 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 7 01:45:12.068541 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:45:12.068561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:45:12.068575 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:45:12.068588 kernel: audit: type=2000 audit(1772847910.790:1): state=initialized audit_enabled=0 res=1
Mar 7 01:45:12.068600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:45:12.068613 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:45:12.068626 kernel: cpuidle: using governor menu
Mar 7 01:45:12.068639 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:45:12.068652 kernel: dca service started, version 1.12.1
Mar 7 01:45:12.068665 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:45:12.068683 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:45:12.068697 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:45:12.068710 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:45:12.068723 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:45:12.068736 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:45:12.068749 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:45:12.068761 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:45:12.068774 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:45:12.068787 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:45:12.068805 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:45:12.068818 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:45:12.068831 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:45:12.068844 kernel: ACPI: Interpreter enabled
Mar 7 01:45:12.068856 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:45:12.068869 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:45:12.068882 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:45:12.068895 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:45:12.068907 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:45:12.068926 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:45:12.069270 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:45:12.070183 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 01:45:12.070396 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 01:45:12.070417 kernel: PCI host bridge to bus 0000:00
Mar 7 01:45:12.070645 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:45:12.070823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:45:12.071005 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:45:12.071172 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:45:12.071359 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:45:12.072365 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 7 01:45:12.072644 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:45:12.072888 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:45:12.073124 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 7 01:45:12.073338 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 7 01:45:12.073584 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 7 01:45:12.073772 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 7 01:45:12.073957 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:45:12.074167 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.074764 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 7 01:45:12.074997 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.075201 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 7 01:45:12.075431 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.075671 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 7 01:45:12.075906 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.076094 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 7 01:45:12.076337 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.076542 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 7 01:45:12.076750 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.076940 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 7 01:45:12.077149 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.077351 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 7 01:45:12.077596 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 7 01:45:12.077791 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 7 01:45:12.078019 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:45:12.078224 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:45:12.078416 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 7 01:45:12.079035 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 7 01:45:12.079239 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 7 01:45:12.079479 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:45:12.079671 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:45:12.079857 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 7 01:45:12.080041 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 7 01:45:12.080254 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:45:12.080827 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:45:12.081080 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:45:12.081321 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 7 01:45:12.081593 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 7 01:45:12.081793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:45:12.081982 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:45:12.083683 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 7 01:45:12.083887 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 7 01:45:12.084091 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 7 01:45:12.084292 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 7 01:45:12.089609 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 7 01:45:12.089867 kernel: pci_bus 0000:02: extended config space not accessible
Mar 7 01:45:12.090096 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 7 01:45:12.090319 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 7 01:45:12.090560 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 7 01:45:12.090758 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 7 01:45:12.090995 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 7 01:45:12.091207 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 7 01:45:12.091404 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 7 01:45:12.093672 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 7 01:45:12.093878 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 7 01:45:12.094111 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 7 01:45:12.094345 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 7 01:45:12.096280 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 7 01:45:12.097542 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 7 01:45:12.097754 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 7 01:45:12.097954 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 7 01:45:12.098143 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 7 01:45:12.098344 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 7 01:45:12.099604 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 7 01:45:12.099801 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 7 01:45:12.100014 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 7 01:45:12.100225 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 7 01:45:12.100417 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 7 01:45:12.100627 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 7 01:45:12.100823 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 7 01:45:12.101011 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 7 01:45:12.101222 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 7 01:45:12.101419 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 7 01:45:12.103688 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 7 01:45:12.103890 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 7 01:45:12.103912 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:45:12.103926 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:45:12.103940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:45:12.103954 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:45:12.103967 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:45:12.103990 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:45:12.104003 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:45:12.104017 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:45:12.104030 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:45:12.104044 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:45:12.104057 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:45:12.104071 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:45:12.104084 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:45:12.104097 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:45:12.104116 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:45:12.104129 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:45:12.104143 kernel: iommu: Default domain type: Translated
Mar 7 01:45:12.104156 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:45:12.104169 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:45:12.104183 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:45:12.104209 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:45:12.104222 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 7 01:45:12.104420 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:45:12.104628 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:45:12.104814 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:45:12.104835 kernel: vgaarb: loaded
Mar 7 01:45:12.104848 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:45:12.104875 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:45:12.104890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:45:12.104903 kernel: pnp: PnP ACPI init
Mar 7 01:45:12.105127 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:45:12.105159 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:45:12.105173 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:45:12.105197 kernel: NET: Registered PF_INET protocol family
Mar 7 01:45:12.105212 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:45:12.105226 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 7 01:45:12.105240 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:45:12.105253 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:45:12.105266 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 7 01:45:12.105286 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 7 01:45:12.105300 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:45:12.105313 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:45:12.105327 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:45:12.105340 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:45:12.107584 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 7 01:45:12.107794 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 7 01:45:12.107994 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 7 01:45:12.108221 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 7 01:45:12.108419 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 7 01:45:12.109659 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 7 01:45:12.109856 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 7 01:45:12.110046 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 7 01:45:12.110250 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 7 01:45:12.111480 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 7 01:45:12.111690 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 7 01:45:12.111899 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 7 01:45:12.112096 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 7 01:45:12.112303 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 7 01:45:12.112528 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 7 01:45:12.112715 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 7 01:45:12.112912 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 7 01:45:12.113139 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 7 01:45:12.113341 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 7 01:45:12.115588 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 7 01:45:12.115787 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 7 01:45:12.115978 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 7 01:45:12.116169 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 7 01:45:12.116371 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 7 01:45:12.118614 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 7 01:45:12.118825 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 7 01:45:12.119018 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 7 01:45:12.119230 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 7 01:45:12.119418 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 7 01:45:12.119654 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 7 01:45:12.119844 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 7 01:45:12.120040 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 7 01:45:12.120241 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 7 01:45:12.120427 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 7 01:45:12.126195 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 7 01:45:12.126415 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 7 01:45:12.126682 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 7 01:45:12.126871 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 7 01:45:12.127064 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 7 01:45:12.127267 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 7 01:45:12.127492 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 7 01:45:12.127679 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 7 01:45:12.127874 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 7 01:45:12.128060 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 7 01:45:12.128261 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 7 01:45:12.128493 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 7 01:45:12.128692 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 7 01:45:12.128885 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 7 01:45:12.129077 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 7 01:45:12.129285 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 7 01:45:12.129496 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:45:12.129668 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:45:12.129835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:45:12.130006 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:45:12.130200 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:45:12.130376 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 7 01:45:12.131647 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 7 01:45:12.131829 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 7 01:45:12.132004 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 7 01:45:12.132209 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 7 01:45:12.132427 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 7 01:45:12.135912 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 7 01:45:12.136093 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 7 01:45:12.136304 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 7 01:45:12.136531 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 7 01:45:12.136712 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 7 01:45:12.136917 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 7 01:45:12.137109 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 7 01:45:12.137303 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 7 01:45:12.137560 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 7 01:45:12.137742 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 7 01:45:12.137920 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 7 01:45:12.138146 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 7 01:45:12.138341 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 7 01:45:12.138576 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 7 01:45:12.138796 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 7 01:45:12.138994 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 7 01:45:12.139173 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 7 01:45:12.139388 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 7 01:45:12.139595 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 7 01:45:12.139769 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 7 01:45:12.139799 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:45:12.139815 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:45:12.139830 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:45:12.139844 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Mar 7 01:45:12.139858 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:45:12.139872 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 7 01:45:12.139886 kernel: Initialise system trusted keyrings Mar 7 01:45:12.139901 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 7 01:45:12.139920 kernel: Key type asymmetric registered Mar 7 01:45:12.139935 kernel: Asymmetric key parser 'x509' registered Mar 7 01:45:12.139949 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:45:12.139963 kernel: io scheduler mq-deadline registered Mar 7 01:45:12.139977 kernel: io scheduler kyber registered Mar 7 01:45:12.139990 kernel: io scheduler bfq registered Mar 7 01:45:12.140219 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 7 01:45:12.140412 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 7 01:45:12.142658 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.142872 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 7 01:45:12.143062 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 7 01:45:12.143264 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.144493 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 7 01:45:12.144693 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 7 01:45:12.144882 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.145086 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 7 01:45:12.145287 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 7 01:45:12.146506 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.146717 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Mar 7 01:45:12.146905 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 7 01:45:12.147095 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.147308 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 7 01:45:12.147526 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 7 01:45:12.147712 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.147901 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 7 01:45:12.148086 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 7 01:45:12.148284 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.150525 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 7 01:45:12.150722 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 7 01:45:12.150911 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 01:45:12.150934 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:45:12.150950 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:45:12.150964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 01:45:12.150988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:45:12.151003 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:45:12.151017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:45:12.151031 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:45:12.151045 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:45:12.151277 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 7 01:45:12.152513 kernel: rtc_cmos 00:03: 
registered as rtc0 Mar 7 01:45:12.152697 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:45:11 UTC (1772847911) Mar 7 01:45:12.152882 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 7 01:45:12.152904 kernel: intel_pstate: CPU model not supported Mar 7 01:45:12.152919 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 01:45:12.152933 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:45:12.152947 kernel: Segment Routing with IPv6 Mar 7 01:45:12.152961 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:45:12.152975 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:45:12.152989 kernel: Key type dns_resolver registered Mar 7 01:45:12.153003 kernel: IPI shorthand broadcast: enabled Mar 7 01:45:12.153026 kernel: sched_clock: Marking stable (1719004665, 227536151)->(2128251232, -181710416) Mar 7 01:45:12.153040 kernel: registered taskstats version 1 Mar 7 01:45:12.153054 kernel: Loading compiled-in X.509 certificates Mar 7 01:45:12.153068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:45:12.153081 kernel: Key type .fscrypt registered Mar 7 01:45:12.153095 kernel: Key type fscrypt-provisioning registered Mar 7 01:45:12.153108 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:45:12.153122 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:45:12.153136 kernel: ima: No architecture policies found Mar 7 01:45:12.153155 kernel: clk: Disabling unused clocks Mar 7 01:45:12.153169 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:45:12.153193 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:45:12.153210 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:45:12.153224 kernel: Run /init as init process Mar 7 01:45:12.153238 kernel: with arguments: Mar 7 01:45:12.153252 kernel: /init Mar 7 01:45:12.153266 kernel: with environment: Mar 7 01:45:12.153279 kernel: HOME=/ Mar 7 01:45:12.153292 kernel: TERM=linux Mar 7 01:45:12.153316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:45:12.153334 systemd[1]: Detected virtualization kvm. Mar 7 01:45:12.153349 systemd[1]: Detected architecture x86-64. Mar 7 01:45:12.153363 systemd[1]: Running in initrd. Mar 7 01:45:12.153377 systemd[1]: No hostname configured, using default hostname. Mar 7 01:45:12.153391 systemd[1]: Hostname set to . Mar 7 01:45:12.153407 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:45:12.153428 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:45:12.155137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:45:12.155159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:45:12.155175 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 7 01:45:12.155203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:45:12.155219 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:45:12.155233 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:45:12.155260 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:45:12.155275 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:45:12.155290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:45:12.155305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:45:12.155320 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:45:12.155335 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:45:12.155350 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:45:12.155364 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:45:12.155386 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:45:12.155401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:45:12.155415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:45:12.155431 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:45:12.155471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:45:12.155490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:45:12.155505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:45:12.155520 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:45:12.155543 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 7 01:45:12.155558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:45:12.155573 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:45:12.155587 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:45:12.155602 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:45:12.155617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:45:12.155632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:45:12.155710 systemd-journald[203]: Collecting audit messages is disabled. Mar 7 01:45:12.155752 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:45:12.155767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:45:12.155782 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:45:12.155803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:45:12.155818 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:45:12.155835 systemd-journald[203]: Journal started Mar 7 01:45:12.155864 systemd-journald[203]: Runtime Journal (/run/log/journal/4adda2af3e7c4a02baf74a09b92cc077) is 4.7M, max 38.0M, 33.2M free. Mar 7 01:45:12.091619 systemd-modules-load[204]: Inserted module 'overlay' Mar 7 01:45:12.177408 kernel: Bridge firewalling registered Mar 7 01:45:12.179477 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:45:12.158975 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 7 01:45:12.177835 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:45:12.178854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:45:12.181667 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:45:12.185624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:45:12.199712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:45:12.209782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:45:12.248788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:45:12.251391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:45:12.254913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:45:12.267842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:45:12.270456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:45:12.273652 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:45:12.290026 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:45:12.308112 dracut-cmdline[237]: dracut-dracut-053 Mar 7 01:45:12.317923 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:45:12.334482 systemd-resolved[233]: Positive Trust Anchors: Mar 7 01:45:12.334512 systemd-resolved[233]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:45:12.334559 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:45:12.339225 systemd-resolved[233]: Defaulting to hostname 'linux'. Mar 7 01:45:12.342407 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:45:12.344020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:45:12.439487 kernel: SCSI subsystem initialized Mar 7 01:45:12.451663 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:45:12.467493 kernel: iscsi: registered transport (tcp) Mar 7 01:45:12.497484 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:45:12.497594 kernel: QLogic iSCSI HBA Driver Mar 7 01:45:12.563903 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:45:12.571760 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:45:12.608077 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 01:45:12.608363 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:45:12.610683 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:45:12.661504 kernel: raid6: sse2x4 gen() 13883 MB/s Mar 7 01:45:12.679495 kernel: raid6: sse2x2 gen() 9708 MB/s Mar 7 01:45:12.698121 kernel: raid6: sse2x1 gen() 10234 MB/s Mar 7 01:45:12.698282 kernel: raid6: using algorithm sse2x4 gen() 13883 MB/s Mar 7 01:45:12.717155 kernel: raid6: .... xor() 7500 MB/s, rmw enabled Mar 7 01:45:12.717280 kernel: raid6: using ssse3x2 recovery algorithm Mar 7 01:45:12.744558 kernel: xor: automatically using best checksumming function avx Mar 7 01:45:12.965573 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:45:12.987105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:45:12.994915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:45:13.032085 systemd-udevd[423]: Using default interface naming scheme 'v255'. Mar 7 01:45:13.040691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:45:13.050714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:45:13.096747 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Mar 7 01:45:13.144849 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:45:13.162960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:45:13.297873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:45:13.309961 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:45:13.351905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:45:13.356251 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 7 01:45:13.359031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:45:13.360300 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:45:13.369740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:45:13.404106 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:45:13.463480 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 7 01:45:13.476491 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:45:13.488841 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 7 01:45:13.510113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:45:13.510428 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:45:13.513662 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:45:13.514416 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:45:13.514644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:45:13.515415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:45:13.531918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:45:13.545493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:45:13.545605 kernel: GPT:17805311 != 125829119 Mar 7 01:45:13.545630 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:45:13.545648 kernel: GPT:17805311 != 125829119 Mar 7 01:45:13.547148 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:45:13.549071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:45:13.556881 kernel: ACPI: bus type USB registered Mar 7 01:45:13.556970 kernel: AVX version of gcm_enc/dec engaged. 
Mar 7 01:45:13.558410 kernel: AES CTR mode by8 optimization enabled Mar 7 01:45:13.561469 kernel: usbcore: registered new interface driver usbfs Mar 7 01:45:13.561506 kernel: usbcore: registered new interface driver hub Mar 7 01:45:13.564465 kernel: usbcore: registered new device driver usb Mar 7 01:45:13.608540 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 7 01:45:13.608998 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 7 01:45:13.611283 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 7 01:45:13.612480 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 7 01:45:13.612730 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 7 01:45:13.612959 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 7 01:45:13.613198 kernel: hub 1-0:1.0: USB hub found Mar 7 01:45:13.613507 kernel: hub 1-0:1.0: 4 ports detected Mar 7 01:45:13.614504 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 7 01:45:13.615548 kernel: hub 2-0:1.0: USB hub found Mar 7 01:45:13.615808 kernel: hub 2-0:1.0: 4 ports detected Mar 7 01:45:13.671500 kernel: libata version 3.00 loaded. Mar 7 01:45:13.674475 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468) Mar 7 01:45:13.681480 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (473) Mar 7 01:45:13.694514 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:45:13.695584 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:45:13.701503 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:45:13.701848 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:45:13.707783 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 7 01:45:13.773199 kernel: scsi host0: ahci Mar 7 01:45:13.773648 kernel: scsi host1: ahci Mar 7 01:45:13.773876 kernel: scsi host2: ahci Mar 7 01:45:13.774091 kernel: scsi host3: ahci Mar 7 01:45:13.774323 kernel: scsi host4: ahci Mar 7 01:45:13.774580 kernel: scsi host5: ahci Mar 7 01:45:13.774911 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 7 01:45:13.774936 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 7 01:45:13.774955 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 7 01:45:13.774973 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 7 01:45:13.774991 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 7 01:45:13.775010 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 7 01:45:13.775226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:45:13.783745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:45:13.790488 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:45:13.791424 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:45:13.804942 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:45:13.821864 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:45:13.825650 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:45:13.834324 disk-uuid[566]: Primary Header is updated. Mar 7 01:45:13.834324 disk-uuid[566]: Secondary Entries is updated. Mar 7 01:45:13.834324 disk-uuid[566]: Secondary Header is updated. 
Mar 7 01:45:13.841635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:45:13.848041 kernel: GPT:disk_guids don't match. Mar 7 01:45:13.848157 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:45:13.848182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:45:13.864517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:45:13.867867 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:45:13.873509 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 7 01:45:14.046503 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:45:14.073596 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.073722 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.073746 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.075465 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.077510 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.077545 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 7 01:45:14.103715 kernel: usbcore: registered new interface driver usbhid Mar 7 01:45:14.103853 kernel: usbhid: USB HID core driver Mar 7 01:45:14.126512 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 7 01:45:14.132503 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 7 01:45:14.861510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:45:14.862260 disk-uuid[567]: The operation has completed successfully. Mar 7 01:45:14.918884 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:45:14.919070 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:45:14.943721 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 7 01:45:14.962187 sh[590]: Success Mar 7 01:45:14.981533 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 7 01:45:15.049379 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:45:15.059582 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:45:15.063436 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:45:15.093575 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:45:15.093663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:45:15.095663 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:45:15.097825 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:45:15.100481 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:45:15.111517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:45:15.113208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:45:15.122746 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:45:15.126633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:45:15.148399 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:45:15.148686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:45:15.148740 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:45:15.154475 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:45:15.170125 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 7 01:45:15.173960 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:45:15.187185 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:45:15.194765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:45:15.331071 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:45:15.348827 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:45:15.432810 systemd-networkd[773]: lo: Link UP Mar 7 01:45:15.432832 systemd-networkd[773]: lo: Gained carrier Mar 7 01:45:15.436018 systemd-networkd[773]: Enumeration completed Mar 7 01:45:15.436235 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:45:15.437828 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:45:15.437834 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:45:15.439737 systemd[1]: Reached target network.target - Network. Mar 7 01:45:15.441964 systemd-networkd[773]: eth0: Link UP Mar 7 01:45:15.441970 systemd-networkd[773]: eth0: Gained carrier Mar 7 01:45:15.441994 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:45:15.487215 ignition[700]: Ignition 2.19.0 Mar 7 01:45:15.487240 ignition[700]: Stage: fetch-offline Mar 7 01:45:15.487306 ignition[700]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:45:15.487325 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 7 01:45:15.490811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 7 01:45:15.487479 ignition[700]: parsed url from cmdline: "" Mar 7 01:45:15.487487 ignition[700]: no config URL provided Mar 7 01:45:15.487497 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:45:15.487514 ignition[700]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:45:15.487524 ignition[700]: failed to fetch config: resource requires networking Mar 7 01:45:15.496562 systemd-networkd[773]: eth0: DHCPv4 address 10.230.57.158/30, gateway 10.230.57.157 acquired from 10.230.57.157 Mar 7 01:45:15.487827 ignition[700]: Ignition finished successfully Mar 7 01:45:15.500658 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 7 01:45:15.539230 ignition[780]: Ignition 2.19.0 Mar 7 01:45:15.540372 ignition[780]: Stage: fetch Mar 7 01:45:15.540648 ignition[780]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:45:15.540672 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 7 01:45:15.540827 ignition[780]: parsed url from cmdline: "" Mar 7 01:45:15.540834 ignition[780]: no config URL provided Mar 7 01:45:15.540844 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:45:15.540861 ignition[780]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:45:15.540997 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 7 01:45:15.541046 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 7 01:45:15.541087 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 7 01:45:15.560157 ignition[780]: GET result: OK
Mar 7 01:45:15.560793 ignition[780]: parsing config with SHA512: 5b0ded53b8346953b1ac8f3bb841c15f74f042d1b4ab00e0ff0747a66855cb1ee2e95d7dbe798c3df795ee2c516630eb2b09cec0bede529b7201e51a094033b8
Mar 7 01:45:15.571769 unknown[780]: fetched base config from "system"
Mar 7 01:45:15.571795 unknown[780]: fetched base config from "system"
Mar 7 01:45:15.572432 ignition[780]: fetch: fetch complete
Mar 7 01:45:15.571806 unknown[780]: fetched user config from "openstack"
Mar 7 01:45:15.572462 ignition[780]: fetch: fetch passed
Mar 7 01:45:15.574852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:45:15.572545 ignition[780]: Ignition finished successfully
Mar 7 01:45:15.586956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:45:15.618337 ignition[787]: Ignition 2.19.0
Mar 7 01:45:15.618360 ignition[787]: Stage: kargs
Mar 7 01:45:15.618706 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:45:15.621585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:45:15.618728 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 7 01:45:15.619997 ignition[787]: kargs: kargs passed
Mar 7 01:45:15.620114 ignition[787]: Ignition finished successfully
Mar 7 01:45:15.631722 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:45:15.654326 ignition[793]: Ignition 2.19.0
Mar 7 01:45:15.654682 ignition[793]: Stage: disks
Mar 7 01:45:15.654961 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:45:15.654983 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 7 01:45:15.656418 ignition[793]: disks: disks passed
Mar 7 01:45:15.660585 ignition[793]: Ignition finished successfully
Mar 7 01:45:15.662916 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:45:15.664708 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:45:15.665566 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:45:15.667176 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:45:15.668826 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:45:15.670236 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:45:15.677686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:45:15.701629 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 01:45:15.706302 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:45:15.714618 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:45:15.849479 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:45:15.850581 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:45:15.852004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:45:15.866673 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:45:15.870562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:45:15.871726 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:45:15.874137 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 7 01:45:15.876358 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:45:15.876409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:45:15.885472 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Mar 7 01:45:15.892163 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:45:15.892206 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:45:15.892227 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:45:15.899535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:45:15.914092 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:45:15.907136 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:45:15.912717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:45:15.997716 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:45:16.009757 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:45:16.021109 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:45:16.036311 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:45:16.151390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:45:16.166689 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:45:16.172720 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:45:16.185030 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:45:16.188928 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:45:16.222395 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:45:16.233323 ignition[925]: INFO : Ignition 2.19.0
Mar 7 01:45:16.233323 ignition[925]: INFO : Stage: mount
Mar 7 01:45:16.235237 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:45:16.235237 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 7 01:45:16.237935 ignition[925]: INFO : mount: mount passed
Mar 7 01:45:16.237935 ignition[925]: INFO : Ignition finished successfully
Mar 7 01:45:16.237049 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:45:17.253927 systemd-networkd[773]: eth0: Gained IPv6LL
Mar 7 01:45:18.764215 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e67:24:19ff:fee6:399e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e67:24:19ff:fee6:399e/64 assigned by NDisc.
Mar 7 01:45:18.764233 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 7 01:45:23.085498 coreos-metadata[811]: Mar 07 01:45:23.084 WARN failed to locate config-drive, using the metadata service API instead
Mar 7 01:45:23.106358 coreos-metadata[811]: Mar 07 01:45:23.106 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 7 01:45:23.123535 coreos-metadata[811]: Mar 07 01:45:23.123 INFO Fetch successful
Mar 7 01:45:23.124565 coreos-metadata[811]: Mar 07 01:45:23.124 INFO wrote hostname srv-wuc9t.gb1.brightbox.com to /sysroot/etc/hostname
Mar 7 01:45:23.127079 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 7 01:45:23.127263 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 7 01:45:23.137663 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:45:23.149920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:45:23.169539 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 7 01:45:23.175621 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:45:23.175679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:45:23.177492 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:45:23.183488 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:45:23.186944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:45:23.223409 ignition[960]: INFO : Ignition 2.19.0
Mar 7 01:45:23.225495 ignition[960]: INFO : Stage: files
Mar 7 01:45:23.225495 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:45:23.225495 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 7 01:45:23.238753 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:45:23.238753 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:45:23.238753 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:45:23.243896 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:45:23.245092 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:45:23.245092 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:45:23.245031 unknown[960]: wrote ssh authorized keys file for user: core
Mar 7 01:45:23.250238 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:45:23.250238 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:45:23.407591 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:45:23.798532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:45:23.798532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:45:23.801377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:45:23.816799 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:45:23.816799 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:45:23.816799 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:45:23.816799 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:45:23.816799 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:45:24.111021 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:45:25.334936 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:45:25.334936 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:45:25.338874 ignition[960]: INFO : files: files passed
Mar 7 01:45:25.338874 ignition[960]: INFO : Ignition finished successfully
Mar 7 01:45:25.343149 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:45:25.355881 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:45:25.367922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:45:25.386105 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:45:25.386320 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:45:25.398560 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:45:25.401288 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:45:25.402567 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:45:25.404745 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:45:25.405955 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:45:25.416163 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:45:25.458841 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:45:25.459014 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:45:25.461296 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:45:25.462517 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:45:25.464188 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:45:25.476864 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:45:25.495636 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:45:25.503751 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:45:25.520140 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:45:25.522136 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:45:25.523703 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:45:25.524883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:45:25.525081 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:45:25.526980 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:45:25.527978 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:45:25.529526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:45:25.531030 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:45:25.532497 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:45:25.534100 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:45:25.535654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:45:25.537238 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:45:25.538744 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:45:25.540290 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:45:25.541671 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:45:25.541885 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:45:25.543625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:45:25.544751 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:45:25.546126 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:45:25.546499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:45:25.548017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:45:25.548257 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:45:25.550323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:45:25.550608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:45:25.552529 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:45:25.552781 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:45:25.570547 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:45:25.572720 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:45:25.575375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:45:25.575610 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:45:25.578854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:45:25.579269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:45:25.590087 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:45:25.590252 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:45:25.602325 ignition[1012]: INFO : Ignition 2.19.0
Mar 7 01:45:25.602325 ignition[1012]: INFO : Stage: umount
Mar 7 01:45:25.606259 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:45:25.606259 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 7 01:45:25.606259 ignition[1012]: INFO : umount: umount passed
Mar 7 01:45:25.606259 ignition[1012]: INFO : Ignition finished successfully
Mar 7 01:45:25.607532 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:45:25.607694 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:45:25.610568 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:45:25.610712 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:45:25.614050 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:45:25.614142 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:45:25.614945 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:45:25.615022 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:45:25.617580 systemd[1]: Stopped target network.target - Network.
Mar 7 01:45:25.618213 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:45:25.618293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:45:25.619101 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:45:25.621798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:45:25.625520 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:45:25.626329 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:45:25.627929 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:45:25.628680 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:45:25.629525 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:45:25.630232 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:45:25.631260 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:45:25.632036 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:45:25.632121 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:45:25.633653 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:45:25.633847 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:45:25.635046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:45:25.637808 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:45:25.641592 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:45:25.644814 systemd-networkd[773]: eth0: DHCPv6 lease lost
Mar 7 01:45:25.665566 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:45:25.665938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:45:25.670161 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:45:25.670390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:45:25.675664 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:45:25.676595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:45:25.683625 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:45:25.684392 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:45:25.684506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:45:25.687120 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:45:25.687194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:45:25.688977 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:45:25.689085 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:45:25.690921 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:45:25.691000 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:45:25.695131 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:45:25.710560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:45:25.711805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:45:25.714685 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:45:25.714850 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:45:25.717658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:45:25.717818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:45:25.719925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:45:25.719991 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:45:25.721636 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:45:25.721723 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:45:25.723963 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:45:25.724042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:45:25.725396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:45:25.725506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:45:25.739737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:45:25.741043 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:45:25.741139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:45:25.741977 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:45:25.742048 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:45:25.742878 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:45:25.742948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:45:25.744535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:45:25.744611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:45:25.752334 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:45:25.752513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:45:25.768167 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:45:25.768388 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:45:25.770725 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:45:25.771536 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:45:25.771631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:45:25.778688 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:45:25.792019 systemd[1]: Switching root.
Mar 7 01:45:25.828033 systemd-journald[203]: Journal stopped
Mar 7 01:45:27.522262 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:45:27.524623 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:45:27.524681 kernel: SELinux: policy capability open_perms=1
Mar 7 01:45:27.524722 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:45:27.524754 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:45:27.524782 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:45:27.524804 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:45:27.524831 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:45:27.524889 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:45:27.524968 kernel: audit: type=1403 audit(1772847926.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:45:27.525016 systemd[1]: Successfully loaded SELinux policy in 83.129ms.
Mar 7 01:45:27.525083 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.764ms.
Mar 7 01:45:27.525132 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:45:27.525169 systemd[1]: Detected virtualization kvm.
Mar 7 01:45:27.525203 systemd[1]: Detected architecture x86-64.
Mar 7 01:45:27.525239 systemd[1]: Detected first boot.
Mar 7 01:45:27.525295 systemd[1]: Hostname set to .
Mar 7 01:45:27.525320 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:45:27.525351 zram_generator::config[1055]: No configuration found.
Mar 7 01:45:27.525383 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:45:27.525407 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:45:27.525427 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:45:27.525484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:45:27.525509 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:45:27.525545 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:45:27.525604 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:45:27.525628 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:45:27.525661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:45:27.525683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:45:27.525725 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:45:27.525748 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:45:27.525770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:45:27.525792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:45:27.525823 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:45:27.525882 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:45:27.525921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:45:27.525944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:45:27.526002 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:45:27.526025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:45:27.526045 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:45:27.526110 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:45:27.526171 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:45:27.526196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:45:27.526227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:45:27.526272 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:45:27.526318 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:45:27.526350 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:45:27.526373 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:45:27.526394 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:45:27.526469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:45:27.526570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:45:27.526596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:45:27.526626 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:45:27.526648 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:45:27.526678 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:45:27.526741 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:45:27.526766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:27.526787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:45:27.526834 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:45:27.526857 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:45:27.526892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:45:27.526915 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:45:27.526936 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:45:27.526989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:45:27.527014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:45:27.527061 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:45:27.527098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:45:27.527120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:45:27.527149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:45:27.527171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:45:27.527192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:45:27.527233 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:45:27.527270 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:45:27.527293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:45:27.527322 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:45:27.527345 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:45:27.527376 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:45:27.527397 kernel: loop: module loaded
Mar 7 01:45:27.527417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:45:27.527510 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:45:27.527552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:45:27.527576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:45:27.527611 kernel: ACPI: bus type drm_connector registered
Mar 7 01:45:27.527633 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:45:27.527655 systemd[1]: Stopped verity-setup.service.
Mar 7 01:45:27.527676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:27.527716 kernel: fuse: init (API version 7.39)
Mar 7 01:45:27.527738 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:45:27.527759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:45:27.527810 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:45:27.527844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:45:27.527875 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:45:27.527906 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:45:27.527977 systemd-journald[1148]: Collecting audit messages is disabled.
Mar 7 01:45:27.528092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:45:27.528119 systemd-journald[1148]: Journal started
Mar 7 01:45:27.528160 systemd-journald[1148]: Runtime Journal (/run/log/journal/4adda2af3e7c4a02baf74a09b92cc077) is 4.7M, max 38.0M, 33.2M free.
Mar 7 01:45:27.528296 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:45:27.051520 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:45:27.079903 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:45:27.080885 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:45:27.534535 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:45:27.538932 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:45:27.540898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:45:27.541579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:45:27.543111 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:45:27.543890 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:45:27.545554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:45:27.546140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:45:27.548146 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:45:27.549919 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:45:27.550492 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:45:27.552060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:45:27.552543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:45:27.554272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:45:27.556055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:45:27.557831 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:45:27.574904 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:45:27.583586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:45:27.593564 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:45:27.595733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:45:27.595795 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:45:27.600203 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:45:27.610178 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:45:27.617676 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:45:27.618737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:45:27.626339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:45:27.632621 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:45:27.633555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:45:27.640715 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:45:27.641647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:45:27.647499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:45:27.660144 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:45:27.664581 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:45:27.669241 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:45:27.670679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:45:27.672320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:45:27.686291 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:45:27.687431 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:45:27.724060 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:45:27.778528 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:45:27.780557 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:45:27.825612 kernel: loop0: detected capacity change from 0 to 217752
Mar 7 01:45:27.822316 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 7 01:45:27.822344 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 7 01:45:27.837624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:45:27.843599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:45:27.855688 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:45:27.861054 systemd-journald[1148]: Time spent on flushing to /var/log/journal/4adda2af3e7c4a02baf74a09b92cc077 is 61.978ms for 1152 entries.
Mar 7 01:45:27.861054 systemd-journald[1148]: System Journal (/var/log/journal/4adda2af3e7c4a02baf74a09b92cc077) is 8.0M, max 584.8M, 576.8M free.
Mar 7 01:45:27.985001 systemd-journald[1148]: Received client request to flush runtime journal.
Mar 7 01:45:27.985111 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:45:27.985146 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 01:45:27.979847 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:45:27.991778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:45:27.998181 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:45:28.078055 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 7 01:45:28.078090 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 7 01:45:28.099490 kernel: loop2: detected capacity change from 0 to 8
Mar 7 01:45:28.113350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:45:28.116071 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:45:28.127800 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:45:28.134535 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 01:45:28.159980 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 01:45:28.228149 kernel: loop4: detected capacity change from 0 to 217752
Mar 7 01:45:28.299531 kernel: loop5: detected capacity change from 0 to 142488
Mar 7 01:45:28.354529 kernel: loop6: detected capacity change from 0 to 8
Mar 7 01:45:28.369589 kernel: loop7: detected capacity change from 0 to 140768
Mar 7 01:45:28.443013 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 7 01:45:28.445112 (sd-merge)[1216]: Merged extensions into '/usr'.
Mar 7 01:45:28.461732 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:45:28.461769 systemd[1]: Reloading...
Mar 7 01:45:28.687484 zram_generator::config[1242]: No configuration found.
Mar 7 01:45:28.845815 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:45:28.963020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:45:29.032606 systemd[1]: Reloading finished in 569 ms.
Mar 7 01:45:29.084701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:45:29.094895 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:45:29.114901 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:45:29.133877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:45:29.136070 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:45:29.142917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:45:29.157686 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:45:29.157745 systemd[1]: Reloading...
Mar 7 01:45:29.182960 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:45:29.184333 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:45:29.186343 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:45:29.187041 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Mar 7 01:45:29.187271 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Mar 7 01:45:29.192948 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:45:29.193080 systemd-tmpfiles[1299]: Skipping /boot
Mar 7 01:45:29.217346 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:45:29.217639 systemd-tmpfiles[1299]: Skipping /boot
Mar 7 01:45:29.228042 systemd-udevd[1301]: Using default interface naming scheme 'v255'.
Mar 7 01:45:29.304402 zram_generator::config[1327]: No configuration found.
Mar 7 01:45:29.521532 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1333)
Mar 7 01:45:29.592471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:45:29.658505 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 01:45:29.710475 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:45:29.713685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:45:29.714895 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:45:29.717832 systemd[1]: Reloading finished in 559 ms.
Mar 7 01:45:29.735507 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:45:29.777732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:45:29.789158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:45:29.797481 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:45:29.802488 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:45:29.802536 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:45:29.803757 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:45:29.851793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:29.862858 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:45:29.874890 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:45:29.877110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:45:29.884843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:45:29.890872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:45:29.917019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:45:29.918124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:45:29.929152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:45:29.939913 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:45:29.953824 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:45:29.960304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:45:29.966801 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:45:29.969561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:29.989518 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:45:29.999521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:29.999907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:45:30.006872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:45:30.007842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:45:30.034874 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:45:30.036551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:45:30.050836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:45:30.053379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:45:30.054765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:45:30.062895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:45:30.064929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:45:30.071611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:45:30.120335 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:45:30.130404 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:45:30.130806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:45:30.136811 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:45:30.137107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:45:30.139726 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:45:30.143872 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:45:30.158218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:45:30.163290 augenrules[1444]: No rules
Mar 7 01:45:30.165077 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:45:30.171727 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:45:30.173822 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:45:30.258580 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:45:30.268642 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:45:30.330102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:45:30.347701 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:45:30.452164 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:45:30.510766 systemd-networkd[1426]: lo: Link UP
Mar 7 01:45:30.510781 systemd-networkd[1426]: lo: Gained carrier
Mar 7 01:45:30.513360 systemd-networkd[1426]: Enumeration completed
Mar 7 01:45:30.513960 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:45:30.513966 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:45:30.516261 systemd-networkd[1426]: eth0: Link UP
Mar 7 01:45:30.516274 systemd-networkd[1426]: eth0: Gained carrier
Mar 7 01:45:30.516292 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:45:30.531573 systemd-networkd[1426]: eth0: DHCPv4 address 10.230.57.158/30, gateway 10.230.57.157 acquired from 10.230.57.157
Mar 7 01:45:30.560641 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:45:30.565985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:45:30.573230 systemd-resolved[1428]: Positive Trust Anchors:
Mar 7 01:45:30.573632 systemd-resolved[1428]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:45:30.573690 systemd-resolved[1428]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:45:30.578788 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:45:30.585845 systemd-resolved[1428]: Using system hostname 'srv-wuc9t.gb1.brightbox.com'.
Mar 7 01:45:30.586731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:45:30.587784 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:45:30.589872 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:45:30.591239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:45:30.593708 systemd[1]: Reached target network.target - Network.
Mar 7 01:45:30.594353 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:45:30.619075 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:45:30.655340 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:45:30.657250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:45:30.658087 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:45:30.659108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:45:30.660008 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:45:30.661111 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:45:30.662031 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:45:30.662847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:45:30.663627 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:45:30.663672 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:45:30.664311 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:45:30.666007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:45:30.669324 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:45:30.683166 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:45:30.686513 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:45:30.688207 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:45:30.689112 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:45:30.689831 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:45:30.690550 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:45:30.690612 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:45:30.698670 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:45:30.710682 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:45:30.719707 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:45:30.724823 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:45:30.738627 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:45:30.743899 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:45:30.744746 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:45:30.753852 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:45:30.760817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:45:30.764993 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:45:30.775779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:45:30.781074 jq[1481]: false
Mar 7 01:45:30.786731 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:45:30.789549 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:45:30.790388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:45:30.801756 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:45:30.814062 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:45:30.818131 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:45:30.822823 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:45:30.823500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:45:30.827432 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:45:30.828154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:45:30.843073 extend-filesystems[1482]: Found loop4
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found loop5
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found loop6
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found loop7
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda1
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda2
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda3
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found usr
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda4
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda6
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda7
Mar 7 01:45:30.846584 extend-filesystems[1482]: Found vda9
Mar 7 01:45:30.846584 extend-filesystems[1482]: Checking size of /dev/vda9
Mar 7 01:45:30.918198 extend-filesystems[1482]: Resized partition /dev/vda9
Mar 7 01:45:30.926603 jq[1491]: true
Mar 7 01:45:30.906539 dbus-daemon[1478]: [system] SELinux support is enabled
Mar 7 01:45:30.936625 update_engine[1488]: I20260307 01:45:30.862728 1488 main.cc:92] Flatcar Update Engine starting
Mar 7 01:45:30.936625 update_engine[1488]: I20260307 01:45:30.920734 1488 update_check_scheduler.cc:74] Next update check in 2m10s
Mar 7 01:45:30.887354 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:45:30.913968 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1426 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:45:30.940033 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:45:30.947099 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 7 01:45:30.888609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:45:30.925438 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:45:30.908639 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:45:30.918074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:45:30.947846 jq[1508]: true
Mar 7 01:45:30.918128 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:45:30.922065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:45:30.922097 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:45:30.922581 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:45:30.925801 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:45:30.942730 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:45:30.953714 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:45:30.965714 tar[1500]: linux-amd64/LICENSE
Mar 7 01:45:30.981785 tar[1500]: linux-amd64/helm
Mar 7 01:45:30.985965 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:45:31.241517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1330)
Mar 7 01:45:31.311495 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:45:31.311923 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:45:31.335649 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 7 01:45:31.336739 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:45:31.340291 systemd-logind[1487]: New seat seat0.
Mar 7 01:45:31.343762 systemd[1]: Starting sshkeys.service...
Mar 7 01:45:31.346474 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:45:31.389434 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:45:31.403790 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:45:31.416196 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:45:31.509494 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 7 01:45:31.562562 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 01:45:31.562562 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 7 01:45:31.562562 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 7 01:45:31.567991 extend-filesystems[1482]: Resized filesystem in /dev/vda9
Mar 7 01:45:31.565138 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:45:31.565821 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:45:31.575692 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:45:31.576224 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 01:45:31.576620 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:45:31.591790 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 01:45:31.635223 polkitd[1553]: Started polkitd version 121 Mar 7 01:45:31.670281 polkitd[1553]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:45:31.679510 polkitd[1553]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:45:31.684395 polkitd[1553]: Finished loading, compiling and executing 2 rules Mar 7 01:45:31.685393 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:45:31.687190 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 01:45:31.692597 polkitd[1553]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:45:32.769692 systemd-timesyncd[1434]: Contacted time server 176.58.109.199:123 (0.flatcar.pool.ntp.org). Mar 7 01:45:32.769812 systemd-timesyncd[1434]: Initial clock synchronization to Sat 2026-03-07 01:45:32.769358 UTC. Mar 7 01:45:32.770044 systemd-resolved[1428]: Clock change detected. Flushing caches. Mar 7 01:45:32.771140 systemd-networkd[1426]: eth0: Gained IPv6LL Mar 7 01:45:32.782478 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:45:32.785205 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:45:32.799114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:45:32.819243 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 7 01:45:32.837558 systemd-hostnamed[1517]: Hostname set to (static) Mar 7 01:45:32.877199 containerd[1510]: time="2026-03-07T01:45:32.876982958Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:45:32.945538 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:45:32.984253 containerd[1510]: time="2026-03-07T01:45:32.982455539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006160890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006234328Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006267324Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006681465Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006712402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006838741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.006865397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.007158176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.007184088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.007206448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:45:33.007794 containerd[1510]: time="2026-03-07T01:45:33.007224124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:45:33.008283 containerd[1510]: time="2026-03-07T01:45:33.007376356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:45:33.015412 containerd[1510]: time="2026-03-07T01:45:33.014317617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:45:33.015801 containerd[1510]: time="2026-03-07T01:45:33.015733603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:45:33.018686 containerd[1510]: time="2026-03-07T01:45:33.016944763Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:45:33.018686 containerd[1510]: time="2026-03-07T01:45:33.017187811Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:45:33.019088 containerd[1510]: time="2026-03-07T01:45:33.019056563Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:45:33.029471 containerd[1510]: time="2026-03-07T01:45:33.029414880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:45:33.029590 containerd[1510]: time="2026-03-07T01:45:33.029506562Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:45:33.029590 containerd[1510]: time="2026-03-07T01:45:33.029540712Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:45:33.029590 containerd[1510]: time="2026-03-07T01:45:33.029568762Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:45:33.029725 containerd[1510]: time="2026-03-07T01:45:33.029593377Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:45:33.029970 containerd[1510]: time="2026-03-07T01:45:33.029870313Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030232284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030472357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030502859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030524246Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030606409Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.030881 containerd[1510]: time="2026-03-07T01:45:33.030636443Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.031556 containerd[1510]: time="2026-03-07T01:45:33.031522124Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.031677 containerd[1510]: time="2026-03-07T01:45:33.031637705Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.031794 containerd[1510]: time="2026-03-07T01:45:33.031755832Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032045401Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032082183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032179929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032226020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032250522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032270562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032291995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032311359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032348810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032375594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032396540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032433228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032459810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.034680 containerd[1510]: time="2026-03-07T01:45:33.032479421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032498406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032531366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032560370Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032619465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032644823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032677805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032743204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032816958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032841887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032862678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032879907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032899226Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032914667Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:45:33.035181 containerd[1510]: time="2026-03-07T01:45:33.032930967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:45:33.035617 containerd[1510]: time="2026-03-07T01:45:33.033418042Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:45:33.035617 containerd[1510]: time="2026-03-07T01:45:33.033524569Z" level=info msg="Connect containerd service" Mar 7 01:45:33.035617 containerd[1510]: time="2026-03-07T01:45:33.033601351Z" level=info msg="using legacy CRI server" Mar 7 01:45:33.035617 containerd[1510]: time="2026-03-07T01:45:33.033619899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:45:33.078831 containerd[1510]: time="2026-03-07T01:45:33.075819300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:45:33.078831 containerd[1510]: time="2026-03-07T01:45:33.077961419Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:45:33.079172 containerd[1510]: time="2026-03-07T01:45:33.079107993Z" level=info msg="Start subscribing containerd event" Mar 7 01:45:33.079757 containerd[1510]: time="2026-03-07T01:45:33.079270144Z" level=info msg="Start recovering state" Mar 7 01:45:33.079757 containerd[1510]: time="2026-03-07T01:45:33.079397028Z" level=info msg="Start event monitor" Mar 7 01:45:33.079757 containerd[1510]: time="2026-03-07T01:45:33.079426109Z" level=info msg="Start snapshots syncer" Mar 7 01:45:33.079757 containerd[1510]: time="2026-03-07T01:45:33.079449321Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:45:33.079757 containerd[1510]: time="2026-03-07T01:45:33.079472824Z" level=info msg="Start streaming server" Mar 7 01:45:33.080395 containerd[1510]: time="2026-03-07T01:45:33.080365596Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:45:33.081976 containerd[1510]: time="2026-03-07T01:45:33.081947422Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:45:33.083116 containerd[1510]: time="2026-03-07T01:45:33.083088635Z" level=info msg="containerd successfully booted in 0.210069s" Mar 7 01:45:33.083252 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:45:33.590107 systemd-networkd[1426]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e67:24:19ff:fee6:399e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e67:24:19ff:fee6:399e/64 assigned by NDisc.
Mar 7 01:45:33.590786 systemd-networkd[1426]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 7 01:45:33.592882 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:45:33.700307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:45:33.714188 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:45:33.722942 systemd[1]: Started sshd@0-10.230.57.158:22-4.153.228.146:57076.service - OpenSSH per-connection server daemon (4.153.228.146:57076). Mar 7 01:45:33.762032 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:45:33.762327 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:45:33.770245 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:45:33.805995 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:45:33.816491 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:45:33.824322 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:45:33.827604 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:45:34.054002 tar[1500]: linux-amd64/README.md Mar 7 01:45:34.082626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:45:34.348423 sshd[1589]: Accepted publickey for core from 4.153.228.146 port 57076 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:34.353474 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:34.380474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:45:34.396223 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:45:34.403568 systemd-logind[1487]: New session 1 of user core. Mar 7 01:45:34.435354 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 7 01:45:34.448870 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:45:34.533077 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:45:34.716536 systemd[1603]: Queued start job for default target default.target. Mar 7 01:45:34.728473 systemd[1603]: Created slice app.slice - User Application Slice. Mar 7 01:45:34.728522 systemd[1603]: Reached target paths.target - Paths. Mar 7 01:45:34.728551 systemd[1603]: Reached target timers.target - Timers. Mar 7 01:45:34.732365 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:45:34.751075 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:45:34.751306 systemd[1603]: Reached target sockets.target - Sockets. Mar 7 01:45:34.751333 systemd[1603]: Reached target basic.target - Basic System. Mar 7 01:45:34.751434 systemd[1603]: Reached target default.target - Main User Target. Mar 7 01:45:34.751513 systemd[1603]: Startup finished in 202ms. Mar 7 01:45:34.751797 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:45:34.763193 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:45:34.945638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:45:34.957330 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:45:35.188909 systemd[1]: Started sshd@1-10.230.57.158:22-4.153.228.146:57082.service - OpenSSH per-connection server daemon (4.153.228.146:57082). Mar 7 01:45:35.777853 sshd[1625]: Accepted publickey for core from 4.153.228.146 port 57082 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:35.791106 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:35.834946 systemd-logind[1487]: New session 2 of user core. 
Mar 7 01:45:35.842004 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:45:35.865914 kubelet[1618]: E0307 01:45:35.865846 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:45:35.869597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:45:35.869906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:45:35.870844 systemd[1]: kubelet.service: Consumed 1.773s CPU time. Mar 7 01:45:36.185134 sshd[1625]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:36.190428 systemd[1]: sshd@1-10.230.57.158:22-4.153.228.146:57082.service: Deactivated successfully. Mar 7 01:45:36.193158 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:45:36.195373 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:45:36.197096 systemd-logind[1487]: Removed session 2. Mar 7 01:45:36.299181 systemd[1]: Started sshd@2-10.230.57.158:22-4.153.228.146:57088.service - OpenSSH per-connection server daemon (4.153.228.146:57088). Mar 7 01:45:36.907318 sshd[1635]: Accepted publickey for core from 4.153.228.146 port 57088 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:36.909579 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:36.916456 systemd-logind[1487]: New session 3 of user core. Mar 7 01:45:36.929025 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:45:37.321363 sshd[1635]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:37.334134 systemd[1]: sshd@2-10.230.57.158:22-4.153.228.146:57088.service: Deactivated successfully. 
Mar 7 01:45:37.340803 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:45:37.344438 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:45:37.348612 systemd-logind[1487]: Removed session 3. Mar 7 01:45:38.900645 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 7 01:45:38.906139 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 7 01:45:38.918072 systemd-logind[1487]: New session 4 of user core. Mar 7 01:45:38.927363 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:45:38.932570 coreos-metadata[1477]: Mar 07 01:45:38.932 WARN failed to locate config-drive, using the metadata service API instead Mar 7 01:45:38.935734 systemd-logind[1487]: New session 5 of user core. Mar 7 01:45:38.941190 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:45:38.980847 coreos-metadata[1477]: Mar 07 01:45:38.980 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 7 01:45:38.985356 coreos-metadata[1477]: Mar 07 01:45:38.985 INFO Fetch failed with 404: resource not found Mar 7 01:45:38.985356 coreos-metadata[1477]: Mar 07 01:45:38.985 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 7 01:45:38.986232 coreos-metadata[1477]: Mar 07 01:45:38.986 INFO Fetch successful Mar 7 01:45:38.986386 coreos-metadata[1477]: Mar 07 01:45:38.986 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 7 01:45:39.000832 coreos-metadata[1477]: Mar 07 01:45:38.999 INFO Fetch successful Mar 7 01:45:39.001011 coreos-metadata[1477]: Mar 07 01:45:39.000 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 7 01:45:39.020010 coreos-metadata[1477]: Mar 07 01:45:39.019 INFO Fetch successful
Mar 7 01:45:39.020200 coreos-metadata[1477]: Mar 07 01:45:39.020 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 7 01:45:39.036988 coreos-metadata[1477]: Mar 07 01:45:39.036 INFO Fetch successful Mar 7 01:45:39.037325 coreos-metadata[1477]: Mar 07 01:45:39.037 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 7 01:45:39.058800 coreos-metadata[1477]: Mar 07 01:45:39.058 INFO Fetch successful Mar 7 01:45:39.090620 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:45:39.092551 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:45:39.840819 coreos-metadata[1544]: Mar 07 01:45:39.840 WARN failed to locate config-drive, using the metadata service API instead Mar 7 01:45:39.866637 coreos-metadata[1544]: Mar 07 01:45:39.866 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 7 01:45:39.905241 coreos-metadata[1544]: Mar 07 01:45:39.905 INFO Fetch successful Mar 7 01:45:39.905588 coreos-metadata[1544]: Mar 07 01:45:39.905 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 7 01:45:39.935775 coreos-metadata[1544]: Mar 07 01:45:39.935 INFO Fetch successful Mar 7 01:45:39.938017 unknown[1544]: wrote ssh authorized keys file for user: core Mar 7 01:45:39.975771 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:45:39.977108 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:45:39.980962 systemd[1]: Finished sshkeys.service. Mar 7 01:45:39.984400 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:45:39.984904 systemd[1]: Startup finished in 1.912s (kernel) + 14.292s (initrd) + 12.914s (userspace) = 29.118s. Mar 7 01:45:43.733820 systemd[1]: Started sshd@3-10.230.57.158:22-185.156.73.233:61562.service - OpenSSH per-connection server daemon (185.156.73.233:61562).
Mar 7 01:45:44.547335 sshd[1682]: Invalid user ubnt from 185.156.73.233 port 61562 Mar 7 01:45:44.617719 sshd[1682]: Connection closed by invalid user ubnt 185.156.73.233 port 61562 [preauth] Mar 7 01:45:44.619481 systemd[1]: sshd@3-10.230.57.158:22-185.156.73.233:61562.service: Deactivated successfully. Mar 7 01:45:46.120399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:45:46.129012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:45:46.533467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:45:46.549522 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:45:46.647121 kubelet[1694]: E0307 01:45:46.646618 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:45:46.651588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:45:46.651937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:45:47.431087 systemd[1]: Started sshd@4-10.230.57.158:22-4.153.228.146:34120.service - OpenSSH per-connection server daemon (4.153.228.146:34120). Mar 7 01:45:48.007415 sshd[1702]: Accepted publickey for core from 4.153.228.146 port 34120 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:48.009655 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:48.018582 systemd-logind[1487]: New session 6 of user core. Mar 7 01:45:48.029024 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 01:45:48.419878 sshd[1702]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:48.425616 systemd[1]: sshd@4-10.230.57.158:22-4.153.228.146:34120.service: Deactivated successfully. Mar 7 01:45:48.427998 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:45:48.429020 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:45:48.430776 systemd-logind[1487]: Removed session 6. Mar 7 01:45:48.514184 systemd[1]: Started sshd@5-10.230.57.158:22-4.153.228.146:34130.service - OpenSSH per-connection server daemon (4.153.228.146:34130). Mar 7 01:45:49.082726 sshd[1709]: Accepted publickey for core from 4.153.228.146 port 34130 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:49.084326 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:49.091252 systemd-logind[1487]: New session 7 of user core. Mar 7 01:45:49.097914 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:45:49.470072 sshd[1709]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:49.475371 systemd[1]: sshd@5-10.230.57.158:22-4.153.228.146:34130.service: Deactivated successfully. Mar 7 01:45:49.477786 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:45:49.478652 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:45:49.480093 systemd-logind[1487]: Removed session 7. Mar 7 01:45:49.574898 systemd[1]: Started sshd@6-10.230.57.158:22-4.153.228.146:39304.service - OpenSSH per-connection server daemon (4.153.228.146:39304). Mar 7 01:45:50.152579 sshd[1716]: Accepted publickey for core from 4.153.228.146 port 39304 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:45:50.154962 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:50.162229 systemd-logind[1487]: New session 8 of user core. 
Mar 7 01:45:50.168916 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:45:50.562680 sshd[1716]: pam_unix(sshd:session): session closed for user core
Mar 7 01:45:50.567554 systemd[1]: sshd@6-10.230.57.158:22-4.153.228.146:39304.service: Deactivated successfully.
Mar 7 01:45:50.569990 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:45:50.571795 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:45:50.573309 systemd-logind[1487]: Removed session 8.
Mar 7 01:45:50.660892 systemd[1]: Started sshd@7-10.230.57.158:22-4.153.228.146:39316.service - OpenSSH per-connection server daemon (4.153.228.146:39316).
Mar 7 01:45:51.239540 sshd[1723]: Accepted publickey for core from 4.153.228.146 port 39316 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:45:51.241803 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:45:51.249029 systemd-logind[1487]: New session 9 of user core.
Mar 7 01:45:51.258951 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:45:51.594480 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:45:51.595085 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:45:51.611028 sudo[1726]: pam_unix(sudo:session): session closed for user root
Mar 7 01:45:51.703072 sshd[1723]: pam_unix(sshd:session): session closed for user core
Mar 7 01:45:51.709086 systemd[1]: sshd@7-10.230.57.158:22-4.153.228.146:39316.service: Deactivated successfully.
Mar 7 01:45:51.711802 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:45:51.712718 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:45:51.714256 systemd-logind[1487]: Removed session 9.
Mar 7 01:45:51.811130 systemd[1]: Started sshd@8-10.230.57.158:22-4.153.228.146:39320.service - OpenSSH per-connection server daemon (4.153.228.146:39320).
Mar 7 01:45:52.361707 sshd[1731]: Accepted publickey for core from 4.153.228.146 port 39320 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:45:52.363968 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:45:52.371750 systemd-logind[1487]: New session 10 of user core.
Mar 7 01:45:52.383042 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:45:52.672277 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:45:52.672819 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:45:52.679476 sudo[1735]: pam_unix(sudo:session): session closed for user root
Mar 7 01:45:52.688482 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:45:52.688976 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:45:52.711503 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:45:52.716504 auditctl[1738]: No rules
Mar 7 01:45:52.717081 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:45:52.717432 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:45:52.726447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:45:52.776517 augenrules[1756]: No rules
Mar 7 01:45:52.777645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:45:52.779342 sudo[1734]: pam_unix(sudo:session): session closed for user root
Mar 7 01:45:52.867510 sshd[1731]: pam_unix(sshd:session): session closed for user core
Mar 7 01:45:52.871398 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:45:52.872170 systemd[1]: sshd@8-10.230.57.158:22-4.153.228.146:39320.service: Deactivated successfully.
Mar 7 01:45:52.874625 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:45:52.876761 systemd-logind[1487]: Removed session 10.
Mar 7 01:45:52.966192 systemd[1]: Started sshd@9-10.230.57.158:22-4.153.228.146:39336.service - OpenSSH per-connection server daemon (4.153.228.146:39336).
Mar 7 01:45:53.529025 sshd[1764]: Accepted publickey for core from 4.153.228.146 port 39336 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:45:53.531209 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:45:53.538497 systemd-logind[1487]: New session 11 of user core.
Mar 7 01:45:53.545941 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:45:53.839000 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:45:53.839496 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:45:54.618150 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:45:54.632465 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:45:55.383708 dockerd[1783]: time="2026-03-07T01:45:55.383408022Z" level=info msg="Starting up"
Mar 7 01:45:55.575180 dockerd[1783]: time="2026-03-07T01:45:55.574308903Z" level=info msg="Loading containers: start."
Mar 7 01:45:55.737703 kernel: Initializing XFRM netlink socket
Mar 7 01:45:55.865385 systemd-networkd[1426]: docker0: Link UP
Mar 7 01:45:55.888770 dockerd[1783]: time="2026-03-07T01:45:55.888699808Z" level=info msg="Loading containers: done."
Mar 7 01:45:55.913339 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3992675721-merged.mount: Deactivated successfully.
Mar 7 01:45:55.918622 dockerd[1783]: time="2026-03-07T01:45:55.918483846Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:45:55.918768 dockerd[1783]: time="2026-03-07T01:45:55.918712004Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:45:55.918927 dockerd[1783]: time="2026-03-07T01:45:55.918883935Z" level=info msg="Daemon has completed initialization"
Mar 7 01:45:55.974554 dockerd[1783]: time="2026-03-07T01:45:55.973959522Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:45:55.974977 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:45:56.800974 containerd[1510]: time="2026-03-07T01:45:56.800822453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 7 01:45:56.864041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:45:56.871956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:45:57.284996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:45:57.290860 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:45:57.394206 kubelet[1934]: E0307 01:45:57.394111 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:45:57.397725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:45:57.398044 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:45:57.761971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970662109.mount: Deactivated successfully.
Mar 7 01:45:59.828716 containerd[1510]: time="2026-03-07T01:45:59.827158310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:45:59.830610 containerd[1510]: time="2026-03-07T01:45:59.830550638Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696475"
Mar 7 01:45:59.833237 containerd[1510]: time="2026-03-07T01:45:59.833180716Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:45:59.842239 containerd[1510]: time="2026-03-07T01:45:59.842160637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:45:59.844056 containerd[1510]: time="2026-03-07T01:45:59.844004987Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 3.043038072s"
Mar 7 01:45:59.844242 containerd[1510]: time="2026-03-07T01:45:59.844210757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 7 01:45:59.846618 containerd[1510]: time="2026-03-07T01:45:59.846554176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 7 01:46:03.198881 containerd[1510]: time="2026-03-07T01:46:03.198458594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:03.201953 containerd[1510]: time="2026-03-07T01:46:03.201071268Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450708"
Mar 7 01:46:03.203298 containerd[1510]: time="2026-03-07T01:46:03.203231071Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:03.209128 containerd[1510]: time="2026-03-07T01:46:03.208957708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:03.211694 containerd[1510]: time="2026-03-07T01:46:03.210561329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.363925072s"
Mar 7 01:46:03.211694 containerd[1510]: time="2026-03-07T01:46:03.210757708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 7 01:46:03.213328 containerd[1510]: time="2026-03-07T01:46:03.213290247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 7 01:46:03.672636 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:46:04.902715 containerd[1510]: time="2026-03-07T01:46:04.901576195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:04.903755 containerd[1510]: time="2026-03-07T01:46:04.903643346Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548437"
Mar 7 01:46:04.904691 containerd[1510]: time="2026-03-07T01:46:04.904405054Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:04.910687 containerd[1510]: time="2026-03-07T01:46:04.909733292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:04.912015 containerd[1510]: time="2026-03-07T01:46:04.911746997Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.698400655s"
Mar 7 01:46:04.912015 containerd[1510]: time="2026-03-07T01:46:04.911818288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 7 01:46:04.914846 containerd[1510]: time="2026-03-07T01:46:04.914797829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 7 01:46:06.807893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561009264.mount: Deactivated successfully.
Mar 7 01:46:07.570520 containerd[1510]: time="2026-03-07T01:46:07.570433423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:07.572591 containerd[1510]: time="2026-03-07T01:46:07.572444436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685320"
Mar 7 01:46:07.573561 containerd[1510]: time="2026-03-07T01:46:07.573485334Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:07.592239 containerd[1510]: time="2026-03-07T01:46:07.592134358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:07.593517 containerd[1510]: time="2026-03-07T01:46:07.593449415Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.678602767s"
Mar 7 01:46:07.593615 containerd[1510]: time="2026-03-07T01:46:07.593524508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 7 01:46:07.594776 containerd[1510]: time="2026-03-07T01:46:07.594561445Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 7 01:46:07.613707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:46:07.631034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:46:08.153348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:08.166300 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:46:08.260518 kubelet[2025]: E0307 01:46:08.260361 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:46:08.262812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:46:08.263112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:46:08.500480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650140100.mount: Deactivated successfully.
Mar 7 01:46:10.589978 containerd[1510]: time="2026-03-07T01:46:10.589742560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:10.593339 containerd[1510]: time="2026-03-07T01:46:10.593256774Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556550"
Mar 7 01:46:10.596687 containerd[1510]: time="2026-03-07T01:46:10.595258307Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:10.609994 containerd[1510]: time="2026-03-07T01:46:10.609859800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:10.614196 containerd[1510]: time="2026-03-07T01:46:10.614106850Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.019457737s"
Mar 7 01:46:10.614359 containerd[1510]: time="2026-03-07T01:46:10.614215378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 7 01:46:10.618057 containerd[1510]: time="2026-03-07T01:46:10.618021919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 01:46:11.286761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754503119.mount: Deactivated successfully.
Mar 7 01:46:11.295877 containerd[1510]: time="2026-03-07T01:46:11.295819155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:11.301946 containerd[1510]: time="2026-03-07T01:46:11.301862386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Mar 7 01:46:11.308899 containerd[1510]: time="2026-03-07T01:46:11.308812833Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:11.313958 containerd[1510]: time="2026-03-07T01:46:11.313875502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:11.315546 containerd[1510]: time="2026-03-07T01:46:11.315309758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 696.997794ms"
Mar 7 01:46:11.315546 containerd[1510]: time="2026-03-07T01:46:11.315359909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 7 01:46:11.317195 containerd[1510]: time="2026-03-07T01:46:11.317141028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 7 01:46:11.921820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193464432.mount: Deactivated successfully.
Mar 7 01:46:14.796396 containerd[1510]: time="2026-03-07T01:46:14.796302428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:14.798331 containerd[1510]: time="2026-03-07T01:46:14.798272175Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630330"
Mar 7 01:46:14.799571 containerd[1510]: time="2026-03-07T01:46:14.799488347Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:14.804704 containerd[1510]: time="2026-03-07T01:46:14.804212018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:14.808424 containerd[1510]: time="2026-03-07T01:46:14.808361357Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.491173575s"
Mar 7 01:46:14.808598 containerd[1510]: time="2026-03-07T01:46:14.808570166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 7 01:46:16.881832 update_engine[1488]: I20260307 01:46:16.880118 1488 update_attempter.cc:509] Updating boot flags...
Mar 7 01:46:16.956711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2179)
Mar 7 01:46:17.069703 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2180)
Mar 7 01:46:17.568517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:17.584966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:46:17.621726 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-11.scope)...
Mar 7 01:46:17.622018 systemd[1]: Reloading...
Mar 7 01:46:17.833819 zram_generator::config[2229]: No configuration found.
Mar 7 01:46:18.039587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:46:18.154737 systemd[1]: Reloading finished in 531 ms.
Mar 7 01:46:18.231178 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:46:18.231647 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:46:18.232353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:18.246148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:46:18.570379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:18.585181 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:46:18.709129 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:46:19.490845 kubelet[2299]: I0307 01:46:19.490745 2299 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 01:46:19.490845 kubelet[2299]: I0307 01:46:19.490827 2299 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:46:19.492391 kubelet[2299]: I0307 01:46:19.492348 2299 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:46:19.492391 kubelet[2299]: I0307 01:46:19.492374 2299 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:46:19.492731 kubelet[2299]: I0307 01:46:19.492707 2299 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 01:46:19.506790 kubelet[2299]: I0307 01:46:19.506591 2299 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:46:19.512395 kubelet[2299]: E0307 01:46:19.512350 2299 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.57.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.57.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:46:19.518680 kubelet[2299]: E0307 01:46:19.517319 2299 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:46:19.518680 kubelet[2299]: I0307 01:46:19.517411 2299 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:46:19.525868 kubelet[2299]: I0307 01:46:19.525818 2299 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:46:19.527378 kubelet[2299]: I0307 01:46:19.527313 2299 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:46:19.527645 kubelet[2299]: I0307 01:46:19.527365 2299 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-wuc9t.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:46:19.528015 kubelet[2299]: I0307 01:46:19.527691 2299 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 01:46:19.528015 kubelet[2299]: I0307 01:46:19.527712 2299 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 01:46:19.528015 kubelet[2299]: I0307 01:46:19.527893 2299 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:46:19.531011 kubelet[2299]: I0307 01:46:19.530973 2299 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 01:46:19.531498 kubelet[2299]: I0307 01:46:19.531467 2299 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 01:46:19.531498 kubelet[2299]: I0307 01:46:19.531504 2299 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:46:19.531645 kubelet[2299]: I0307 01:46:19.531589 2299 kubelet.go:394] "Adding apiserver pod source"
Mar 7 01:46:19.531645 kubelet[2299]: I0307 01:46:19.531619 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:46:19.535688 kubelet[2299]: I0307 01:46:19.535072 2299 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:46:19.537979 kubelet[2299]: I0307 01:46:19.537944 2299 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:46:19.538068 kubelet[2299]: I0307 01:46:19.537993 2299 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:46:19.538145 kubelet[2299]: W0307 01:46:19.538121 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:46:19.546069 kubelet[2299]: I0307 01:46:19.546034 2299 server.go:1257] "Started kubelet"
Mar 7 01:46:19.546471 kubelet[2299]: I0307 01:46:19.546418 2299 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:46:19.548911 kubelet[2299]: I0307 01:46:19.548873 2299 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:46:19.552869 kubelet[2299]: I0307 01:46:19.552774 2299 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:46:19.552967 kubelet[2299]: I0307 01:46:19.552907 2299 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:46:19.553616 kubelet[2299]: I0307 01:46:19.553575 2299 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:46:19.555262 kubelet[2299]: E0307 01:46:19.553830 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.57.158:6443/api/v1/namespaces/default/events\": dial tcp 10.230.57.158:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-wuc9t.gb1.brightbox.com.189a6bd82d9b7bb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-wuc9t.gb1.brightbox.com,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-wuc9t.gb1.brightbox.com,},FirstTimestamp:2026-03-07 01:46:19.545983929 +0000 UTC m=+0.919006874,LastTimestamp:2026-03-07 01:46:19.545983929 +0000 UTC m=+0.919006874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-wuc9t.gb1.brightbox.com,}"
Mar 7 01:46:19.562756 kubelet[2299]: I0307 01:46:19.562723 2299 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 01:46:19.565689 kubelet[2299]: I0307 01:46:19.564449 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:46:19.570154 kubelet[2299]: E0307 01:46:19.570108 2299 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-wuc9t.gb1.brightbox.com\" not found"
Mar 7 01:46:19.570398 kubelet[2299]: I0307 01:46:19.570358 2299 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 01:46:19.571008 kubelet[2299]: E0307 01:46:19.570967 2299 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wuc9t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.158:6443: connect: connection refused" interval="200ms"
Mar 7 01:46:19.571138 kubelet[2299]: I0307 01:46:19.571117 2299 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:46:19.571931 kubelet[2299]: I0307 01:46:19.571904 2299 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:46:19.572901 kubelet[2299]: E0307 01:46:19.572865 2299 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:46:19.576939 kubelet[2299]: I0307 01:46:19.576899 2299 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:46:19.576939 kubelet[2299]: I0307 01:46:19.576926 2299 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:46:19.577088 kubelet[2299]: I0307 01:46:19.577032 2299 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:46:19.607141 kubelet[2299]: I0307 01:46:19.606303 2299 cpu_manager.go:225] "Starting" policy="none"
Mar 7 01:46:19.607141 kubelet[2299]: I0307 01:46:19.606330 2299 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 7 01:46:19.607141 kubelet[2299]: I0307 01:46:19.606371 2299 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 7 01:46:19.608407 kubelet[2299]: I0307 01:46:19.608217 2299 policy_none.go:50] "Start"
Mar 7 01:46:19.608407 kubelet[2299]: I0307 01:46:19.608263 2299 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:46:19.608407 kubelet[2299]: I0307 01:46:19.608312 2299 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:46:19.613814 kubelet[2299]: I0307 01:46:19.613546 2299 policy_none.go:44] "Start"
Mar 7 01:46:19.617360 kubelet[2299]: I0307 01:46:19.617152 2299 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:46:19.620027 kubelet[2299]: I0307 01:46:19.619506 2299 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:46:19.620027 kubelet[2299]: I0307 01:46:19.619557 2299 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 7 01:46:19.620027 kubelet[2299]: I0307 01:46:19.619618 2299 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 7 01:46:19.620027 kubelet[2299]: E0307 01:46:19.619721 2299 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:46:19.628284 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:46:19.644325 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:46:19.652050 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:46:19.663066 kubelet[2299]: E0307 01:46:19.661228 2299 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:46:19.663066 kubelet[2299]: I0307 01:46:19.661556 2299 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 7 01:46:19.663066 kubelet[2299]: I0307 01:46:19.661590 2299 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:46:19.663351 kubelet[2299]: E0307 01:46:19.663109 2299 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Mar 7 01:46:19.663351 kubelet[2299]: E0307 01:46:19.663183 2299 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-wuc9t.gb1.brightbox.com\" not found" Mar 7 01:46:19.663738 kubelet[2299]: I0307 01:46:19.663712 2299 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:46:19.740538 systemd[1]: Created slice kubepods-burstable-poda08dd48ba0f92c5d8c3eb563fa47ca4b.slice - libcontainer container kubepods-burstable-poda08dd48ba0f92c5d8c3eb563fa47ca4b.slice. Mar 7 01:46:19.757941 kubelet[2299]: E0307 01:46:19.756467 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com" Mar 7 01:46:19.765033 systemd[1]: Created slice kubepods-burstable-pod83ac83f2d67e94a8de578deafbea8d49.slice - libcontainer container kubepods-burstable-pod83ac83f2d67e94a8de578deafbea8d49.slice. 
Mar 7 01:46:19.768387 kubelet[2299]: I0307 01:46:19.767937 2299 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.768607 kubelet[2299]: E0307 01:46:19.768401 2299 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.57.158:6443/api/v1/nodes\": dial tcp 10.230.57.158:6443: connect: connection refused" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.770313 kubelet[2299]: E0307 01:46:19.770250 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.772742 kubelet[2299]: I0307 01:46:19.772537 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.772742 kubelet[2299]: I0307 01:46:19.772585 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-k8s-certs\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.772742 kubelet[2299]: I0307 01:46:19.772617 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.772742 kubelet[2299]: I0307 01:46:19.772694 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-k8s-certs\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.772742 kubelet[2299]: I0307 01:46:19.772728 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-ca-certs\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.773424 kubelet[2299]: I0307 01:46:19.772755 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-flexvolume-dir\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.773424 kubelet[2299]: I0307 01:46:19.772799 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-kubeconfig\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.773424 kubelet[2299]: I0307 01:46:19.772833 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3583c904e471d1d43ba94d6e4535b6e3-kubeconfig\") pod \"kube-scheduler-srv-wuc9t.gb1.brightbox.com\" (UID: \"3583c904e471d1d43ba94d6e4535b6e3\") " pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.773424 kubelet[2299]: I0307 01:46:19.772871 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-ca-certs\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.773424 kubelet[2299]: E0307 01:46:19.773181 2299 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wuc9t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.158:6443: connect: connection refused" interval="400ms"
Mar 7 01:46:19.774516 systemd[1]: Created slice kubepods-burstable-pod3583c904e471d1d43ba94d6e4535b6e3.slice - libcontainer container kubepods-burstable-pod3583c904e471d1d43ba94d6e4535b6e3.slice.
Mar 7 01:46:19.777360 kubelet[2299]: E0307 01:46:19.777051 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.971680 kubelet[2299]: I0307 01:46:19.971604 2299 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:19.972106 kubelet[2299]: E0307 01:46:19.972063 2299 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.57.158:6443/api/v1/nodes\": dial tcp 10.230.57.158:6443: connect: connection refused" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:20.066546 containerd[1510]: time="2026-03-07T01:46:20.066315644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-wuc9t.gb1.brightbox.com,Uid:a08dd48ba0f92c5d8c3eb563fa47ca4b,Namespace:kube-system,Attempt:0,}"
Mar 7 01:46:20.076154 containerd[1510]: time="2026-03-07T01:46:20.075497492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-wuc9t.gb1.brightbox.com,Uid:83ac83f2d67e94a8de578deafbea8d49,Namespace:kube-system,Attempt:0,}"
Mar 7 01:46:20.080701 containerd[1510]: time="2026-03-07T01:46:20.080505651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-wuc9t.gb1.brightbox.com,Uid:3583c904e471d1d43ba94d6e4535b6e3,Namespace:kube-system,Attempt:0,}"
Mar 7 01:46:20.174148 kubelet[2299]: E0307 01:46:20.174049 2299 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wuc9t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.158:6443: connect: connection refused" interval="800ms"
Mar 7 01:46:20.375744 kubelet[2299]: I0307 01:46:20.375639 2299 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:20.376201 kubelet[2299]: E0307 01:46:20.376159 2299 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.57.158:6443/api/v1/nodes\": dial tcp 10.230.57.158:6443: connect: connection refused" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:20.685341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668280012.mount: Deactivated successfully.
Mar 7 01:46:20.693691 containerd[1510]: time="2026-03-07T01:46:20.692507571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:46:20.695855 containerd[1510]: time="2026-03-07T01:46:20.695760686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:46:20.696973 containerd[1510]: time="2026-03-07T01:46:20.696647500Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:46:20.698059 containerd[1510]: time="2026-03-07T01:46:20.698000061Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:46:20.700439 containerd[1510]: time="2026-03-07T01:46:20.700273578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Mar 7 01:46:20.701583 containerd[1510]: time="2026-03-07T01:46:20.701524818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:46:20.701840 containerd[1510]: time="2026-03-07T01:46:20.701804588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:46:20.704451 containerd[1510]: time="2026-03-07T01:46:20.704364063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:46:20.709395 containerd[1510]: time="2026-03-07T01:46:20.708876845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.280289ms"
Mar 7 01:46:20.712686 containerd[1510]: time="2026-03-07T01:46:20.712430702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.821578ms"
Mar 7 01:46:20.716097 containerd[1510]: time="2026-03-07T01:46:20.716041378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.455375ms"
Mar 7 01:46:20.977327 kubelet[2299]: E0307 01:46:20.976585 2299 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.57.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wuc9t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.57.158:6443: connect: connection refused" interval="1.6s"
Mar 7 01:46:21.032025 containerd[1510]: time="2026-03-07T01:46:21.031621585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:46:21.032025 containerd[1510]: time="2026-03-07T01:46:21.031767725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:46:21.032025 containerd[1510]: time="2026-03-07T01:46:21.031795277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.032025 containerd[1510]: time="2026-03-07T01:46:21.031946882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.061737 containerd[1510]: time="2026-03-07T01:46:21.058960278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:46:21.061737 containerd[1510]: time="2026-03-07T01:46:21.059126335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:46:21.061737 containerd[1510]: time="2026-03-07T01:46:21.059149379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.061737 containerd[1510]: time="2026-03-07T01:46:21.059794037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.073027 containerd[1510]: time="2026-03-07T01:46:21.072467656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:46:21.073027 containerd[1510]: time="2026-03-07T01:46:21.072643615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:46:21.073027 containerd[1510]: time="2026-03-07T01:46:21.072682759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.073027 containerd[1510]: time="2026-03-07T01:46:21.072849510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:21.090976 systemd[1]: Started cri-containerd-d11303bad4223097a1f61031df14e60a2fc8e891b58eacd588d952620ef37521.scope - libcontainer container d11303bad4223097a1f61031df14e60a2fc8e891b58eacd588d952620ef37521.
Mar 7 01:46:21.125935 systemd[1]: Started cri-containerd-07b199e07210ab959fc2d115ba2fad205d1bbf53a3e48d6ded6a0f8162d3342d.scope - libcontainer container 07b199e07210ab959fc2d115ba2fad205d1bbf53a3e48d6ded6a0f8162d3342d.
Mar 7 01:46:21.147051 systemd[1]: Started cri-containerd-84a825c4b2ea5fd9bc6d7c3c3703cde2540267fd5451fdbd191b336d457084f3.scope - libcontainer container 84a825c4b2ea5fd9bc6d7c3c3703cde2540267fd5451fdbd191b336d457084f3.
Mar 7 01:46:21.182879 kubelet[2299]: I0307 01:46:21.182263 2299 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:21.182879 kubelet[2299]: E0307 01:46:21.182829 2299 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.57.158:6443/api/v1/nodes\": dial tcp 10.230.57.158:6443: connect: connection refused" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:21.250256 containerd[1510]: time="2026-03-07T01:46:21.248633719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-wuc9t.gb1.brightbox.com,Uid:83ac83f2d67e94a8de578deafbea8d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"d11303bad4223097a1f61031df14e60a2fc8e891b58eacd588d952620ef37521\""
Mar 7 01:46:21.266695 containerd[1510]: time="2026-03-07T01:46:21.266132764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-wuc9t.gb1.brightbox.com,Uid:a08dd48ba0f92c5d8c3eb563fa47ca4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b199e07210ab959fc2d115ba2fad205d1bbf53a3e48d6ded6a0f8162d3342d\""
Mar 7 01:46:21.298693 containerd[1510]: time="2026-03-07T01:46:21.298059583Z" level=info msg="CreateContainer within sandbox \"d11303bad4223097a1f61031df14e60a2fc8e891b58eacd588d952620ef37521\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:46:21.302064 containerd[1510]: time="2026-03-07T01:46:21.301748826Z" level=info msg="CreateContainer within sandbox \"07b199e07210ab959fc2d115ba2fad205d1bbf53a3e48d6ded6a0f8162d3342d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:46:21.319626 containerd[1510]: time="2026-03-07T01:46:21.319410544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-wuc9t.gb1.brightbox.com,Uid:3583c904e471d1d43ba94d6e4535b6e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a825c4b2ea5fd9bc6d7c3c3703cde2540267fd5451fdbd191b336d457084f3\""
Mar 7 01:46:21.332147 containerd[1510]: time="2026-03-07T01:46:21.330815156Z" level=info msg="CreateContainer within sandbox \"84a825c4b2ea5fd9bc6d7c3c3703cde2540267fd5451fdbd191b336d457084f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:46:21.354691 containerd[1510]: time="2026-03-07T01:46:21.352954009Z" level=info msg="CreateContainer within sandbox \"d11303bad4223097a1f61031df14e60a2fc8e891b58eacd588d952620ef37521\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de518b92638c84de49698e3a31945068a17a90fc24541fe0be646ddaecbeac6e\""
Mar 7 01:46:21.355005 containerd[1510]: time="2026-03-07T01:46:21.354957939Z" level=info msg="StartContainer for \"de518b92638c84de49698e3a31945068a17a90fc24541fe0be646ddaecbeac6e\""
Mar 7 01:46:21.358873 containerd[1510]: time="2026-03-07T01:46:21.358824880Z" level=info msg="CreateContainer within sandbox \"07b199e07210ab959fc2d115ba2fad205d1bbf53a3e48d6ded6a0f8162d3342d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67485f4f6868452ea1fc2d69145c691cb9453cce3a40ce16208cd020c2d553e8\""
Mar 7 01:46:21.360687 containerd[1510]: time="2026-03-07T01:46:21.359931333Z" level=info msg="StartContainer for \"67485f4f6868452ea1fc2d69145c691cb9453cce3a40ce16208cd020c2d553e8\""
Mar 7 01:46:21.370529 containerd[1510]: time="2026-03-07T01:46:21.370441453Z" level=info msg="CreateContainer within sandbox \"84a825c4b2ea5fd9bc6d7c3c3703cde2540267fd5451fdbd191b336d457084f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c8fb7f117f61d61976512729471ad149f1951a7f1c96c7772695d5c5482ded4\""
Mar 7 01:46:21.389103 containerd[1510]: time="2026-03-07T01:46:21.371802629Z" level=info msg="StartContainer for \"5c8fb7f117f61d61976512729471ad149f1951a7f1c96c7772695d5c5482ded4\""
Mar 7 01:46:21.466933 systemd[1]: Started cri-containerd-5c8fb7f117f61d61976512729471ad149f1951a7f1c96c7772695d5c5482ded4.scope - libcontainer container 5c8fb7f117f61d61976512729471ad149f1951a7f1c96c7772695d5c5482ded4.
Mar 7 01:46:21.481925 systemd[1]: Started cri-containerd-de518b92638c84de49698e3a31945068a17a90fc24541fe0be646ddaecbeac6e.scope - libcontainer container de518b92638c84de49698e3a31945068a17a90fc24541fe0be646ddaecbeac6e.
Mar 7 01:46:21.511927 systemd[1]: Started cri-containerd-67485f4f6868452ea1fc2d69145c691cb9453cce3a40ce16208cd020c2d553e8.scope - libcontainer container 67485f4f6868452ea1fc2d69145c691cb9453cce3a40ce16208cd020c2d553e8.
Mar 7 01:46:21.640897 containerd[1510]: time="2026-03-07T01:46:21.639603232Z" level=info msg="StartContainer for \"5c8fb7f117f61d61976512729471ad149f1951a7f1c96c7772695d5c5482ded4\" returns successfully"
Mar 7 01:46:21.640897 containerd[1510]: time="2026-03-07T01:46:21.639911979Z" level=info msg="StartContainer for \"de518b92638c84de49698e3a31945068a17a90fc24541fe0be646ddaecbeac6e\" returns successfully"
Mar 7 01:46:21.657690 containerd[1510]: time="2026-03-07T01:46:21.656889435Z" level=info msg="StartContainer for \"67485f4f6868452ea1fc2d69145c691cb9453cce3a40ce16208cd020c2d553e8\" returns successfully"
Mar 7 01:46:21.659056 kubelet[2299]: E0307 01:46:21.658846 2299 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.57.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.57.158:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:46:21.676802 kubelet[2299]: E0307 01:46:21.676671 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:22.678382 kubelet[2299]: E0307 01:46:22.677264 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:22.678382 kubelet[2299]: E0307 01:46:22.678061 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:22.791221 kubelet[2299]: I0307 01:46:22.791157 2299 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:23.687220 kubelet[2299]: E0307 01:46:23.687100 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:23.689324 kubelet[2299]: E0307 01:46:23.689274 2299 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.367142 kubelet[2299]: E0307 01:46:24.367047 2299 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-wuc9t.gb1.brightbox.com\" not found" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.429171 kubelet[2299]: I0307 01:46:24.428779 2299 kubelet_node_status.go:77] "Successfully registered node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.429171 kubelet[2299]: E0307 01:46:24.428858 2299 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"srv-wuc9t.gb1.brightbox.com\": node \"srv-wuc9t.gb1.brightbox.com\" not found"
Mar 7 01:46:24.471972 kubelet[2299]: I0307 01:46:24.471506 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.488259 kubelet[2299]: E0307 01:46:24.488185 2299 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.488259 kubelet[2299]: I0307 01:46:24.488248 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.492489 kubelet[2299]: E0307 01:46:24.492448 2299 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.492489 kubelet[2299]: I0307 01:46:24.492487 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.495329 kubelet[2299]: E0307 01:46:24.495239 2299 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-wuc9t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.537613 kubelet[2299]: I0307 01:46:24.537009 2299 apiserver.go:52] "Watching apiserver"
Mar 7 01:46:24.572722 kubelet[2299]: I0307 01:46:24.572601 2299 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:46:24.685143 kubelet[2299]: I0307 01:46:24.683446 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.685143 kubelet[2299]: I0307 01:46:24.683700 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.691631 kubelet[2299]: E0307 01:46:24.691485 2299 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:24.698374 kubelet[2299]: E0307 01:46:24.698083 2299 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-wuc9t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:26.631503 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-11.scope)...
Mar 7 01:46:26.631573 systemd[1]: Reloading...
Mar 7 01:46:26.778905 zram_generator::config[2628]: No configuration found.
Mar 7 01:46:26.988732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:46:27.003108 kubelet[2299]: I0307 01:46:27.002197 2299 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:27.019438 kubelet[2299]: I0307 01:46:27.018802 2299 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:27.134969 systemd[1]: Reloading finished in 502 ms.
Mar 7 01:46:27.203053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:46:27.230494 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:46:27.231787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:27.231928 systemd[1]: kubelet.service: Consumed 1.744s CPU time, 123.7M memory peak, 0B memory swap peak.
Mar 7 01:46:27.243902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:46:27.652753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:46:27.670540 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:46:27.791922 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:46:27.810019 kubelet[2692]: I0307 01:46:27.809819 2692 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 01:46:27.810019 kubelet[2692]: I0307 01:46:27.809903 2692 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:46:27.812683 kubelet[2692]: I0307 01:46:27.811790 2692 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:46:27.812683 kubelet[2692]: I0307 01:46:27.811815 2692 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:46:27.815134 kubelet[2692]: I0307 01:46:27.815105 2692 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 01:46:27.819293 kubelet[2692]: I0307 01:46:27.819256 2692 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:46:27.827786 kubelet[2692]: I0307 01:46:27.826645 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:46:27.843133 kubelet[2692]: E0307 01:46:27.843073 2692 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:46:27.843558 kubelet[2692]: I0307 01:46:27.843535 2692 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:46:27.855304 kubelet[2692]: I0307 01:46:27.855072 2692 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:46:27.856683 kubelet[2692]: I0307 01:46:27.855935 2692 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:46:27.856683 kubelet[2692]: I0307 01:46:27.855992 2692 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-wuc9t.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:46:27.856683 kubelet[2692]: I0307 01:46:27.856287 2692 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 01:46:27.856683 kubelet[2692]: I0307 01:46:27.856306 2692 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 01:46:27.857159 kubelet[2692]: I0307 01:46:27.856356 2692 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:46:27.857469 kubelet[2692]: I0307 01:46:27.857446 2692 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 01:46:27.858025 kubelet[2692]: I0307 01:46:27.857996 2692 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 01:46:27.860690 kubelet[2692]: I0307 01:46:27.859424 2692 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:46:27.860690 kubelet[2692]: I0307 01:46:27.859489 2692 kubelet.go:394] "Adding apiserver pod source"
Mar 7 01:46:27.860690 kubelet[2692]: I0307 01:46:27.859508 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:46:27.864956 kubelet[2692]: I0307 01:46:27.864895 2692 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:46:27.868859 kubelet[2692]: I0307 01:46:27.868831 2692 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:46:27.869029 kubelet[2692]: I0307 01:46:27.869009 2692 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:46:27.895726 kubelet[2692]: I0307 01:46:27.895693 2692 server.go:1257] "Started kubelet"
Mar 7 01:46:27.906724 kubelet[2692]: I0307 01:46:27.905170 2692 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:46:27.911486 kubelet[2692]: I0307 01:46:27.911406 2692 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:46:27.917763 kubelet[2692]: I0307 01:46:27.917708 2692 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:46:27.925322 kubelet[2692]: I0307 01:46:27.918308 2692 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 01:46:27.931102 kubelet[2692]: I0307 01:46:27.919312 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:46:27.931443 kubelet[2692]: I0307 01:46:27.931327 2692 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 01:46:27.935127 kubelet[2692]: I0307 01:46:27.922077 2692 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:46:27.935864 kubelet[2692]: I0307 01:46:27.935824 2692 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:46:27.936372 kubelet[2692]: I0307 01:46:27.936349 2692 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:46:27.941020 kubelet[2692]: I0307 01:46:27.940992 2692 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:46:27.965781 kubelet[2692]: I0307 01:46:27.964459 2692 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:46:27.973111 kubelet[2692]: I0307 01:46:27.973079 2692 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:46:27.973433 kubelet[2692]: I0307 01:46:27.973271 2692 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:46:27.999172 kubelet[2692]: E0307 01:46:27.999100 2692 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:46:28.017155 kubelet[2692]: I0307 01:46:28.016999 2692 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:46:28.022715 kubelet[2692]: I0307 01:46:28.021765 2692 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:46:28.023188 kubelet[2692]: I0307 01:46:28.023167 2692 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 7 01:46:28.025693 kubelet[2692]: I0307 01:46:28.023733 2692 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 7 01:46:28.025693 kubelet[2692]: E0307 01:46:28.023859 2692 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:46:28.117592 kubelet[2692]: I0307 01:46:28.117555 2692 cpu_manager.go:225] "Starting" policy="none"
Mar 7 01:46:28.119001 kubelet[2692]: I0307 01:46:28.118954 2692 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 7 01:46:28.119093 kubelet[2692]: I0307 01:46:28.119008 2692 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 7 01:46:28.119272 kubelet[2692]: I0307 01:46:28.119231 2692 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 7 01:46:28.119368 kubelet[2692]: I0307 01:46:28.119296 2692 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 7 01:46:28.119368 kubelet[2692]: I0307 01:46:28.119350 2692 policy_none.go:50] "Start"
Mar 7 01:46:28.119486 kubelet[2692]: I0307 01:46:28.119382 2692 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:46:28.119486 kubelet[2692]: I0307 01:46:28.119418 2692 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:46:28.122739 kubelet[2692]: I0307 01:46:28.122701 2692 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 7 01:46:28.122846 kubelet[2692]: I0307 01:46:28.122761 2692 policy_none.go:44] "Start"
Mar 7 01:46:28.124067 kubelet[2692]: E0307 01:46:28.124029 2692 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:46:28.136763 kubelet[2692]: E0307 01:46:28.135575 2692 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:46:28.136763 kubelet[2692]: I0307 01:46:28.136001 2692 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 7 01:46:28.136763 kubelet[2692]: I0307 01:46:28.136042 2692 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:46:28.136763 kubelet[2692]: I0307 01:46:28.136601 2692 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 7 01:46:28.143490 kubelet[2692]: E0307 01:46:28.142920 2692 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:46:28.266902 kubelet[2692]: I0307 01:46:28.266545 2692 kubelet_node_status.go:74] "Attempting to register node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.284430 kubelet[2692]: I0307 01:46:28.284376 2692 kubelet_node_status.go:123] "Node was previously registered" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.284986 kubelet[2692]: I0307 01:46:28.284785 2692 kubelet_node_status.go:77] "Successfully registered node" node="srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.328973 kubelet[2692]: I0307 01:46:28.328820 2692 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.330456 kubelet[2692]: I0307 01:46:28.330303 2692 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.334618 kubelet[2692]: I0307 01:46:28.334423 2692 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.341140 kubelet[2692]: I0307 01:46:28.340678 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-ca-certs\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.341140 kubelet[2692]: I0307 01:46:28.340731 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-k8s-certs\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.341140 kubelet[2692]: I0307 01:46:28.340770 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a08dd48ba0f92c5d8c3eb563fa47ca4b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" (UID: \"a08dd48ba0f92c5d8c3eb563fa47ca4b\") " pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.341140 kubelet[2692]: I0307 01:46:28.340808 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.341140 kubelet[2692]: I0307 01:46:28.340868 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-ca-certs\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.342849 kubelet[2692]: I0307 01:46:28.340896 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-flexvolume-dir\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.342849 kubelet[2692]: I0307 01:46:28.340952 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-k8s-certs\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.342849 kubelet[2692]: I0307 01:46:28.340982 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83ac83f2d67e94a8de578deafbea8d49-kubeconfig\") pod \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" (UID: \"83ac83f2d67e94a8de578deafbea8d49\") " pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.342849 kubelet[2692]: I0307 01:46:28.341043 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3583c904e471d1d43ba94d6e4535b6e3-kubeconfig\") pod \"kube-scheduler-srv-wuc9t.gb1.brightbox.com\" (UID: \"3583c904e471d1d43ba94d6e4535b6e3\") " pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.350726 kubelet[2692]: I0307 01:46:28.350005 2692 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:28.350726 kubelet[2692]: E0307 01:46:28.350109 2692 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-wuc9t.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:28.350726 kubelet[2692]: I0307 01:46:28.350252 2692 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:28.350726 kubelet[2692]: I0307 01:46:28.350597 2692 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:28.862533 kubelet[2692]: I0307 01:46:28.862422 2692 apiserver.go:52] "Watching apiserver"
Mar 7 01:46:28.936557 kubelet[2692]: I0307 01:46:28.936499 2692 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:46:29.069073 kubelet[2692]: I0307 01:46:29.069033 2692 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:29.070057 kubelet[2692]: I0307 01:46:29.070033 2692 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:29.088586 kubelet[2692]: I0307 01:46:29.088519 2692 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:29.088910 kubelet[2692]: E0307 01:46:29.088607 2692 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-wuc9t.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:29.091765 kubelet[2692]: I0307 01:46:29.091714 2692 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:46:29.091879 kubelet[2692]: E0307 01:46:29.091781 2692 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-wuc9t.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com"
Mar 7 01:46:30.082860 kubelet[2692]: I0307 01:46:30.081623 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-wuc9t.gb1.brightbox.com" podStartSLOduration=2.081565745 podStartE2EDuration="2.081565745s" podCreationTimestamp="2026-03-07 01:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:46:30.081443041 +0000 UTC m=+2.394880359" watchObservedRunningTime="2026-03-07 01:46:30.081565745 +0000 UTC m=+2.395003066"
Mar 7 01:46:30.089145 kubelet[2692]: I0307 01:46:30.089048 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-wuc9t.gb1.brightbox.com" podStartSLOduration=2.089027637 podStartE2EDuration="2.089027637s" podCreationTimestamp="2026-03-07 01:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:46:30.064441314 +0000 UTC m=+2.377878636" watchObservedRunningTime="2026-03-07 01:46:30.089027637 +0000 UTC m=+2.402464964"
Mar 7 01:46:30.127723 kubelet[2692]: I0307 01:46:30.127276 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-wuc9t.gb1.brightbox.com" podStartSLOduration=3.127224612 podStartE2EDuration="3.127224612s" podCreationTimestamp="2026-03-07 01:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:46:30.104610115 +0000 UTC m=+2.418047452" watchObservedRunningTime="2026-03-07 01:46:30.127224612 +0000 UTC m=+2.440661937"
Mar 7 01:46:32.210013 kubelet[2692]: I0307 01:46:32.209959 2692 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:46:32.211136 containerd[1510]: time="2026-03-07T01:46:32.210992237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:46:32.213779 kubelet[2692]: I0307 01:46:32.211383 2692 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:46:32.965482 systemd[1]: Created slice kubepods-besteffort-pod902fb01a_a870_42f8_9f0f_88d7d3c9fd03.slice - libcontainer container kubepods-besteffort-pod902fb01a_a870_42f8_9f0f_88d7d3c9fd03.slice.
Mar 7 01:46:32.971534 kubelet[2692]: I0307 01:46:32.971067 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/902fb01a-a870-42f8-9f0f-88d7d3c9fd03-lib-modules\") pod \"kube-proxy-sj5jh\" (UID: \"902fb01a-a870-42f8-9f0f-88d7d3c9fd03\") " pod="kube-system/kube-proxy-sj5jh"
Mar 7 01:46:32.971534 kubelet[2692]: I0307 01:46:32.971139 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-859sz\" (UniqueName: \"kubernetes.io/projected/902fb01a-a870-42f8-9f0f-88d7d3c9fd03-kube-api-access-859sz\") pod \"kube-proxy-sj5jh\" (UID: \"902fb01a-a870-42f8-9f0f-88d7d3c9fd03\") " pod="kube-system/kube-proxy-sj5jh"
Mar 7 01:46:32.971534 kubelet[2692]: I0307 01:46:32.971194 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/902fb01a-a870-42f8-9f0f-88d7d3c9fd03-kube-proxy\") pod \"kube-proxy-sj5jh\" (UID: \"902fb01a-a870-42f8-9f0f-88d7d3c9fd03\") " pod="kube-system/kube-proxy-sj5jh"
Mar 7 01:46:32.971534 kubelet[2692]: I0307 01:46:32.971251 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/902fb01a-a870-42f8-9f0f-88d7d3c9fd03-xtables-lock\") pod \"kube-proxy-sj5jh\" (UID: \"902fb01a-a870-42f8-9f0f-88d7d3c9fd03\") " pod="kube-system/kube-proxy-sj5jh"
Mar 7 01:46:33.284781 containerd[1510]: time="2026-03-07T01:46:33.282641848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj5jh,Uid:902fb01a-a870-42f8-9f0f-88d7d3c9fd03,Namespace:kube-system,Attempt:0,}"
Mar 7 01:46:33.377246 containerd[1510]: time="2026-03-07T01:46:33.376129584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:46:33.377246 containerd[1510]: time="2026-03-07T01:46:33.376318307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:46:33.377246 containerd[1510]: time="2026-03-07T01:46:33.376355576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:33.377246 containerd[1510]: time="2026-03-07T01:46:33.376605306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:33.439751 systemd[1]: Started cri-containerd-df19bbc50ead9567503c21494dac63625cc93c66b353ca4b1e489a2738c6aaca.scope - libcontainer container df19bbc50ead9567503c21494dac63625cc93c66b353ca4b1e489a2738c6aaca.
Mar 7 01:46:33.517049 systemd[1]: Created slice kubepods-besteffort-podf65fe23e_f44e_45af_841d_7259631cff44.slice - libcontainer container kubepods-besteffort-podf65fe23e_f44e_45af_841d_7259631cff44.slice.
Mar 7 01:46:33.544476 containerd[1510]: time="2026-03-07T01:46:33.544294616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj5jh,Uid:902fb01a-a870-42f8-9f0f-88d7d3c9fd03,Namespace:kube-system,Attempt:0,} returns sandbox id \"df19bbc50ead9567503c21494dac63625cc93c66b353ca4b1e489a2738c6aaca\""
Mar 7 01:46:33.557737 containerd[1510]: time="2026-03-07T01:46:33.557499318Z" level=info msg="CreateContainer within sandbox \"df19bbc50ead9567503c21494dac63625cc93c66b353ca4b1e489a2738c6aaca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:46:33.574922 kubelet[2692]: I0307 01:46:33.574853 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f65fe23e-f44e-45af-841d-7259631cff44-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-k72cd\" (UID: \"f65fe23e-f44e-45af-841d-7259631cff44\") " pod="tigera-operator/tigera-operator-6cf4cccc57-k72cd"
Mar 7 01:46:33.575655 kubelet[2692]: I0307 01:46:33.575576 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4xz7\" (UniqueName: \"kubernetes.io/projected/f65fe23e-f44e-45af-841d-7259631cff44-kube-api-access-f4xz7\") pod \"tigera-operator-6cf4cccc57-k72cd\" (UID: \"f65fe23e-f44e-45af-841d-7259631cff44\") " pod="tigera-operator/tigera-operator-6cf4cccc57-k72cd"
Mar 7 01:46:33.603212 containerd[1510]: time="2026-03-07T01:46:33.602997743Z" level=info msg="CreateContainer within sandbox \"df19bbc50ead9567503c21494dac63625cc93c66b353ca4b1e489a2738c6aaca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff1d5d83f404597140a24fd280d03b2e099b726e5777e1afff3bba1a757521fa\""
Mar 7 01:46:33.605397 containerd[1510]: time="2026-03-07T01:46:33.605347561Z" level=info msg="StartContainer for \"ff1d5d83f404597140a24fd280d03b2e099b726e5777e1afff3bba1a757521fa\""
Mar 7 01:46:33.658003 systemd[1]: Started cri-containerd-ff1d5d83f404597140a24fd280d03b2e099b726e5777e1afff3bba1a757521fa.scope - libcontainer container ff1d5d83f404597140a24fd280d03b2e099b726e5777e1afff3bba1a757521fa.
Mar 7 01:46:33.737971 containerd[1510]: time="2026-03-07T01:46:33.737751096Z" level=info msg="StartContainer for \"ff1d5d83f404597140a24fd280d03b2e099b726e5777e1afff3bba1a757521fa\" returns successfully"
Mar 7 01:46:33.838401 containerd[1510]: time="2026-03-07T01:46:33.837844574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-k72cd,Uid:f65fe23e-f44e-45af-841d-7259631cff44,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:46:33.895727 containerd[1510]: time="2026-03-07T01:46:33.895527891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:46:33.895727 containerd[1510]: time="2026-03-07T01:46:33.895634913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:46:33.895727 containerd[1510]: time="2026-03-07T01:46:33.895690977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:33.896548 containerd[1510]: time="2026-03-07T01:46:33.896261560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:46:33.930889 systemd[1]: Started cri-containerd-8e0f3ee2c441b472e72469c9c751ab7d2b1839041b37b1635aebd35aff45daaa.scope - libcontainer container 8e0f3ee2c441b472e72469c9c751ab7d2b1839041b37b1635aebd35aff45daaa.
Mar 7 01:46:34.009571 containerd[1510]: time="2026-03-07T01:46:34.009466044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-k72cd,Uid:f65fe23e-f44e-45af-841d-7259631cff44,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e0f3ee2c441b472e72469c9c751ab7d2b1839041b37b1635aebd35aff45daaa\""
Mar 7 01:46:34.015339 containerd[1510]: time="2026-03-07T01:46:34.015292936Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:46:37.136705 kubelet[2692]: I0307 01:46:37.134977 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-sj5jh" podStartSLOduration=5.134947599 podStartE2EDuration="5.134947599s" podCreationTimestamp="2026-03-07 01:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:46:34.14338697 +0000 UTC m=+6.456824298" watchObservedRunningTime="2026-03-07 01:46:37.134947599 +0000 UTC m=+9.448384982"
Mar 7 01:46:37.151577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448955413.mount: Deactivated successfully.
Mar 7 01:46:39.734461 containerd[1510]: time="2026-03-07T01:46:39.734283287Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:39.736674 containerd[1510]: time="2026-03-07T01:46:39.736310927Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:46:39.739327 containerd[1510]: time="2026-03-07T01:46:39.739235180Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:39.744399 containerd[1510]: time="2026-03-07T01:46:39.744200423Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:46:39.747682 containerd[1510]: time="2026-03-07T01:46:39.747205942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.731838795s"
Mar 7 01:46:39.747682 containerd[1510]: time="2026-03-07T01:46:39.747349404Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:46:39.760059 containerd[1510]: time="2026-03-07T01:46:39.759999611Z" level=info msg="CreateContainer within sandbox \"8e0f3ee2c441b472e72469c9c751ab7d2b1839041b37b1635aebd35aff45daaa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:46:39.783753 containerd[1510]: time="2026-03-07T01:46:39.783698715Z" level=info msg="CreateContainer within sandbox \"8e0f3ee2c441b472e72469c9c751ab7d2b1839041b37b1635aebd35aff45daaa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5318d1a0c76930699756ad6cc8e86a7a77ba08107dda4eb606a15db5c1ae6e85\""
Mar 7 01:46:39.786584 containerd[1510]: time="2026-03-07T01:46:39.784730024Z" level=info msg="StartContainer for \"5318d1a0c76930699756ad6cc8e86a7a77ba08107dda4eb606a15db5c1ae6e85\""
Mar 7 01:46:39.880005 systemd[1]: Started cri-containerd-5318d1a0c76930699756ad6cc8e86a7a77ba08107dda4eb606a15db5c1ae6e85.scope - libcontainer container 5318d1a0c76930699756ad6cc8e86a7a77ba08107dda4eb606a15db5c1ae6e85.
Mar 7 01:46:39.943903 containerd[1510]: time="2026-03-07T01:46:39.943640683Z" level=info msg="StartContainer for \"5318d1a0c76930699756ad6cc8e86a7a77ba08107dda4eb606a15db5c1ae6e85\" returns successfully"
Mar 7 01:46:40.125930 kubelet[2692]: I0307 01:46:40.125718 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-k72cd" podStartSLOduration=1.391139706 podStartE2EDuration="7.125605791s" podCreationTimestamp="2026-03-07 01:46:33 +0000 UTC" firstStartedPulling="2026-03-07 01:46:34.014305412 +0000 UTC m=+6.327742737" lastFinishedPulling="2026-03-07 01:46:39.748771506 +0000 UTC m=+12.062208822" observedRunningTime="2026-03-07 01:46:40.124198851 +0000 UTC m=+12.437636176" watchObservedRunningTime="2026-03-07 01:46:40.125605791 +0000 UTC m=+12.439043119"
Mar 7 01:46:48.422235 sudo[1767]: pam_unix(sudo:session): session closed for user root
Mar 7 01:46:48.516184 sshd[1764]: pam_unix(sshd:session): session closed for user core
Mar 7 01:46:48.529552 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:46:48.530011 systemd[1]: sshd@9-10.230.57.158:22-4.153.228.146:39336.service: Deactivated successfully.
Mar 7 01:46:48.540051 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:46:48.540854 systemd[1]: session-11.scope: Consumed 6.088s CPU time, 156.9M memory peak, 0B memory swap peak.
Mar 7 01:46:48.548756 systemd-logind[1487]: Removed session 11.
Mar 7 01:46:54.114021 systemd[1]: Created slice kubepods-besteffort-pod4a3dbf6a_0eed_4dd7_a5e2_861febd23bd9.slice - libcontainer container kubepods-besteffort-pod4a3dbf6a_0eed_4dd7_a5e2_861febd23bd9.slice.
Mar 7 01:46:54.143220 kubelet[2692]: I0307 01:46:54.143087 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9-typha-certs\") pod \"calico-typha-6c4c6c4fdb-ghkpq\" (UID: \"4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9\") " pod="calico-system/calico-typha-6c4c6c4fdb-ghkpq"
Mar 7 01:46:54.143220 kubelet[2692]: I0307 01:46:54.143210 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlzd\" (UniqueName: \"kubernetes.io/projected/4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9-kube-api-access-4zlzd\") pod \"calico-typha-6c4c6c4fdb-ghkpq\" (UID: \"4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9\") " pod="calico-system/calico-typha-6c4c6c4fdb-ghkpq"
Mar 7 01:46:54.144406 kubelet[2692]: I0307 01:46:54.143302 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9-tigera-ca-bundle\") pod \"calico-typha-6c4c6c4fdb-ghkpq\" (UID: \"4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9\") " pod="calico-system/calico-typha-6c4c6c4fdb-ghkpq"
Mar 7 01:46:54.293566 systemd[1]: Created slice kubepods-besteffort-poddf98ecd1_6a67_4f36_8877_1fd0255b1411.slice - libcontainer container kubepods-besteffort-poddf98ecd1_6a67_4f36_8877_1fd0255b1411.slice.
Mar 7 01:46:54.345535 kubelet[2692]: I0307 01:46:54.345404 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-lib-modules\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346590 kubelet[2692]: I0307 01:46:54.345563 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-var-lib-calico\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346590 kubelet[2692]: I0307 01:46:54.345643 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-policysync\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346590 kubelet[2692]: I0307 01:46:54.345768 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-xtables-lock\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346590 kubelet[2692]: I0307 01:46:54.345805 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-bpffs\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346590 kubelet[2692]: I0307 01:46:54.345851 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-flexvol-driver-host\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346940 kubelet[2692]: I0307 01:46:54.345916 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df98ecd1-6a67-4f36-8877-1fd0255b1411-node-certs\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346940 kubelet[2692]: I0307 01:46:54.346036 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-sys-fs\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346940 kubelet[2692]: I0307 01:46:54.346104 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcd2\" (UniqueName: \"kubernetes.io/projected/df98ecd1-6a67-4f36-8877-1fd0255b1411-kube-api-access-bpcd2\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346940 kubelet[2692]: I0307 01:46:54.346165 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-cni-bin-dir\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.346940 kubelet[2692]: I0307 01:46:54.346221 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-cni-log-dir\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.347215 kubelet[2692]: I0307 01:46:54.346254 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-var-run-calico\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.347215 kubelet[2692]: I0307 01:46:54.346287 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-nodeproc\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.347215 kubelet[2692]: I0307 01:46:54.346373 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df98ecd1-6a67-4f36-8877-1fd0255b1411-cni-net-dir\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.347215 kubelet[2692]: I0307 01:46:54.346433 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df98ecd1-6a67-4f36-8877-1fd0255b1411-tigera-ca-bundle\") pod \"calico-node-2g5n5\" (UID: \"df98ecd1-6a67-4f36-8877-1fd0255b1411\") " pod="calico-system/calico-node-2g5n5" Mar 7 01:46:54.382813 kubelet[2692]: E0307 01:46:54.382575 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68" Mar 7 01:46:54.431405 containerd[1510]: time="2026-03-07T01:46:54.430800687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4c6c4fdb-ghkpq,Uid:4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9,Namespace:calico-system,Attempt:0,}" Mar 7 01:46:54.450761 kubelet[2692]: I0307 01:46:54.447972 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbd29171-97d8-4573-957e-b074ea425f68-kubelet-dir\") pod \"csi-node-driver-7n5xl\" (UID: \"bbd29171-97d8-4573-957e-b074ea425f68\") " pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:46:54.450761 kubelet[2692]: I0307 01:46:54.448198 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bbd29171-97d8-4573-957e-b074ea425f68-registration-dir\") pod \"csi-node-driver-7n5xl\" (UID: \"bbd29171-97d8-4573-957e-b074ea425f68\") " pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:46:54.450761 kubelet[2692]: I0307 01:46:54.448239 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlwr\" (UniqueName: \"kubernetes.io/projected/bbd29171-97d8-4573-957e-b074ea425f68-kube-api-access-tzlwr\") pod \"csi-node-driver-7n5xl\" (UID: \"bbd29171-97d8-4573-957e-b074ea425f68\") " pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:46:54.450761 kubelet[2692]: I0307 01:46:54.448300 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bbd29171-97d8-4573-957e-b074ea425f68-socket-dir\") pod \"csi-node-driver-7n5xl\" (UID: \"bbd29171-97d8-4573-957e-b074ea425f68\") " pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:46:54.450761 kubelet[2692]: 
I0307 01:46:54.448422 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bbd29171-97d8-4573-957e-b074ea425f68-varrun\") pod \"csi-node-driver-7n5xl\" (UID: \"bbd29171-97d8-4573-957e-b074ea425f68\") " pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:46:54.485260 kubelet[2692]: E0307 01:46:54.481870 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.485260 kubelet[2692]: W0307 01:46:54.481917 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.485260 kubelet[2692]: E0307 01:46:54.481984 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.519148 kubelet[2692]: E0307 01:46:54.519111 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.519148 kubelet[2692]: W0307 01:46:54.519141 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.519383 kubelet[2692]: E0307 01:46:54.519172 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.549413 kubelet[2692]: E0307 01:46:54.549368 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.549413 kubelet[2692]: W0307 01:46:54.549401 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.549649 kubelet[2692]: E0307 01:46:54.549433 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.549888 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.551268 kubelet[2692]: W0307 01:46:54.549910 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.549928 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.550297 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.551268 kubelet[2692]: W0307 01:46:54.550312 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.550328 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.550700 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.551268 kubelet[2692]: W0307 01:46:54.550715 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.550744 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.551268 kubelet[2692]: E0307 01:46:54.551120 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.552124 kubelet[2692]: W0307 01:46:54.551134 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.552124 kubelet[2692]: E0307 01:46:54.551150 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.552124 kubelet[2692]: E0307 01:46:54.551575 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.552124 kubelet[2692]: W0307 01:46:54.551590 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.552124 kubelet[2692]: E0307 01:46:54.551607 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.552477 kubelet[2692]: E0307 01:46:54.552147 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.552477 kubelet[2692]: W0307 01:46:54.552165 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.552477 kubelet[2692]: E0307 01:46:54.552181 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.553319 kubelet[2692]: E0307 01:46:54.552705 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.553319 kubelet[2692]: W0307 01:46:54.552736 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.553319 kubelet[2692]: E0307 01:46:54.552788 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.553319 kubelet[2692]: E0307 01:46:54.553198 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.553319 kubelet[2692]: W0307 01:46:54.553214 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.553319 kubelet[2692]: E0307 01:46:54.553233 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.553756 kubelet[2692]: E0307 01:46:54.553623 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.553756 kubelet[2692]: W0307 01:46:54.553638 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.553756 kubelet[2692]: E0307 01:46:54.553654 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.554492 kubelet[2692]: E0307 01:46:54.554039 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.554492 kubelet[2692]: W0307 01:46:54.554064 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.554492 kubelet[2692]: E0307 01:46:54.554081 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.554492 kubelet[2692]: E0307 01:46:54.554409 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.554492 kubelet[2692]: W0307 01:46:54.554423 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.554492 kubelet[2692]: E0307 01:46:54.554451 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.554918 kubelet[2692]: E0307 01:46:54.554809 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.554918 kubelet[2692]: W0307 01:46:54.554824 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.554918 kubelet[2692]: E0307 01:46:54.554839 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.555551 kubelet[2692]: E0307 01:46:54.555138 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.555551 kubelet[2692]: W0307 01:46:54.555170 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.555551 kubelet[2692]: E0307 01:46:54.555189 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.555551 kubelet[2692]: E0307 01:46:54.555482 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.555551 kubelet[2692]: W0307 01:46:54.555496 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.555551 kubelet[2692]: E0307 01:46:54.555512 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.555989 kubelet[2692]: E0307 01:46:54.555813 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.555989 kubelet[2692]: W0307 01:46:54.555827 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.555989 kubelet[2692]: E0307 01:46:54.555843 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.556142 kubelet[2692]: E0307 01:46:54.556123 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.556142 kubelet[2692]: W0307 01:46:54.556136 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.556682 kubelet[2692]: E0307 01:46:54.556151 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.556682 kubelet[2692]: E0307 01:46:54.556450 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.556682 kubelet[2692]: W0307 01:46:54.556464 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.556682 kubelet[2692]: E0307 01:46:54.556481 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.557335 kubelet[2692]: E0307 01:46:54.556836 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.557335 kubelet[2692]: W0307 01:46:54.556849 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.557335 kubelet[2692]: E0307 01:46:54.556864 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.557335 kubelet[2692]: E0307 01:46:54.557143 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.557335 kubelet[2692]: W0307 01:46:54.557157 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.557335 kubelet[2692]: E0307 01:46:54.557172 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.558246 kubelet[2692]: E0307 01:46:54.557492 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.558246 kubelet[2692]: W0307 01:46:54.557506 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.558246 kubelet[2692]: E0307 01:46:54.557521 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.558246 kubelet[2692]: E0307 01:46:54.558076 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.558246 kubelet[2692]: W0307 01:46:54.558091 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.558246 kubelet[2692]: E0307 01:46:54.558107 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.559412 kubelet[2692]: E0307 01:46:54.558467 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.559412 kubelet[2692]: W0307 01:46:54.558481 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.559412 kubelet[2692]: E0307 01:46:54.558498 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.559412 kubelet[2692]: E0307 01:46:54.558892 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.559412 kubelet[2692]: W0307 01:46:54.558907 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.559412 kubelet[2692]: E0307 01:46:54.558923 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:54.560569 kubelet[2692]: E0307 01:46:54.560353 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.560569 kubelet[2692]: W0307 01:46:54.560381 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.560569 kubelet[2692]: E0307 01:46:54.560400 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.582151 kubelet[2692]: E0307 01:46:54.582031 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:54.582151 kubelet[2692]: W0307 01:46:54.582063 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:54.582151 kubelet[2692]: E0307 01:46:54.582093 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:54.602376 containerd[1510]: time="2026-03-07T01:46:54.602131199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:46:54.603127 containerd[1510]: time="2026-03-07T01:46:54.602439718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:46:54.603478 containerd[1510]: time="2026-03-07T01:46:54.602968218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:46:54.603478 containerd[1510]: time="2026-03-07T01:46:54.603255999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:46:54.623027 containerd[1510]: time="2026-03-07T01:46:54.622950872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2g5n5,Uid:df98ecd1-6a67-4f36-8877-1fd0255b1411,Namespace:calico-system,Attempt:0,}" Mar 7 01:46:54.646335 systemd[1]: Started cri-containerd-8aae59e2e163c46135d98393b8aa8ed2244d22fc9d07f3a223d47db414c43887.scope - libcontainer container 8aae59e2e163c46135d98393b8aa8ed2244d22fc9d07f3a223d47db414c43887. Mar 7 01:46:54.697134 containerd[1510]: time="2026-03-07T01:46:54.687989573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:46:54.697134 containerd[1510]: time="2026-03-07T01:46:54.696132300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:46:54.697134 containerd[1510]: time="2026-03-07T01:46:54.696210252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:46:54.698826 containerd[1510]: time="2026-03-07T01:46:54.698596861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:46:54.757926 systemd[1]: Started cri-containerd-93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838.scope - libcontainer container 93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838. 
Mar 7 01:46:54.799021 containerd[1510]: time="2026-03-07T01:46:54.798269667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4c6c4fdb-ghkpq,Uid:4a3dbf6a-0eed-4dd7-a5e2-861febd23bd9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8aae59e2e163c46135d98393b8aa8ed2244d22fc9d07f3a223d47db414c43887\"" Mar 7 01:46:54.806537 containerd[1510]: time="2026-03-07T01:46:54.806114798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:46:54.832408 containerd[1510]: time="2026-03-07T01:46:54.832359259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2g5n5,Uid:df98ecd1-6a67-4f36-8877-1fd0255b1411,Namespace:calico-system,Attempt:0,} returns sandbox id \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\"" Mar 7 01:46:56.026381 kubelet[2692]: E0307 01:46:56.025532 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68" Mar 7 01:46:56.346648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844576471.mount: Deactivated successfully. 
Mar 7 01:46:58.026177 kubelet[2692]: E0307 01:46:58.026090 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68" Mar 7 01:46:58.232204 containerd[1510]: time="2026-03-07T01:46:58.232107975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:58.234024 containerd[1510]: time="2026-03-07T01:46:58.233694360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:46:58.236691 containerd[1510]: time="2026-03-07T01:46:58.234989236Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:58.239055 containerd[1510]: time="2026-03-07T01:46:58.239013326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:58.240384 containerd[1510]: time="2026-03-07T01:46:58.240339982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.433457566s" Mar 7 01:46:58.240534 containerd[1510]: time="2026-03-07T01:46:58.240503922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:46:58.242734 containerd[1510]: time="2026-03-07T01:46:58.242704429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:46:58.272123 containerd[1510]: time="2026-03-07T01:46:58.272063469Z" level=info msg="CreateContainer within sandbox \"8aae59e2e163c46135d98393b8aa8ed2244d22fc9d07f3a223d47db414c43887\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:46:58.328068 containerd[1510]: time="2026-03-07T01:46:58.327403679Z" level=info msg="CreateContainer within sandbox \"8aae59e2e163c46135d98393b8aa8ed2244d22fc9d07f3a223d47db414c43887\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f387e447785917d597a8ce53e704261a7f7d148149ba7af940e1951b28f7db28\"" Mar 7 01:46:58.328708 containerd[1510]: time="2026-03-07T01:46:58.328507952Z" level=info msg="StartContainer for \"f387e447785917d597a8ce53e704261a7f7d148149ba7af940e1951b28f7db28\"" Mar 7 01:46:58.441893 systemd[1]: Started cri-containerd-f387e447785917d597a8ce53e704261a7f7d148149ba7af940e1951b28f7db28.scope - libcontainer container f387e447785917d597a8ce53e704261a7f7d148149ba7af940e1951b28f7db28. 
Mar 7 01:46:58.544317 containerd[1510]: time="2026-03-07T01:46:58.543985971Z" level=info msg="StartContainer for \"f387e447785917d597a8ce53e704261a7f7d148149ba7af940e1951b28f7db28\" returns successfully" Mar 7 01:46:59.211492 kubelet[2692]: I0307 01:46:59.211024 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6c4c6c4fdb-ghkpq" podStartSLOduration=1.773324527 podStartE2EDuration="5.210987799s" podCreationTimestamp="2026-03-07 01:46:54 +0000 UTC" firstStartedPulling="2026-03-07 01:46:54.804266537 +0000 UTC m=+27.117703858" lastFinishedPulling="2026-03-07 01:46:58.241929814 +0000 UTC m=+30.555367130" observedRunningTime="2026-03-07 01:46:59.210242354 +0000 UTC m=+31.523679682" watchObservedRunningTime="2026-03-07 01:46:59.210987799 +0000 UTC m=+31.524425127" Mar 7 01:46:59.244637 kubelet[2692]: E0307 01:46:59.244574 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.244637 kubelet[2692]: W0307 01:46:59.244635 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.244936 kubelet[2692]: E0307 01:46:59.244726 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.245638 kubelet[2692]: E0307 01:46:59.245614 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.245638 kubelet[2692]: W0307 01:46:59.245636 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.245805 kubelet[2692]: E0307 01:46:59.245654 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.246731 kubelet[2692]: E0307 01:46:59.246673 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.246731 kubelet[2692]: W0307 01:46:59.246694 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.246731 kubelet[2692]: E0307 01:46:59.246716 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.248858 kubelet[2692]: E0307 01:46:59.248827 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.248858 kubelet[2692]: W0307 01:46:59.248851 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.248980 kubelet[2692]: E0307 01:46:59.248871 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.249334 kubelet[2692]: E0307 01:46:59.249308 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.249334 kubelet[2692]: W0307 01:46:59.249330 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.249485 kubelet[2692]: E0307 01:46:59.249349 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.249783 kubelet[2692]: E0307 01:46:59.249757 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.249783 kubelet[2692]: W0307 01:46:59.249779 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.249925 kubelet[2692]: E0307 01:46:59.249797 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.250170 kubelet[2692]: E0307 01:46:59.250090 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.250170 kubelet[2692]: W0307 01:46:59.250114 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.250170 kubelet[2692]: E0307 01:46:59.250132 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.250533 kubelet[2692]: E0307 01:46:59.250461 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.250533 kubelet[2692]: W0307 01:46:59.250482 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.250533 kubelet[2692]: E0307 01:46:59.250499 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.254082 kubelet[2692]: E0307 01:46:59.254054 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.254082 kubelet[2692]: W0307 01:46:59.254081 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.254243 kubelet[2692]: E0307 01:46:59.254102 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.256623 kubelet[2692]: E0307 01:46:59.256498 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.256623 kubelet[2692]: W0307 01:46:59.256522 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.256623 kubelet[2692]: E0307 01:46:59.256541 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.257175 kubelet[2692]: E0307 01:46:59.256915 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.257175 kubelet[2692]: W0307 01:46:59.256931 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.257175 kubelet[2692]: E0307 01:46:59.256947 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.257347 kubelet[2692]: E0307 01:46:59.257237 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.257347 kubelet[2692]: W0307 01:46:59.257251 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.257347 kubelet[2692]: E0307 01:46:59.257267 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.258271 kubelet[2692]: E0307 01:46:59.258025 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.258271 kubelet[2692]: W0307 01:46:59.258047 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.258271 kubelet[2692]: E0307 01:46:59.258065 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.258460 kubelet[2692]: E0307 01:46:59.258382 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.258460 kubelet[2692]: W0307 01:46:59.258396 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.258460 kubelet[2692]: E0307 01:46:59.258411 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.258852 kubelet[2692]: E0307 01:46:59.258724 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.258852 kubelet[2692]: W0307 01:46:59.258738 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.258852 kubelet[2692]: E0307 01:46:59.258753 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.298358 kubelet[2692]: E0307 01:46:59.298320 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.298358 kubelet[2692]: W0307 01:46:59.298350 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.298758 kubelet[2692]: E0307 01:46:59.298379 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.298874 kubelet[2692]: E0307 01:46:59.298785 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.298874 kubelet[2692]: W0307 01:46:59.298801 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.298874 kubelet[2692]: E0307 01:46:59.298818 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.299140 kubelet[2692]: E0307 01:46:59.299121 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.299413 kubelet[2692]: W0307 01:46:59.299140 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.299413 kubelet[2692]: E0307 01:46:59.299158 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.299707 kubelet[2692]: E0307 01:46:59.299656 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.299838 kubelet[2692]: W0307 01:46:59.299813 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.300047 kubelet[2692]: E0307 01:46:59.299935 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.300706 kubelet[2692]: E0307 01:46:59.300477 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.300706 kubelet[2692]: W0307 01:46:59.300497 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.300706 kubelet[2692]: E0307 01:46:59.300528 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.301202 kubelet[2692]: E0307 01:46:59.301002 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.301202 kubelet[2692]: W0307 01:46:59.301024 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.301202 kubelet[2692]: E0307 01:46:59.301042 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.301447 kubelet[2692]: E0307 01:46:59.301426 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.301538 kubelet[2692]: W0307 01:46:59.301516 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.301725 kubelet[2692]: E0307 01:46:59.301696 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.302317 kubelet[2692]: E0307 01:46:59.302143 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.302317 kubelet[2692]: W0307 01:46:59.302162 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.302317 kubelet[2692]: E0307 01:46:59.302180 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.302558 kubelet[2692]: E0307 01:46:59.302538 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.302698 kubelet[2692]: W0307 01:46:59.302645 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.302798 kubelet[2692]: E0307 01:46:59.302776 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.303425 kubelet[2692]: E0307 01:46:59.303253 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.303425 kubelet[2692]: W0307 01:46:59.303289 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.303425 kubelet[2692]: E0307 01:46:59.303335 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.304326 kubelet[2692]: E0307 01:46:59.304153 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.304326 kubelet[2692]: W0307 01:46:59.304172 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.304326 kubelet[2692]: E0307 01:46:59.304190 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.306099 kubelet[2692]: E0307 01:46:59.305919 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.306099 kubelet[2692]: W0307 01:46:59.305939 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.306099 kubelet[2692]: E0307 01:46:59.305957 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.306395 kubelet[2692]: E0307 01:46:59.306375 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.306488 kubelet[2692]: W0307 01:46:59.306467 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.306704 kubelet[2692]: E0307 01:46:59.306569 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.307482 kubelet[2692]: E0307 01:46:59.307224 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.307482 kubelet[2692]: W0307 01:46:59.307262 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.307482 kubelet[2692]: E0307 01:46:59.307284 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.308231 kubelet[2692]: E0307 01:46:59.308211 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.309543 kubelet[2692]: W0307 01:46:59.308357 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.309543 kubelet[2692]: E0307 01:46:59.308380 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.309543 kubelet[2692]: E0307 01:46:59.308841 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.309543 kubelet[2692]: W0307 01:46:59.308859 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.309543 kubelet[2692]: E0307 01:46:59.308878 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.310437 kubelet[2692]: E0307 01:46:59.310403 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.310537 kubelet[2692]: W0307 01:46:59.310514 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.310709 kubelet[2692]: E0307 01:46:59.310646 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:46:59.311380 kubelet[2692]: E0307 01:46:59.311335 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:46:59.311517 kubelet[2692]: W0307 01:46:59.311494 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:46:59.311803 kubelet[2692]: E0307 01:46:59.311724 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:46:59.935091 containerd[1510]: time="2026-03-07T01:46:59.934997203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:59.936769 containerd[1510]: time="2026-03-07T01:46:59.936458999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:46:59.939680 containerd[1510]: time="2026-03-07T01:46:59.938262230Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:59.943885 containerd[1510]: time="2026-03-07T01:46:59.943833101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:46:59.952270 containerd[1510]: time="2026-03-07T01:46:59.952202757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.70934093s" Mar 7 01:46:59.952270 containerd[1510]: time="2026-03-07T01:46:59.952270199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:46:59.959787 containerd[1510]: time="2026-03-07T01:46:59.959651727Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:46:59.983902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977483488.mount: Deactivated successfully. Mar 7 01:46:59.989703 containerd[1510]: time="2026-03-07T01:46:59.989628723Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5\"" Mar 7 01:46:59.993718 containerd[1510]: time="2026-03-07T01:46:59.991503410Z" level=info msg="StartContainer for \"fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5\"" Mar 7 01:47:00.025260 kubelet[2692]: E0307 01:47:00.025202 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68" Mar 7 01:47:00.058647 systemd[1]: Started cri-containerd-fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5.scope - libcontainer container fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5. Mar 7 01:47:00.133435 containerd[1510]: time="2026-03-07T01:47:00.133384396Z" level=info msg="StartContainer for \"fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5\" returns successfully" Mar 7 01:47:00.155792 systemd[1]: cri-containerd-fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5.scope: Deactivated successfully. Mar 7 01:47:00.226495 kubelet[2692]: I0307 01:47:00.225645 2692 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:47:00.289941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5-rootfs.mount: Deactivated successfully. 
Mar 7 01:47:00.471350 containerd[1510]: time="2026-03-07T01:47:00.465094304Z" level=info msg="shim disconnected" id=fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5 namespace=k8s.io
Mar 7 01:47:00.472232 containerd[1510]: time="2026-03-07T01:47:00.471792039Z" level=warning msg="cleaning up after shim disconnected" id=fb046f535fb395008d79ee853006ea5e21ea177e3693547fbccef53a29d255a5 namespace=k8s.io
Mar 7 01:47:00.472232 containerd[1510]: time="2026-03-07T01:47:00.471841448Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:47:01.210222 containerd[1510]: time="2026-03-07T01:47:01.206635785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:47:02.028110 kubelet[2692]: E0307 01:47:02.025996 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:04.026498 kubelet[2692]: E0307 01:47:04.024655 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:06.037310 kubelet[2692]: E0307 01:47:06.035577 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:08.030705 kubelet[2692]: E0307 01:47:08.030600 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:08.533013 kubelet[2692]: I0307 01:47:08.532940 2692 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:47:10.026005 kubelet[2692]: E0307 01:47:10.024768 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:11.894172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458656257.mount: Deactivated successfully.
Mar 7 01:47:11.979548 containerd[1510]: time="2026-03-07T01:47:11.973210583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:47:11.979548 containerd[1510]: time="2026-03-07T01:47:11.969965881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:11.982709 containerd[1510]: time="2026-03-07T01:47:11.982635980Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:11.985965 containerd[1510]: time="2026-03-07T01:47:11.985800554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:11.989207 containerd[1510]: time="2026-03-07T01:47:11.988697461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.781910523s"
Mar 7 01:47:11.989207 containerd[1510]: time="2026-03-07T01:47:11.988774718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:47:11.997891 containerd[1510]: time="2026-03-07T01:47:11.997841923Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:47:12.026770 kubelet[2692]: E0307 01:47:12.025709 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:12.028518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044466662.mount: Deactivated successfully.
Mar 7 01:47:12.032488 containerd[1510]: time="2026-03-07T01:47:12.032277677Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7\""
Mar 7 01:47:12.032927 containerd[1510]: time="2026-03-07T01:47:12.032894932Z" level=info msg="StartContainer for \"259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7\""
Mar 7 01:47:12.115142 systemd[1]: Started cri-containerd-259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7.scope - libcontainer container 259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7.
Mar 7 01:47:12.231083 containerd[1510]: time="2026-03-07T01:47:12.229347396Z" level=info msg="StartContainer for \"259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7\" returns successfully"
Mar 7 01:47:12.512909 systemd[1]: cri-containerd-259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7.scope: Deactivated successfully.
Mar 7 01:47:12.559498 containerd[1510]: time="2026-03-07T01:47:12.559319555Z" level=info msg="shim disconnected" id=259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7 namespace=k8s.io
Mar 7 01:47:12.559498 containerd[1510]: time="2026-03-07T01:47:12.559490707Z" level=warning msg="cleaning up after shim disconnected" id=259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7 namespace=k8s.io
Mar 7 01:47:12.560210 containerd[1510]: time="2026-03-07T01:47:12.559516544Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:47:12.892368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-259aebf509352c095a4d954f458f6535230fd0209febdc699d3234d9361c21b7-rootfs.mount: Deactivated successfully.
Mar 7 01:47:13.258810 containerd[1510]: time="2026-03-07T01:47:13.256243318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:47:14.026695 kubelet[2692]: E0307 01:47:14.026027 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:16.025962 kubelet[2692]: E0307 01:47:16.024914 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:18.025702 kubelet[2692]: E0307 01:47:18.025552 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:18.380716 containerd[1510]: time="2026-03-07T01:47:18.380592076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:18.382324 containerd[1510]: time="2026-03-07T01:47:18.382228397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:47:18.385738 containerd[1510]: time="2026-03-07T01:47:18.383327283Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:18.389010 containerd[1510]: time="2026-03-07T01:47:18.388954514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:18.390020 containerd[1510]: time="2026-03-07T01:47:18.389968797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.133498751s"
Mar 7 01:47:18.390130 containerd[1510]: time="2026-03-07T01:47:18.390054379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:47:18.436650 containerd[1510]: time="2026-03-07T01:47:18.436566430Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:47:18.467097 containerd[1510]: time="2026-03-07T01:47:18.467045334Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c\""
Mar 7 01:47:18.469139 containerd[1510]: time="2026-03-07T01:47:18.468946810Z" level=info msg="StartContainer for \"cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c\""
Mar 7 01:47:18.546102 systemd[1]: Started cri-containerd-cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c.scope - libcontainer container cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c.
Mar 7 01:47:18.615006 containerd[1510]: time="2026-03-07T01:47:18.614922564Z" level=info msg="StartContainer for \"cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c\" returns successfully"
Mar 7 01:47:20.026488 kubelet[2692]: E0307 01:47:20.025062 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68"
Mar 7 01:47:20.157741 systemd[1]: cri-containerd-cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c.scope: Deactivated successfully.
Mar 7 01:47:20.159250 systemd[1]: cri-containerd-cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c.scope: Consumed 1.063s CPU time.
Mar 7 01:47:20.196694 kubelet[2692]: I0307 01:47:20.187948 2692 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 7 01:47:20.216863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c-rootfs.mount: Deactivated successfully.
Mar 7 01:47:20.275288 containerd[1510]: time="2026-03-07T01:47:20.274939062Z" level=info msg="shim disconnected" id=cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c namespace=k8s.io
Mar 7 01:47:20.275288 containerd[1510]: time="2026-03-07T01:47:20.275287274Z" level=warning msg="cleaning up after shim disconnected" id=cbe4f9f84ca5830f4d390eec7e0d7ebdcbba2e2f36c633ebbe584f95dd11387c namespace=k8s.io
Mar 7 01:47:20.275288 containerd[1510]: time="2026-03-07T01:47:20.275317573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:47:20.319707 containerd[1510]: time="2026-03-07T01:47:20.319137269Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:47:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:47:20.488407 systemd[1]: Created slice kubepods-burstable-podb4861714_eb6d_4d00_95f5_6b972ed0a0fc.slice - libcontainer container kubepods-burstable-podb4861714_eb6d_4d00_95f5_6b972ed0a0fc.slice.
Mar 7 01:47:20.494303 kubelet[2692]: I0307 01:47:20.494184 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprpd\" (UniqueName: \"kubernetes.io/projected/1520bae0-6d1b-46cf-a4de-5ba1d53dfc61-kube-api-access-wprpd\") pod \"coredns-7d764666f9-jtc9r\" (UID: \"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61\") " pod="kube-system/coredns-7d764666f9-jtc9r"
Mar 7 01:47:20.494303 kubelet[2692]: I0307 01:47:20.494254 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwrpp\" (UniqueName: \"kubernetes.io/projected/b4861714-eb6d-4d00-95f5-6b972ed0a0fc-kube-api-access-rwrpp\") pod \"coredns-7d764666f9-lckw9\" (UID: \"b4861714-eb6d-4d00-95f5-6b972ed0a0fc\") " pod="kube-system/coredns-7d764666f9-lckw9"
Mar 7 01:47:20.494303 kubelet[2692]: I0307 01:47:20.494293 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4861714-eb6d-4d00-95f5-6b972ed0a0fc-config-volume\") pod \"coredns-7d764666f9-lckw9\" (UID: \"b4861714-eb6d-4d00-95f5-6b972ed0a0fc\") " pod="kube-system/coredns-7d764666f9-lckw9"
Mar 7 01:47:20.495088 kubelet[2692]: I0307 01:47:20.494327 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1520bae0-6d1b-46cf-a4de-5ba1d53dfc61-config-volume\") pod \"coredns-7d764666f9-jtc9r\" (UID: \"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61\") " pod="kube-system/coredns-7d764666f9-jtc9r"
Mar 7 01:47:20.513367 systemd[1]: Created slice kubepods-besteffort-pod1c47937c_9a38_4db0_80fd_8afe55163ff8.slice - libcontainer container kubepods-besteffort-pod1c47937c_9a38_4db0_80fd_8afe55163ff8.slice.
Mar 7 01:47:20.528859 systemd[1]: Created slice kubepods-burstable-pod1520bae0_6d1b_46cf_a4de_5ba1d53dfc61.slice - libcontainer container kubepods-burstable-pod1520bae0_6d1b_46cf_a4de_5ba1d53dfc61.slice.
Mar 7 01:47:20.544859 systemd[1]: Created slice kubepods-besteffort-pod501acf6a_f520_4e0a_a473_c29ca7073182.slice - libcontainer container kubepods-besteffort-pod501acf6a_f520_4e0a_a473_c29ca7073182.slice.
Mar 7 01:47:20.558290 systemd[1]: Created slice kubepods-besteffort-pod1c858c32_39fe_4390_b516_9157137885fe.slice - libcontainer container kubepods-besteffort-pod1c858c32_39fe_4390_b516_9157137885fe.slice.
Mar 7 01:47:20.574574 systemd[1]: Created slice kubepods-besteffort-podbd088e00_80cb_43bf_b69c_b9cab2eccdd4.slice - libcontainer container kubepods-besteffort-podbd088e00_80cb_43bf_b69c_b9cab2eccdd4.slice.
Mar 7 01:47:20.590276 systemd[1]: Created slice kubepods-besteffort-pod4ac33ec3_af0f_40f4_812b_1f441606b85d.slice - libcontainer container kubepods-besteffort-pod4ac33ec3_af0f_40f4_812b_1f441606b85d.slice.
Mar 7 01:47:20.596483 kubelet[2692]: I0307 01:47:20.594612 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/501acf6a-f520-4e0a-a473-c29ca7073182-tigera-ca-bundle\") pod \"calico-kube-controllers-66cf4c95d5-xdglp\" (UID: \"501acf6a-f520-4e0a-a473-c29ca7073182\") " pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp"
Mar 7 01:47:20.596483 kubelet[2692]: I0307 01:47:20.594684 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9ws\" (UniqueName: \"kubernetes.io/projected/501acf6a-f520-4e0a-a473-c29ca7073182-kube-api-access-9h9ws\") pod \"calico-kube-controllers-66cf4c95d5-xdglp\" (UID: \"501acf6a-f520-4e0a-a473-c29ca7073182\") " pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp"
Mar 7 01:47:20.596483 kubelet[2692]: I0307 01:47:20.594733 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759hg\" (UniqueName: \"kubernetes.io/projected/1c858c32-39fe-4390-b516-9157137885fe-kube-api-access-759hg\") pod \"calico-apiserver-db79945d8-2mdgg\" (UID: \"1c858c32-39fe-4390-b516-9157137885fe\") " pod="calico-system/calico-apiserver-db79945d8-2mdgg"
Mar 7 01:47:20.596483 kubelet[2692]: I0307 01:47:20.594805 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c858c32-39fe-4390-b516-9157137885fe-calico-apiserver-certs\") pod \"calico-apiserver-db79945d8-2mdgg\" (UID: \"1c858c32-39fe-4390-b516-9157137885fe\") " pod="calico-system/calico-apiserver-db79945d8-2mdgg"
Mar 7 01:47:20.596483 kubelet[2692]: I0307 01:47:20.594873 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-nginx-config\") pod \"whisker-697ddb4977-hhg2p\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " pod="calico-system/whisker-697ddb4977-hhg2p"
Mar 7 01:47:20.597078 kubelet[2692]: I0307 01:47:20.594921 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ac33ec3-af0f-40f4-812b-1f441606b85d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-ngb4s\" (UID: \"4ac33ec3-af0f-40f4-812b-1f441606b85d\") " pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:20.597078 kubelet[2692]: I0307 01:47:20.594958 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4ac33ec3-af0f-40f4-812b-1f441606b85d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-ngb4s\" (UID: \"4ac33ec3-af0f-40f4-812b-1f441606b85d\") " pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:20.597078 kubelet[2692]: I0307 01:47:20.595009 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-backend-key-pair\") pod \"whisker-697ddb4977-hhg2p\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " pod="calico-system/whisker-697ddb4977-hhg2p"
Mar 7 01:47:20.597078 kubelet[2692]: I0307 01:47:20.595054 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfpc\" (UniqueName: \"kubernetes.io/projected/4ac33ec3-af0f-40f4-812b-1f441606b85d-kube-api-access-rlfpc\") pod \"goldmane-9f7667bb8-ngb4s\" (UID: \"4ac33ec3-af0f-40f4-812b-1f441606b85d\") " pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:20.597078 kubelet[2692]: I0307 01:47:20.595119 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c47937c-9a38-4db0-80fd-8afe55163ff8-calico-apiserver-certs\") pod \"calico-apiserver-db79945d8-9m2h4\" (UID: \"1c47937c-9a38-4db0-80fd-8afe55163ff8\") " pod="calico-system/calico-apiserver-db79945d8-9m2h4"
Mar 7 01:47:20.597409 kubelet[2692]: I0307 01:47:20.595158 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfgj\" (UniqueName: \"kubernetes.io/projected/1c47937c-9a38-4db0-80fd-8afe55163ff8-kube-api-access-thfgj\") pod \"calico-apiserver-db79945d8-9m2h4\" (UID: \"1c47937c-9a38-4db0-80fd-8afe55163ff8\") " pod="calico-system/calico-apiserver-db79945d8-9m2h4"
Mar 7 01:47:20.597409 kubelet[2692]: I0307 01:47:20.595197 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x48dr\" (UniqueName: \"kubernetes.io/projected/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-kube-api-access-x48dr\") pod \"whisker-697ddb4977-hhg2p\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " pod="calico-system/whisker-697ddb4977-hhg2p"
Mar 7 01:47:20.597409 kubelet[2692]: I0307 01:47:20.595237 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac33ec3-af0f-40f4-812b-1f441606b85d-config\") pod \"goldmane-9f7667bb8-ngb4s\" (UID: \"4ac33ec3-af0f-40f4-812b-1f441606b85d\") " pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:20.597409 kubelet[2692]: I0307 01:47:20.595302 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-ca-bundle\") pod \"whisker-697ddb4977-hhg2p\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " pod="calico-system/whisker-697ddb4977-hhg2p"
Mar 7 01:47:20.815398 containerd[1510]: time="2026-03-07T01:47:20.815276581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lckw9,Uid:b4861714-eb6d-4d00-95f5-6b972ed0a0fc,Namespace:kube-system,Attempt:0,}"
Mar 7 01:47:20.831627 containerd[1510]: time="2026-03-07T01:47:20.830416239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-9m2h4,Uid:1c47937c-9a38-4db0-80fd-8afe55163ff8,Namespace:calico-system,Attempt:0,}"
Mar 7 01:47:20.842324 containerd[1510]: time="2026-03-07T01:47:20.842082397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtc9r,Uid:1520bae0-6d1b-46cf-a4de-5ba1d53dfc61,Namespace:kube-system,Attempt:0,}"
Mar 7 01:47:20.864800 containerd[1510]: time="2026-03-07T01:47:20.864388687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cf4c95d5-xdglp,Uid:501acf6a-f520-4e0a-a473-c29ca7073182,Namespace:calico-system,Attempt:0,}"
Mar 7 01:47:20.867421 containerd[1510]: time="2026-03-07T01:47:20.867363095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-2mdgg,Uid:1c858c32-39fe-4390-b516-9157137885fe,Namespace:calico-system,Attempt:0,}"
Mar 7 01:47:20.895986 containerd[1510]: time="2026-03-07T01:47:20.895573240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697ddb4977-hhg2p,Uid:bd088e00-80cb-43bf-b69c-b9cab2eccdd4,Namespace:calico-system,Attempt:0,}"
Mar 7 01:47:20.906318 containerd[1510]: time="2026-03-07T01:47:20.906253386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ngb4s,Uid:4ac33ec3-af0f-40f4-812b-1f441606b85d,Namespace:calico-system,Attempt:0,}"
Mar 7 01:47:21.457204 containerd[1510]: time="2026-03-07T01:47:21.457140279Z" level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:47:21.498994 containerd[1510]: time="2026-03-07T01:47:21.498914976Z" level=error msg="Failed to destroy network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.504122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007-shm.mount: Deactivated successfully.
Mar 7 01:47:21.536972 containerd[1510]: time="2026-03-07T01:47:21.499470144Z" level=error msg="Failed to destroy network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.540527 containerd[1510]: time="2026-03-07T01:47:21.539744644Z" level=error msg="Failed to destroy network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.540527 containerd[1510]: time="2026-03-07T01:47:21.540238219Z" level=error msg="encountered an error cleaning up failed sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.540527 containerd[1510]: time="2026-03-07T01:47:21.540338271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ngb4s,Uid:4ac33ec3-af0f-40f4-812b-1f441606b85d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.542591 containerd[1510]: time="2026-03-07T01:47:21.542068131Z" level=error msg="encountered an error cleaning up failed sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.542591 containerd[1510]: time="2026-03-07T01:47:21.542155090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-9m2h4,Uid:1c47937c-9a38-4db0-80fd-8afe55163ff8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.542591 containerd[1510]: time="2026-03-07T01:47:21.542283420Z" level=error msg="encountered an error cleaning up failed sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.542591 containerd[1510]: time="2026-03-07T01:47:21.542352510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697ddb4977-hhg2p,Uid:bd088e00-80cb-43bf-b69c-b9cab2eccdd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.543303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857-shm.mount: Deactivated successfully.
Mar 7 01:47:21.554906 containerd[1510]: time="2026-03-07T01:47:21.554449882Z" level=error msg="Failed to destroy network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.559181 containerd[1510]: time="2026-03-07T01:47:21.557799940Z" level=error msg="encountered an error cleaning up failed sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.559181 containerd[1510]: time="2026-03-07T01:47:21.557895593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lckw9,Uid:b4861714-eb6d-4d00-95f5-6b972ed0a0fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.559181 containerd[1510]: time="2026-03-07T01:47:21.558393498Z" level=error msg="Failed to destroy network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.561077 containerd[1510]: time="2026-03-07T01:47:21.559202975Z" level=error msg="Failed to destroy network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.561077 containerd[1510]: time="2026-03-07T01:47:21.559615019Z" level=error msg="encountered an error cleaning up failed sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.561077 containerd[1510]: time="2026-03-07T01:47:21.559703514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cf4c95d5-xdglp,Uid:501acf6a-f520-4e0a-a473-c29ca7073182,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.561077 containerd[1510]: time="2026-03-07T01:47:21.560285710Z" level=error msg="encountered an error cleaning up failed sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.561077 containerd[1510]: time="2026-03-07T01:47:21.560353873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtc9r,Uid:1520bae0-6d1b-46cf-a4de-5ba1d53dfc61,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.567519 containerd[1510]: time="2026-03-07T01:47:21.567463854Z" level=error msg="Failed to destroy network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.568318 containerd[1510]: time="2026-03-07T01:47:21.568162037Z" level=error msg="encountered an error cleaning up failed sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.568497 containerd[1510]: time="2026-03-07T01:47:21.568457980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-2mdgg,Uid:1c858c32-39fe-4390-b516-9157137885fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.573690 kubelet[2692]: E0307 01:47:21.571492 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.576527 kubelet[2692]: E0307 01:47:21.576448 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jtc9r"
Mar 7 01:47:21.576842 kubelet[2692]: E0307 01:47:21.576738 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jtc9r"
Mar 7 01:47:21.577239 kubelet[2692]: E0307 01:47:21.577164 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-jtc9r_kube-system(1520bae0-6d1b-46cf-a4de-5ba1d53dfc61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-jtc9r_kube-system(1520bae0-6d1b-46cf-a4de-5ba1d53dfc61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-jtc9r" podUID="1520bae0-6d1b-46cf-a4de-5ba1d53dfc61"
Mar 7 01:47:21.591107 kubelet[2692]: E0307 01:47:21.570234 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.592167 kubelet[2692]: E0307 01:47:21.591758 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:21.592167 kubelet[2692]: E0307 01:47:21.591807 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ngb4s"
Mar 7 01:47:21.592167 kubelet[2692]: E0307 01:47:21.591911 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-ngb4s_calico-system(4ac33ec3-af0f-40f4-812b-1f441606b85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-ngb4s_calico-system(4ac33ec3-af0f-40f4-812b-1f441606b85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-ngb4s" podUID="4ac33ec3-af0f-40f4-812b-1f441606b85d"
Mar 7 01:47:21.595907 kubelet[2692]: E0307 01:47:21.574800 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:47:21.598043 kubelet[2692]: E0307 01:47:21.598005 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp"
Mar 7 01:47:21.598228 kubelet[2692]: E0307 01:47:21.598195 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp"
Mar 7 01:47:21.598586 kubelet[2692]:
E0307 01:47:21.598408 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66cf4c95d5-xdglp_calico-system(501acf6a-f520-4e0a-a473-c29ca7073182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66cf4c95d5-xdglp_calico-system(501acf6a-f520-4e0a-a473-c29ca7073182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp" podUID="501acf6a-f520-4e0a-a473-c29ca7073182" Mar 7 01:47:21.599013 kubelet[2692]: E0307 01:47:21.574722 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:21.599484 kubelet[2692]: E0307 01:47:21.599322 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-697ddb4977-hhg2p" Mar 7 01:47:21.600068 kubelet[2692]: E0307 01:47:21.574759 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:21.600068 kubelet[2692]: E0307 01:47:21.599794 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lckw9" Mar 7 01:47:21.601390 kubelet[2692]: E0307 01:47:21.600495 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lckw9" Mar 7 01:47:21.601390 kubelet[2692]: E0307 01:47:21.600706 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-lckw9_kube-system(b4861714-eb6d-4d00-95f5-6b972ed0a0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-lckw9_kube-system(b4861714-eb6d-4d00-95f5-6b972ed0a0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-lckw9" 
podUID="b4861714-eb6d-4d00-95f5-6b972ed0a0fc" Mar 7 01:47:21.601390 kubelet[2692]: E0307 01:47:21.574668 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:21.602247 kubelet[2692]: E0307 01:47:21.601708 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-db79945d8-9m2h4" Mar 7 01:47:21.602247 kubelet[2692]: E0307 01:47:21.601796 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-db79945d8-9m2h4" Mar 7 01:47:21.602247 kubelet[2692]: E0307 01:47:21.602005 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db79945d8-9m2h4_calico-system(1c47937c-9a38-4db0-80fd-8afe55163ff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db79945d8-9m2h4_calico-system(1c47937c-9a38-4db0-80fd-8afe55163ff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-db79945d8-9m2h4" podUID="1c47937c-9a38-4db0-80fd-8afe55163ff8" Mar 7 01:47:21.603689 kubelet[2692]: E0307 01:47:21.602025 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-697ddb4977-hhg2p" Mar 7 01:47:21.603689 kubelet[2692]: E0307 01:47:21.602176 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-697ddb4977-hhg2p_calico-system(bd088e00-80cb-43bf-b69c-b9cab2eccdd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-697ddb4977-hhg2p_calico-system(bd088e00-80cb-43bf-b69c-b9cab2eccdd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-697ddb4977-hhg2p" podUID="bd088e00-80cb-43bf-b69c-b9cab2eccdd4" Mar 7 01:47:21.603689 kubelet[2692]: E0307 01:47:21.574964 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:21.604376 kubelet[2692]: E0307 01:47:21.602493 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-db79945d8-2mdgg" Mar 7 01:47:21.604376 kubelet[2692]: E0307 01:47:21.602552 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-db79945d8-2mdgg" Mar 7 01:47:21.604376 kubelet[2692]: E0307 01:47:21.603286 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db79945d8-2mdgg_calico-system(1c858c32-39fe-4390-b516-9157137885fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db79945d8-2mdgg_calico-system(1c858c32-39fe-4390-b516-9157137885fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-db79945d8-2mdgg" podUID="1c858c32-39fe-4390-b516-9157137885fe" Mar 7 01:47:21.637950 containerd[1510]: time="2026-03-07T01:47:21.637881417Z" 
level=info msg="CreateContainer within sandbox \"93a98ed96ac3680424c852a6603d90420cf6abbf731478b9753ef5396ef9d838\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b483ab5d353dcb5450e64bdf51c57f4fc2940a6c60bc5258658c9c0f242a742\"" Mar 7 01:47:21.640688 containerd[1510]: time="2026-03-07T01:47:21.639298515Z" level=info msg="StartContainer for \"0b483ab5d353dcb5450e64bdf51c57f4fc2940a6c60bc5258658c9c0f242a742\"" Mar 7 01:47:21.714917 systemd[1]: Started cri-containerd-0b483ab5d353dcb5450e64bdf51c57f4fc2940a6c60bc5258658c9c0f242a742.scope - libcontainer container 0b483ab5d353dcb5450e64bdf51c57f4fc2940a6c60bc5258658c9c0f242a742. Mar 7 01:47:21.782253 containerd[1510]: time="2026-03-07T01:47:21.782105082Z" level=info msg="StartContainer for \"0b483ab5d353dcb5450e64bdf51c57f4fc2940a6c60bc5258658c9c0f242a742\" returns successfully" Mar 7 01:47:22.036557 systemd[1]: Created slice kubepods-besteffort-podbbd29171_97d8_4573_957e_b074ea425f68.slice - libcontainer container kubepods-besteffort-podbbd29171_97d8_4573_957e_b074ea425f68.slice. 
Mar 7 01:47:22.044691 containerd[1510]: time="2026-03-07T01:47:22.044183888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7n5xl,Uid:bbd29171-97d8-4573-957e-b074ea425f68,Namespace:calico-system,Attempt:0,}" Mar 7 01:47:22.166470 containerd[1510]: time="2026-03-07T01:47:22.166289762Z" level=error msg="Failed to destroy network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:22.167478 containerd[1510]: time="2026-03-07T01:47:22.167180434Z" level=error msg="encountered an error cleaning up failed sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:22.167478 containerd[1510]: time="2026-03-07T01:47:22.167268361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7n5xl,Uid:bbd29171-97d8-4573-957e-b074ea425f68,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:47:22.168107 kubelet[2692]: E0307 01:47:22.168051 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 7 01:47:22.168367 kubelet[2692]: E0307 01:47:22.168325 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:47:22.168704 kubelet[2692]: E0307 01:47:22.168503 2692 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7n5xl" Mar 7 01:47:22.168902 kubelet[2692]: E0307 01:47:22.168649 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7n5xl_calico-system(bbd29171-97d8-4573-957e-b074ea425f68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7n5xl_calico-system(bbd29171-97d8-4573-957e-b074ea425f68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7n5xl" podUID="bbd29171-97d8-4573-957e-b074ea425f68" Mar 7 01:47:22.216982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7-shm.mount: Deactivated 
successfully. Mar 7 01:47:22.217165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391-shm.mount: Deactivated successfully. Mar 7 01:47:22.218584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b-shm.mount: Deactivated successfully. Mar 7 01:47:22.218904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad-shm.mount: Deactivated successfully. Mar 7 01:47:22.219046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1-shm.mount: Deactivated successfully. Mar 7 01:47:22.367173 kubelet[2692]: I0307 01:47:22.367122 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:22.376318 kubelet[2692]: I0307 01:47:22.376278 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:22.378408 kubelet[2692]: I0307 01:47:22.378380 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:22.380275 kubelet[2692]: I0307 01:47:22.380247 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:22.382475 kubelet[2692]: I0307 01:47:22.381812 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:22.444729 kubelet[2692]: I0307 01:47:22.442544 2692 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:22.502768 containerd[1510]: time="2026-03-07T01:47:22.502589469Z" level=info msg="StopPodSandbox for \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\"" Mar 7 01:47:22.515520 containerd[1510]: time="2026-03-07T01:47:22.514412309Z" level=info msg="StopPodSandbox for \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\"" Mar 7 01:47:22.515520 containerd[1510]: time="2026-03-07T01:47:22.515046319Z" level=info msg="StopPodSandbox for \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\"" Mar 7 01:47:22.515520 containerd[1510]: time="2026-03-07T01:47:22.515355402Z" level=info msg="Ensure that sandbox 033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007 in task-service has been cleanup successfully" Mar 7 01:47:22.527708 containerd[1510]: time="2026-03-07T01:47:22.525299715Z" level=info msg="StopPodSandbox for \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\"" Mar 7 01:47:22.527708 containerd[1510]: time="2026-03-07T01:47:22.525652681Z" level=info msg="Ensure that sandbox 078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae in task-service has been cleanup successfully" Mar 7 01:47:22.531753 containerd[1510]: time="2026-03-07T01:47:22.531646819Z" level=info msg="StopPodSandbox for \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\"" Mar 7 01:47:22.533090 containerd[1510]: time="2026-03-07T01:47:22.488652653Z" level=info msg="StopPodSandbox for \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\"" Mar 7 01:47:22.535572 containerd[1510]: time="2026-03-07T01:47:22.535200431Z" level=info msg="Ensure that sandbox e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7 in task-service has been cleanup successfully" Mar 7 01:47:22.536526 containerd[1510]: time="2026-03-07T01:47:22.536364005Z" level=info msg="Ensure that sandbox 
3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b in task-service has been cleanup successfully" Mar 7 01:47:22.551310 containerd[1510]: time="2026-03-07T01:47:22.550736882Z" level=info msg="Ensure that sandbox 7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391 in task-service has been cleanup successfully" Mar 7 01:47:22.554638 kubelet[2692]: I0307 01:47:22.554308 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-2g5n5" podStartSLOduration=1.976632553 podStartE2EDuration="28.554129322s" podCreationTimestamp="2026-03-07 01:46:54 +0000 UTC" firstStartedPulling="2026-03-07 01:46:54.835123514 +0000 UTC m=+27.148560834" lastFinishedPulling="2026-03-07 01:47:21.412620289 +0000 UTC m=+53.726057603" observedRunningTime="2026-03-07 01:47:22.485116676 +0000 UTC m=+54.798554013" watchObservedRunningTime="2026-03-07 01:47:22.554129322 +0000 UTC m=+54.867566645" Mar 7 01:47:22.557878 containerd[1510]: time="2026-03-07T01:47:22.557824403Z" level=info msg="Ensure that sandbox 9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad in task-service has been cleanup successfully" Mar 7 01:47:22.573081 kubelet[2692]: I0307 01:47:22.573036 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:22.587013 containerd[1510]: time="2026-03-07T01:47:22.586951730Z" level=info msg="StopPodSandbox for \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\"" Mar 7 01:47:22.587421 containerd[1510]: time="2026-03-07T01:47:22.587291509Z" level=info msg="Ensure that sandbox 8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857 in task-service has been cleanup successfully" Mar 7 01:47:22.597465 kubelet[2692]: I0307 01:47:22.596602 2692 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:22.611163 containerd[1510]: time="2026-03-07T01:47:22.610332439Z" level=info msg="StopPodSandbox for \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\"" Mar 7 01:47:22.614917 containerd[1510]: time="2026-03-07T01:47:22.614740450Z" level=info msg="Ensure that sandbox 4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1 in task-service has been cleanup successfully" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.100 [INFO][3833] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.107 [INFO][3833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" iface="eth0" netns="/var/run/netns/cni-4858fbd8-1295-2f6b-d621-58a2705395de" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.116 [INFO][3833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" iface="eth0" netns="/var/run/netns/cni-4858fbd8-1295-2f6b-d621-58a2705395de" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.119 [INFO][3833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" iface="eth0" netns="/var/run/netns/cni-4858fbd8-1295-2f6b-d621-58a2705395de" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.119 [INFO][3833] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.119 [INFO][3833] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.528 [INFO][3929] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.536 [INFO][3929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.538 [INFO][3929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.573 [WARNING][3929] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.573 [INFO][3929] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.577 [INFO][3929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.609124 containerd[1510]: 2026-03-07 01:47:23.589 [INFO][3833] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:23.619479 containerd[1510]: time="2026-03-07T01:47:23.609711221Z" level=info msg="TearDown network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" successfully" Mar 7 01:47:23.619479 containerd[1510]: time="2026-03-07T01:47:23.609759484Z" level=info msg="StopPodSandbox for \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" returns successfully" Mar 7 01:47:23.617607 systemd[1]: run-netns-cni\x2d4858fbd8\x2d1295\x2d2f6b\x2dd621\x2d58a2705395de.mount: Deactivated successfully. 
Mar 7 01:47:23.623518 containerd[1510]: time="2026-03-07T01:47:23.622931080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-2mdgg,Uid:1c858c32-39fe-4390-b516-9157137885fe,Namespace:calico-system,Attempt:1,}" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.023 [INFO][3825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.023 [INFO][3825] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" iface="eth0" netns="/var/run/netns/cni-f00ff984-2071-84a8-b657-d1ac806e0a31" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.024 [INFO][3825] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" iface="eth0" netns="/var/run/netns/cni-f00ff984-2071-84a8-b657-d1ac806e0a31" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.028 [INFO][3825] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" iface="eth0" netns="/var/run/netns/cni-f00ff984-2071-84a8-b657-d1ac806e0a31" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.028 [INFO][3825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.028 [INFO][3825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.529 [INFO][3915] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.536 [INFO][3915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.577 [INFO][3915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.602 [WARNING][3915] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.602 [INFO][3915] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.616 [INFO][3915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.632706 containerd[1510]: 2026-03-07 01:47:23.624 [INFO][3825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:23.632706 containerd[1510]: time="2026-03-07T01:47:23.631074857Z" level=info msg="TearDown network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" successfully" Mar 7 01:47:23.632706 containerd[1510]: time="2026-03-07T01:47:23.631106156Z" level=info msg="StopPodSandbox for \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" returns successfully" Mar 7 01:47:23.637363 systemd[1]: run-netns-cni\x2df00ff984\x2d2071\x2d84a8\x2db657\x2dd1ac806e0a31.mount: Deactivated successfully. 
Mar 7 01:47:23.645213 containerd[1510]: time="2026-03-07T01:47:23.643177769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7n5xl,Uid:bbd29171-97d8-4573-957e-b074ea425f68,Namespace:calico-system,Attempt:1,}" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.172 [INFO][3868] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.172 [INFO][3868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" iface="eth0" netns="/var/run/netns/cni-c5b47233-4aaa-b5d7-5b9a-6b1c2654e98e" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.173 [INFO][3868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" iface="eth0" netns="/var/run/netns/cni-c5b47233-4aaa-b5d7-5b9a-6b1c2654e98e" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.175 [INFO][3868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" iface="eth0" netns="/var/run/netns/cni-c5b47233-4aaa-b5d7-5b9a-6b1c2654e98e" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.175 [INFO][3868] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.175 [INFO][3868] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.529 [INFO][3937] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.536 [INFO][3937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.613 [INFO][3937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.664 [WARNING][3937] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.665 [INFO][3937] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.673 [INFO][3937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.709802 containerd[1510]: 2026-03-07 01:47:23.699 [INFO][3868] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:23.718744 containerd[1510]: time="2026-03-07T01:47:23.716064770Z" level=info msg="TearDown network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" successfully" Mar 7 01:47:23.718744 containerd[1510]: time="2026-03-07T01:47:23.716141260Z" level=info msg="StopPodSandbox for \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" returns successfully" Mar 7 01:47:23.726585 containerd[1510]: time="2026-03-07T01:47:23.725734458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lckw9,Uid:b4861714-eb6d-4d00-95f5-6b972ed0a0fc,Namespace:kube-system,Attempt:1,}" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.172 [INFO][3851] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.172 [INFO][3851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" iface="eth0" netns="/var/run/netns/cni-883b97cb-fdfe-88e0-5100-27f37168fdd5" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.173 [INFO][3851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" iface="eth0" netns="/var/run/netns/cni-883b97cb-fdfe-88e0-5100-27f37168fdd5" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.176 [INFO][3851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" iface="eth0" netns="/var/run/netns/cni-883b97cb-fdfe-88e0-5100-27f37168fdd5" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.176 [INFO][3851] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.176 [INFO][3851] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.530 [INFO][3938] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.538 [INFO][3938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.675 [INFO][3938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.715 [WARNING][3938] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.715 [INFO][3938] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.729 [INFO][3938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.776035 containerd[1510]: 2026-03-07 01:47:23.748 [INFO][3851] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:23.777851 containerd[1510]: time="2026-03-07T01:47:23.777163285Z" level=info msg="TearDown network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" successfully" Mar 7 01:47:23.780059 containerd[1510]: time="2026-03-07T01:47:23.780027484Z" level=info msg="StopPodSandbox for \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" returns successfully" Mar 7 01:47:23.786248 containerd[1510]: time="2026-03-07T01:47:23.786197653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtc9r,Uid:1520bae0-6d1b-46cf-a4de-5ba1d53dfc61,Namespace:kube-system,Attempt:1,}" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.011 [INFO][3828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.011 [INFO][3828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" iface="eth0" netns="/var/run/netns/cni-9d62f0e4-866c-8c2c-308c-ecde75e2f61e" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.014 [INFO][3828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" iface="eth0" netns="/var/run/netns/cni-9d62f0e4-866c-8c2c-308c-ecde75e2f61e" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.025 [INFO][3828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" iface="eth0" netns="/var/run/netns/cni-9d62f0e4-866c-8c2c-308c-ecde75e2f61e" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.025 [INFO][3828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.025 [INFO][3828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.530 [INFO][3913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.540 [INFO][3913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.729 [INFO][3913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.767 [WARNING][3913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.767 [INFO][3913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.777 [INFO][3913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.814832 containerd[1510]: 2026-03-07 01:47:23.787 [INFO][3828] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:23.824505 containerd[1510]: time="2026-03-07T01:47:23.816888397Z" level=info msg="TearDown network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" successfully" Mar 7 01:47:23.824505 containerd[1510]: time="2026-03-07T01:47:23.817247660Z" level=info msg="StopPodSandbox for \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" returns successfully" Mar 7 01:47:23.825549 containerd[1510]: time="2026-03-07T01:47:23.825506207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cf4c95d5-xdglp,Uid:501acf6a-f520-4e0a-a473-c29ca7073182,Namespace:calico-system,Attempt:1,}" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.215 [INFO][3879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.220 [INFO][3879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" iface="eth0" netns="/var/run/netns/cni-0370564a-ec88-a6fd-d954-72e480ae5153" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.221 [INFO][3879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" iface="eth0" netns="/var/run/netns/cni-0370564a-ec88-a6fd-d954-72e480ae5153" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.230 [INFO][3879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" iface="eth0" netns="/var/run/netns/cni-0370564a-ec88-a6fd-d954-72e480ae5153" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.230 [INFO][3879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.230 [INFO][3879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.531 [INFO][3951] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.541 [INFO][3951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.777 [INFO][3951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.814 [WARNING][3951] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.814 [INFO][3951] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.830 [INFO][3951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.851878 containerd[1510]: 2026-03-07 01:47:23.839 [INFO][3879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:23.853214 containerd[1510]: time="2026-03-07T01:47:23.853052313Z" level=info msg="TearDown network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" successfully" Mar 7 01:47:23.853214 containerd[1510]: time="2026-03-07T01:47:23.853097890Z" level=info msg="StopPodSandbox for \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" returns successfully" Mar 7 01:47:23.859743 containerd[1510]: time="2026-03-07T01:47:23.859623526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-9m2h4,Uid:1c47937c-9a38-4db0-80fd-8afe55163ff8,Namespace:calico-system,Attempt:1,}" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.168 [INFO][3806] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.169 [INFO][3806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" iface="eth0" netns="/var/run/netns/cni-955545f3-2374-1bc2-287d-9d221d1f6736" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.169 [INFO][3806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" iface="eth0" netns="/var/run/netns/cni-955545f3-2374-1bc2-287d-9d221d1f6736" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.170 [INFO][3806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" iface="eth0" netns="/var/run/netns/cni-955545f3-2374-1bc2-287d-9d221d1f6736" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.170 [INFO][3806] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.170 [INFO][3806] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.559 [INFO][3936] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.569 [INFO][3936] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.831 [INFO][3936] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.865 [WARNING][3936] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.865 [INFO][3936] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.877 [INFO][3936] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.910927 containerd[1510]: 2026-03-07 01:47:23.882 [INFO][3806] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:23.912395 containerd[1510]: time="2026-03-07T01:47:23.912222963Z" level=info msg="TearDown network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" successfully" Mar 7 01:47:23.913626 containerd[1510]: time="2026-03-07T01:47:23.912401354Z" level=info msg="StopPodSandbox for \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" returns successfully" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.218 [INFO][3839] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.222 [INFO][3839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" iface="eth0" netns="/var/run/netns/cni-d2cd4309-a9b7-ea3f-2aa4-2eca2ede6997" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.224 [INFO][3839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" iface="eth0" netns="/var/run/netns/cni-d2cd4309-a9b7-ea3f-2aa4-2eca2ede6997" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.224 [INFO][3839] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" iface="eth0" netns="/var/run/netns/cni-d2cd4309-a9b7-ea3f-2aa4-2eca2ede6997" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.224 [INFO][3839] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.224 [INFO][3839] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.589 [INFO][3952] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.590 [INFO][3952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.877 [INFO][3952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.940 [WARNING][3952] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.941 [INFO][3952] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.950 [INFO][3952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:23.979243 containerd[1510]: 2026-03-07 01:47:23.967 [INFO][3839] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:23.981709 containerd[1510]: time="2026-03-07T01:47:23.981097343Z" level=info msg="TearDown network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" successfully" Mar 7 01:47:23.981709 containerd[1510]: time="2026-03-07T01:47:23.981179696Z" level=info msg="StopPodSandbox for \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" returns successfully" Mar 7 01:47:23.990739 containerd[1510]: time="2026-03-07T01:47:23.990228226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ngb4s,Uid:4ac33ec3-af0f-40f4-812b-1f441606b85d,Namespace:calico-system,Attempt:1,}" Mar 7 01:47:24.064944 kubelet[2692]: I0307 01:47:24.064879 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-kube-api-access-x48dr\" (UniqueName: \"kubernetes.io/projected/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-kube-api-access-x48dr\") pod 
\"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " Mar 7 01:47:24.073198 kubelet[2692]: I0307 01:47:24.072975 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-ca-bundle\") pod \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " Mar 7 01:47:24.073198 kubelet[2692]: I0307 01:47:24.073073 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-backend-key-pair\") pod \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " Mar 7 01:47:24.073423 kubelet[2692]: I0307 01:47:24.073349 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-nginx-config\") pod \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\" (UID: \"bd088e00-80cb-43bf-b69c-b9cab2eccdd4\") " Mar 7 01:47:24.105515 kubelet[2692]: I0307 01:47:24.104345 2692 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-ca-bundle" pod "bd088e00-80cb-43bf-b69c-b9cab2eccdd4" (UID: "bd088e00-80cb-43bf-b69c-b9cab2eccdd4"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:47:24.126232 kubelet[2692]: I0307 01:47:24.125765 2692 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-nginx-config" pod "bd088e00-80cb-43bf-b69c-b9cab2eccdd4" (UID: "bd088e00-80cb-43bf-b69c-b9cab2eccdd4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:47:24.133138 kubelet[2692]: I0307 01:47:24.133078 2692 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-backend-key-pair" pod "bd088e00-80cb-43bf-b69c-b9cab2eccdd4" (UID: "bd088e00-80cb-43bf-b69c-b9cab2eccdd4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:47:24.139558 kubelet[2692]: I0307 01:47:24.139437 2692 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-kube-api-access-x48dr" pod "bd088e00-80cb-43bf-b69c-b9cab2eccdd4" (UID: "bd088e00-80cb-43bf-b69c-b9cab2eccdd4"). InnerVolumeSpecName "kube-api-access-x48dr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:47:24.174685 kubelet[2692]: I0307 01:47:24.174401 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-ca-bundle\") on node \"srv-wuc9t.gb1.brightbox.com\" DevicePath \"\"" Mar 7 01:47:24.175701 kubelet[2692]: I0307 01:47:24.174913 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-whisker-backend-key-pair\") on node \"srv-wuc9t.gb1.brightbox.com\" DevicePath \"\"" Mar 7 01:47:24.175888 kubelet[2692]: I0307 01:47:24.175831 2692 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-nginx-config\") on node \"srv-wuc9t.gb1.brightbox.com\" DevicePath \"\"" Mar 7 01:47:24.175888 kubelet[2692]: I0307 01:47:24.175861 2692 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x48dr\" (UniqueName: \"kubernetes.io/projected/bd088e00-80cb-43bf-b69c-b9cab2eccdd4-kube-api-access-x48dr\") on node \"srv-wuc9t.gb1.brightbox.com\" DevicePath \"\"" Mar 7 01:47:24.632850 systemd[1]: run-netns-cni\x2d955545f3\x2d2374\x2d1bc2\x2d287d\x2d9d221d1f6736.mount: Deactivated successfully. Mar 7 01:47:24.633052 systemd[1]: run-netns-cni\x2dd2cd4309\x2da9b7\x2dea3f\x2d2aa4\x2d2eca2ede6997.mount: Deactivated successfully. Mar 7 01:47:24.633545 systemd[1]: run-netns-cni\x2d9d62f0e4\x2d866c\x2d8c2c\x2d308c\x2decde75e2f61e.mount: Deactivated successfully. Mar 7 01:47:24.633694 systemd[1]: run-netns-cni\x2d883b97cb\x2dfdfe\x2d88e0\x2d5100\x2d27f37168fdd5.mount: Deactivated successfully. Mar 7 01:47:24.634746 systemd[1]: run-netns-cni\x2d0370564a\x2dec88\x2da6fd\x2dd954\x2d72e480ae5153.mount: Deactivated successfully. 
Mar 7 01:47:24.634885 systemd[1]: run-netns-cni\x2dc5b47233\x2d4aaa\x2db5d7\x2d5b9a\x2d6b1c2654e98e.mount: Deactivated successfully. Mar 7 01:47:24.635007 systemd[1]: var-lib-kubelet-pods-bd088e00\x2d80cb\x2d43bf\x2db69c\x2db9cab2eccdd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx48dr.mount: Deactivated successfully. Mar 7 01:47:24.635140 systemd[1]: var-lib-kubelet-pods-bd088e00\x2d80cb\x2d43bf\x2db69c\x2db9cab2eccdd4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:47:24.691437 systemd[1]: Removed slice kubepods-besteffort-podbd088e00_80cb_43bf_b69c_b9cab2eccdd4.slice - libcontainer container kubepods-besteffort-podbd088e00_80cb_43bf_b69c_b9cab2eccdd4.slice. Mar 7 01:47:24.787759 systemd-networkd[1426]: cali68a7e93ea3c: Link UP Mar 7 01:47:24.790359 systemd-networkd[1426]: cali68a7e93ea3c: Gained carrier Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:23.973 [ERROR][3983] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.038 [INFO][3983] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0 csi-node-driver- calico-system bbd29171-97d8-4573-957e-b074ea425f68 954 0 2026-03-07 01:46:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com csi-node-driver-7n5xl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali68a7e93ea3c [] [] }} 
ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.038 [INFO][3983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.376 [INFO][4095] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" HandleID="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.480 [INFO][4095] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" HandleID="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cc450), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"csi-node-driver-7n5xl", "timestamp":"2026-03-07 01:47:24.376077328 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001891e0)} Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.481 [INFO][4095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.484 [INFO][4095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.485 [INFO][4095] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.498 [INFO][4095] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.523 [INFO][4095] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.544 [INFO][4095] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.569 [INFO][4095] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.595 [INFO][4095] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.595 [INFO][4095] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.606 [INFO][4095] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3 Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.674 [INFO][4095] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 
handle="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.708 [INFO][4095] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 handle="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.708 [INFO][4095] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.708 [INFO][4095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:25.024068 containerd[1510]: 2026-03-07 01:47:24.712 [INFO][4095] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" HandleID="k8s-pod-network.d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:24.730 [INFO][3983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbd29171-97d8-4573-957e-b074ea425f68", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-7n5xl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68a7e93ea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:24.733 [INFO][3983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.193/32] ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:24.734 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68a7e93ea3c ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:24.793 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:24.826 [INFO][3983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbd29171-97d8-4573-957e-b074ea425f68", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3", Pod:"csi-node-driver-7n5xl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68a7e93ea3c", MAC:"b2:fa:a8:92:dc:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.028265 containerd[1510]: 2026-03-07 01:47:25.016 [INFO][3983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3" Namespace="calico-system" Pod="csi-node-driver-7n5xl" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:25.123929 systemd[1]: Created slice kubepods-besteffort-podf9f61d7c_7313_446e_a9f7_2139c66b7a47.slice - libcontainer container kubepods-besteffort-podf9f61d7c_7313_446e_a9f7_2139c66b7a47.slice. Mar 7 01:47:25.131948 containerd[1510]: time="2026-03-07T01:47:25.131106922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:25.132879 containerd[1510]: time="2026-03-07T01:47:25.131868852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:25.133567 containerd[1510]: time="2026-03-07T01:47:25.132794192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.133567 containerd[1510]: time="2026-03-07T01:47:25.133284404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.175819 systemd-networkd[1426]: cali3f11790c577: Link UP Mar 7 01:47:25.176208 systemd-networkd[1426]: cali3f11790c577: Gained carrier Mar 7 01:47:25.229940 systemd[1]: run-containerd-runc-k8s.io-d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3-runc.EU9jhJ.mount: Deactivated successfully. 
Mar 7 01:47:25.241000 systemd[1]: Started cri-containerd-d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3.scope - libcontainer container d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3. Mar 7 01:47:25.287688 kubelet[2692]: I0307 01:47:25.287301 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnjc\" (UniqueName: \"kubernetes.io/projected/f9f61d7c-7313-446e-a9f7-2139c66b7a47-kube-api-access-mcnjc\") pod \"whisker-85c7754466-f58ng\" (UID: \"f9f61d7c-7313-446e-a9f7-2139c66b7a47\") " pod="calico-system/whisker-85c7754466-f58ng" Mar 7 01:47:25.287688 kubelet[2692]: I0307 01:47:25.287372 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9f61d7c-7313-446e-a9f7-2139c66b7a47-whisker-backend-key-pair\") pod \"whisker-85c7754466-f58ng\" (UID: \"f9f61d7c-7313-446e-a9f7-2139c66b7a47\") " pod="calico-system/whisker-85c7754466-f58ng" Mar 7 01:47:25.287688 kubelet[2692]: I0307 01:47:25.287421 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f61d7c-7313-446e-a9f7-2139c66b7a47-whisker-ca-bundle\") pod \"whisker-85c7754466-f58ng\" (UID: \"f9f61d7c-7313-446e-a9f7-2139c66b7a47\") " pod="calico-system/whisker-85c7754466-f58ng" Mar 7 01:47:25.303718 kubelet[2692]: I0307 01:47:25.303354 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f9f61d7c-7313-446e-a9f7-2139c66b7a47-nginx-config\") pod \"whisker-85c7754466-f58ng\" (UID: \"f9f61d7c-7313-446e-a9f7-2139c66b7a47\") " pod="calico-system/whisker-85c7754466-f58ng" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:23.992 [ERROR][3995] cni-plugin/utils.go 116: File does not exist, skipping the error 
since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.170 [INFO][3995] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0 coredns-7d764666f9- kube-system b4861714-eb6d-4d00-95f5-6b972ed0a0fc 959 0 2026-03-07 01:46:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com coredns-7d764666f9-lckw9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f11790c577 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.173 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.509 [INFO][4117] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" HandleID="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.599 [INFO][4117] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" HandleID="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000273a70), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"coredns-7d764666f9-lckw9", "timestamp":"2026-03-07 01:47:24.509121114 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ab600)} Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.600 [INFO][4117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.708 [INFO][4117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.708 [INFO][4117] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.749 [INFO][4117] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.831 [INFO][4117] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.949 [INFO][4117] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:24.999 [INFO][4117] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.032 [INFO][4117] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.033 [INFO][4117] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.051 [INFO][4117] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397 Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.101 [INFO][4117] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.149 [INFO][4117] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.149 [INFO][4117] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.150 [INFO][4117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:25.305377 containerd[1510]: 2026-03-07 01:47:25.150 [INFO][4117] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" HandleID="k8s-pod-network.40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.308864 containerd[1510]: 2026-03-07 01:47:25.164 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b4861714-eb6d-4d00-95f5-6b972ed0a0fc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-lckw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f11790c577", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.308864 containerd[1510]: 2026-03-07 01:47:25.166 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.194/32] ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.308864 containerd[1510]: 2026-03-07 01:47:25.166 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to cali3f11790c577 ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.308864 containerd[1510]: 2026-03-07 01:47:25.177 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.308864 containerd[1510]: 2026-03-07 01:47:25.179 [INFO][3995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b4861714-eb6d-4d00-95f5-6b972ed0a0fc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", 
ContainerID:"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397", Pod:"coredns-7d764666f9-lckw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f11790c577", MAC:"12:92:26:0a:e8:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.310634 containerd[1510]: 2026-03-07 01:47:25.299 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397" Namespace="kube-system" Pod="coredns-7d764666f9-lckw9" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:25.360746 containerd[1510]: time="2026-03-07T01:47:25.360116179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:25.360746 containerd[1510]: time="2026-03-07T01:47:25.360261670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:25.361986 containerd[1510]: time="2026-03-07T01:47:25.361913274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.363348 containerd[1510]: time="2026-03-07T01:47:25.363055484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.440054 systemd[1]: Started cri-containerd-40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397.scope - libcontainer container 40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397. Mar 7 01:47:25.583257 containerd[1510]: time="2026-03-07T01:47:25.582940301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lckw9,Uid:b4861714-eb6d-4d00-95f5-6b972ed0a0fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397\"" Mar 7 01:47:25.601329 containerd[1510]: time="2026-03-07T01:47:25.600599876Z" level=info msg="CreateContainer within sandbox \"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:47:25.657086 systemd-networkd[1426]: calicaf85a9dfd1: Link UP Mar 7 01:47:25.657784 systemd-networkd[1426]: calicaf85a9dfd1: Gained carrier Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:23.959 [ERROR][3973] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:24.025 [INFO][3973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0 calico-apiserver-db79945d8- calico-system 
1c858c32-39fe-4390-b516-9157137885fe 956 0 2026-03-07 01:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db79945d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com calico-apiserver-db79945d8-2mdgg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calicaf85a9dfd1 [] [] }} ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:24.025 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:24.681 [INFO][4071] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" HandleID="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:24.752 [INFO][4071] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" HandleID="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fde0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"calico-apiserver-db79945d8-2mdgg", "timestamp":"2026-03-07 01:47:24.681511474 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000000840)} Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:24.752 [INFO][4071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.149 [INFO][4071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.149 [INFO][4071] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.296 [INFO][4071] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.353 [INFO][4071] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.420 [INFO][4071] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.477 [INFO][4071] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.489 [INFO][4071] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.489 [INFO][4071] ipam/ipam.go 1245: Attempting to 
assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.498 [INFO][4071] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29 Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.544 [INFO][4071] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.636 [INFO][4071] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.636 [INFO][4071] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.195/26] handle="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.636 [INFO][4071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:47:25.697787 containerd[1510]: 2026-03-07 01:47:25.636 [INFO][4071] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" HandleID="k8s-pod-network.19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.642 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c858c32-39fe-4390-b516-9157137885fe", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-db79945d8-2mdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicaf85a9dfd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.643 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.195/32] ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.644 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaf85a9dfd1 ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.661 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.665 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c858c32-39fe-4390-b516-9157137885fe", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29", Pod:"calico-apiserver-db79945d8-2mdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicaf85a9dfd1", MAC:"62:91:ef:bd:0b:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:25.699128 containerd[1510]: 2026-03-07 01:47:25.693 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29" Namespace="calico-system" Pod="calico-apiserver-db79945d8-2mdgg" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:25.741180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764020008.mount: 
Deactivated successfully. Mar 7 01:47:25.750026 containerd[1510]: time="2026-03-07T01:47:25.741626730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c7754466-f58ng,Uid:f9f61d7c-7313-446e-a9f7-2139c66b7a47,Namespace:calico-system,Attempt:0,}" Mar 7 01:47:25.770380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056995338.mount: Deactivated successfully. Mar 7 01:47:25.791693 containerd[1510]: time="2026-03-07T01:47:25.791148646Z" level=info msg="CreateContainer within sandbox \"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c07873c673fc23c0e724d1cdaf395ce11fd66eb52f03edb5aee573d616e41f60\"" Mar 7 01:47:25.794402 containerd[1510]: time="2026-03-07T01:47:25.794334527Z" level=info msg="StartContainer for \"c07873c673fc23c0e724d1cdaf395ce11fd66eb52f03edb5aee573d616e41f60\"" Mar 7 01:47:25.830811 containerd[1510]: time="2026-03-07T01:47:25.820024642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:25.830811 containerd[1510]: time="2026-03-07T01:47:25.825793722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:25.830811 containerd[1510]: time="2026-03-07T01:47:25.825821777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.830811 containerd[1510]: time="2026-03-07T01:47:25.826015908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:25.925933 systemd[1]: Started cri-containerd-19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29.scope - libcontainer container 19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29. 
Mar 7 01:47:25.939877 systemd-networkd[1426]: calif4c1505b610: Link UP Mar 7 01:47:25.943012 systemd-networkd[1426]: calif4c1505b610: Gained carrier Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.296 [ERROR][4025] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.391 [INFO][4025] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0 calico-kube-controllers-66cf4c95d5- calico-system 501acf6a-f520-4e0a-a473-c29ca7073182 953 0 2026-03-07 01:46:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66cf4c95d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com calico-kube-controllers-66cf4c95d5-xdglp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4c1505b610 [] [] }} ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.393 [INFO][4025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.792 [INFO][4164] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" HandleID="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.983 [INFO][4164] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" HandleID="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001026a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"calico-kube-controllers-66cf4c95d5-xdglp", "timestamp":"2026-03-07 01:47:24.792630981 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003738c0)} Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:24.984 [INFO][4164] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.640 [INFO][4164] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.640 [INFO][4164] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.650 [INFO][4164] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.690 [INFO][4164] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.710 [INFO][4164] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.717 [INFO][4164] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.736 [INFO][4164] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.736 [INFO][4164] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.773 [INFO][4164] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8 Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.792 [INFO][4164] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.855 [INFO][4164] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 handle="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.855 [INFO][4164] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.855 [INFO][4164] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:26.023152 containerd[1510]: 2026-03-07 01:47:25.855 [INFO][4164] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" HandleID="k8s-pod-network.7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:25.866 [INFO][4025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0", GenerateName:"calico-kube-controllers-66cf4c95d5-", Namespace:"calico-system", SelfLink:"", UID:"501acf6a-f520-4e0a-a473-c29ca7073182", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cf4c95d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-66cf4c95d5-xdglp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4c1505b610", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:25.874 [INFO][4025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.196/32] ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:25.874 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4c1505b610 ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:25.955 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:25.976 [INFO][4025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0", GenerateName:"calico-kube-controllers-66cf4c95d5-", Namespace:"calico-system", SelfLink:"", UID:"501acf6a-f520-4e0a-a473-c29ca7073182", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cf4c95d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8", Pod:"calico-kube-controllers-66cf4c95d5-xdglp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4c1505b610", MAC:"22:22:fd:67:ef:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.026869 containerd[1510]: 2026-03-07 01:47:26.014 [INFO][4025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8" Namespace="calico-system" Pod="calico-kube-controllers-66cf4c95d5-xdglp" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:26.038033 kubelet[2692]: I0307 01:47:26.037787 2692 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bd088e00-80cb-43bf-b69c-b9cab2eccdd4" path="/var/lib/kubelet/pods/bd088e00-80cb-43bf-b69c-b9cab2eccdd4/volumes" Mar 7 01:47:26.052537 containerd[1510]: time="2026-03-07T01:47:26.049089195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7n5xl,Uid:bbd29171-97d8-4573-957e-b074ea425f68,Namespace:calico-system,Attempt:1,} returns sandbox id \"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3\"" Mar 7 01:47:26.050918 systemd[1]: Started cri-containerd-c07873c673fc23c0e724d1cdaf395ce11fd66eb52f03edb5aee573d616e41f60.scope - libcontainer container c07873c673fc23c0e724d1cdaf395ce11fd66eb52f03edb5aee573d616e41f60. Mar 7 01:47:26.084598 containerd[1510]: time="2026-03-07T01:47:26.083756319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:47:26.189507 systemd-networkd[1426]: calic89fde6c13d: Link UP Mar 7 01:47:26.198870 systemd-networkd[1426]: calic89fde6c13d: Gained carrier Mar 7 01:47:26.204283 containerd[1510]: time="2026-03-07T01:47:26.195001971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:26.204283 containerd[1510]: time="2026-03-07T01:47:26.197152603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:26.204283 containerd[1510]: time="2026-03-07T01:47:26.197175373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.204283 containerd[1510]: time="2026-03-07T01:47:26.197362572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:24.488 [ERROR][4125] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:24.752 [INFO][4125] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0 goldmane-9f7667bb8- calico-system 4ac33ec3-af0f-40f4-812b-1f441606b85d 962 0 2026-03-07 01:46:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com goldmane-9f7667bb8-ngb4s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic89fde6c13d [] [] }} ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:24.753 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:24.943 [INFO][4219] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" HandleID="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.030 [INFO][4219] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" HandleID="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"goldmane-9f7667bb8-ngb4s", "timestamp":"2026-03-07 01:47:24.943432317 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000189a20)} Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.031 [INFO][4219] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.861 [INFO][4219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.864 [INFO][4219] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.890 [INFO][4219] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:25.952 [INFO][4219] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.007 [INFO][4219] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.034 [INFO][4219] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.058 [INFO][4219] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.061 [INFO][4219] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.073 [INFO][4219] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53 Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.108 [INFO][4219] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.159 [INFO][4219] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 handle="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.160 [INFO][4219] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.160 [INFO][4219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:26.260695 containerd[1510]: 2026-03-07 01:47:26.160 [INFO][4219] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" HandleID="k8s-pod-network.295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.173 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4ac33ec3-af0f-40f4-812b-1f441606b85d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-9f7667bb8-ngb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic89fde6c13d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.173 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.197/32] ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.173 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic89fde6c13d ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.197 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.198 [INFO][4125] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4ac33ec3-af0f-40f4-812b-1f441606b85d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53", Pod:"goldmane-9f7667bb8-ngb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic89fde6c13d", MAC:"0a:36:f8:fb:4c:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.264893 containerd[1510]: 2026-03-07 01:47:26.229 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53" Namespace="calico-system" Pod="goldmane-9f7667bb8-ngb4s" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:26.347926 systemd[1]: Started cri-containerd-7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8.scope - libcontainer container 7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8. Mar 7 01:47:26.373465 systemd-networkd[1426]: cali3f11790c577: Gained IPv6LL Mar 7 01:47:26.458821 systemd-networkd[1426]: cali9dd7755f762: Link UP Mar 7 01:47:26.467971 systemd-networkd[1426]: cali9dd7755f762: Gained carrier Mar 7 01:47:26.499509 containerd[1510]: time="2026-03-07T01:47:26.499031582Z" level=info msg="StartContainer for \"c07873c673fc23c0e724d1cdaf395ce11fd66eb52f03edb5aee573d616e41f60\" returns successfully" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:24.362 [ERROR][4014] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:24.471 [INFO][4014] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0 coredns-7d764666f9- kube-system 1520bae0-6d1b-46cf-a4de-5ba1d53dfc61 960 0 2026-03-07 01:46:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com coredns-7d764666f9-jtc9r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9dd7755f762 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:24.472 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:25.032 [INFO][4177] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" HandleID="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:25.073 [INFO][4177] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" HandleID="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000228440), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"coredns-7d764666f9-jtc9r", "timestamp":"2026-03-07 01:47:25.032586209 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114840)} Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:25.073 [INFO][4177] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.160 [INFO][4177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.160 [INFO][4177] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.208 [INFO][4177] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.253 [INFO][4177] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.276 [INFO][4177] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.282 [INFO][4177] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.304 [INFO][4177] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.305 [INFO][4177] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.318 [INFO][4177] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5 Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.366 [INFO][4177] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 
handle="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.391 [INFO][4177] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.198/26] block=192.168.122.192/26 handle="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.392 [INFO][4177] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.198/26] handle="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.392 [INFO][4177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:26.546603 containerd[1510]: 2026-03-07 01:47:26.392 [INFO][4177] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.198/26] IPv6=[] ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" HandleID="k8s-pod-network.6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.549864 containerd[1510]: 2026-03-07 01:47:26.420 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 
33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7d764666f9-jtc9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dd7755f762", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.549864 containerd[1510]: 2026-03-07 01:47:26.423 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.198/32] ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" 
WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.549864 containerd[1510]: 2026-03-07 01:47:26.427 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dd7755f762 ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.549864 containerd[1510]: 2026-03-07 01:47:26.458 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.549864 containerd[1510]: 2026-03-07 01:47:26.461 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5", Pod:"coredns-7d764666f9-jtc9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dd7755f762", MAC:"de:58:db:ba:a8:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.550408 containerd[1510]: 2026-03-07 01:47:26.517 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5" Namespace="kube-system" Pod="coredns-7d764666f9-jtc9r" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:26.606870 containerd[1510]: time="2026-03-07T01:47:26.581553572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:26.606870 containerd[1510]: time="2026-03-07T01:47:26.581713373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:26.606870 containerd[1510]: time="2026-03-07T01:47:26.581735541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.606870 containerd[1510]: time="2026-03-07T01:47:26.581884458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.625804 systemd-networkd[1426]: cali68a7e93ea3c: Gained IPv6LL Mar 7 01:47:26.664687 systemd-networkd[1426]: cali15404a66e3c: Link UP Mar 7 01:47:26.680162 systemd-networkd[1426]: cali15404a66e3c: Gained carrier Mar 7 01:47:26.812909 systemd[1]: Started cri-containerd-295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53.scope - libcontainer container 295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53. 
Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:24.447 [ERROR][4039] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:24.529 [INFO][4039] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0 calico-apiserver-db79945d8- calico-system 1c47937c-9a38-4db0-80fd-8afe55163ff8 961 0 2026-03-07 01:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db79945d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com calico-apiserver-db79945d8-9m2h4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali15404a66e3c [] [] }} ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:24.529 [INFO][4039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:25.047 [INFO][4198] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" HandleID="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" 
Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:25.100 [INFO][4198] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" HandleID="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e110), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"calico-apiserver-db79945d8-9m2h4", "timestamp":"2026-03-07 01:47:25.047943058 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00066a000)} Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:25.101 [INFO][4198] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.399 [INFO][4198] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.399 [INFO][4198] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.418 [INFO][4198] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.451 [INFO][4198] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.493 [INFO][4198] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.504 [INFO][4198] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.514 [INFO][4198] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.518 [INFO][4198] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.524 [INFO][4198] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090 Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.551 [INFO][4198] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.575 [INFO][4198] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.199/26] block=192.168.122.192/26 handle="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.575 [INFO][4198] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.199/26] handle="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.575 [INFO][4198] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:26.827081 containerd[1510]: 2026-03-07 01:47:26.575 [INFO][4198] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.199/26] IPv6=[] ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" HandleID="k8s-pod-network.4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.621 [INFO][4039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c47937c-9a38-4db0-80fd-8afe55163ff8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-db79945d8-9m2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15404a66e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.621 [INFO][4039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.199/32] ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.621 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15404a66e3c ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.705 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" 
Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.707 [INFO][4039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c47937c-9a38-4db0-80fd-8afe55163ff8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090", Pod:"calico-apiserver-db79945d8-9m2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15404a66e3c", 
MAC:"3e:b7:85:66:6a:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:26.829260 containerd[1510]: 2026-03-07 01:47:26.731 [INFO][4039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090" Namespace="calico-system" Pod="calico-apiserver-db79945d8-9m2h4" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:26.870221 containerd[1510]: time="2026-03-07T01:47:26.868434523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:26.870221 containerd[1510]: time="2026-03-07T01:47:26.868550666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:26.870221 containerd[1510]: time="2026-03-07T01:47:26.868571563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.870221 containerd[1510]: time="2026-03-07T01:47:26.869798875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:26.909818 systemd-networkd[1426]: cali77127c9ba27: Link UP Mar 7 01:47:26.922560 systemd-networkd[1426]: cali77127c9ba27: Gained carrier Mar 7 01:47:26.980883 kubelet[2692]: I0307 01:47:26.978628 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lckw9" podStartSLOduration=53.978259692 podStartE2EDuration="53.978259692s" podCreationTimestamp="2026-03-07 01:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:47:26.973295911 +0000 UTC m=+59.286733269" watchObservedRunningTime="2026-03-07 01:47:26.978259692 +0000 UTC m=+59.291697022" Mar 7 01:47:27.008979 systemd-networkd[1426]: calif4c1505b610: Gained IPv6LL Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.148 [ERROR][4375] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.232 [INFO][4375] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0 whisker-85c7754466- calico-system f9f61d7c-7313-446e-a9f7-2139c66b7a47 994 0 2026-03-07 01:47:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85c7754466 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-wuc9t.gb1.brightbox.com whisker-85c7754466-f58ng eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali77127c9ba27 [] [] }} ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" 
WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.233 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.588 [INFO][4454] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" HandleID="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.617 [INFO][4454] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" HandleID="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2120), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-wuc9t.gb1.brightbox.com", "pod":"whisker-85c7754466-f58ng", "timestamp":"2026-03-07 01:47:26.588191018 +0000 UTC"}, Hostname:"srv-wuc9t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114420)} Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.618 [INFO][4454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.624 [INFO][4454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.625 [INFO][4454] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-wuc9t.gb1.brightbox.com' Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.655 [INFO][4454] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.722 [INFO][4454] ipam/ipam.go 409: Looking up existing affinities for host host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.751 [INFO][4454] ipam/ipam.go 526: Trying affinity for 192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.761 [INFO][4454] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.768 [INFO][4454] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.768 [INFO][4454] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.781 [INFO][4454] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17 Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.803 [INFO][4454] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.192/26 
handle="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.837 [INFO][4454] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.200/26] block=192.168.122.192/26 handle="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.837 [INFO][4454] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.200/26] handle="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" host="srv-wuc9t.gb1.brightbox.com" Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.837 [INFO][4454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:27.035267 containerd[1510]: 2026-03-07 01:47:26.837 [INFO][4454] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.200/26] IPv6=[] ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" HandleID="k8s-pod-network.fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.878 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0", GenerateName:"whisker-85c7754466-", Namespace:"calico-system", SelfLink:"", UID:"f9f61d7c-7313-446e-a9f7-2139c66b7a47", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 
47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85c7754466", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"", Pod:"whisker-85c7754466-f58ng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77127c9ba27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.881 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.200/32] ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.881 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77127c9ba27 ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.921 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" 
Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.955 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0", GenerateName:"whisker-85c7754466-", Namespace:"calico-system", SelfLink:"", UID:"f9f61d7c-7313-446e-a9f7-2139c66b7a47", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85c7754466", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17", Pod:"whisker-85c7754466-f58ng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77127c9ba27", MAC:"26:b7:4d:8c:6f:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:27.038982 containerd[1510]: 2026-03-07 01:47:26.995 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17" Namespace="calico-system" Pod="whisker-85c7754466-f58ng" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--85c7754466--f58ng-eth0" Mar 7 01:47:27.071948 systemd[1]: Started cri-containerd-6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5.scope - libcontainer container 6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5. Mar 7 01:47:27.085562 containerd[1510]: time="2026-03-07T01:47:27.085356403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-2mdgg,Uid:1c858c32-39fe-4390-b516-9157137885fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29\"" Mar 7 01:47:27.200182 containerd[1510]: time="2026-03-07T01:47:27.199974702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:27.207376 containerd[1510]: time="2026-03-07T01:47:27.200933356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:27.207376 containerd[1510]: time="2026-03-07T01:47:27.200964178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:27.207376 containerd[1510]: time="2026-03-07T01:47:27.206493029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:27.250249 containerd[1510]: time="2026-03-07T01:47:27.249816612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:47:27.250249 containerd[1510]: time="2026-03-07T01:47:27.249929134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:47:27.250249 containerd[1510]: time="2026-03-07T01:47:27.249950492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:27.250249 containerd[1510]: time="2026-03-07T01:47:27.250095562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:47:27.360253 systemd[1]: Started cri-containerd-fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17.scope - libcontainer container fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17. Mar 7 01:47:27.415810 containerd[1510]: time="2026-03-07T01:47:27.415074714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtc9r,Uid:1520bae0-6d1b-46cf-a4de-5ba1d53dfc61,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5\"" Mar 7 01:47:27.423923 systemd[1]: Started cri-containerd-4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090.scope - libcontainer container 4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090. 
Mar 7 01:47:27.454293 containerd[1510]: time="2026-03-07T01:47:27.452216997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ngb4s,Uid:4ac33ec3-af0f-40f4-812b-1f441606b85d,Namespace:calico-system,Attempt:1,} returns sandbox id \"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53\"" Mar 7 01:47:27.488753 containerd[1510]: time="2026-03-07T01:47:27.488645980Z" level=info msg="CreateContainer within sandbox \"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:47:27.494012 containerd[1510]: time="2026-03-07T01:47:27.492582206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cf4c95d5-xdglp,Uid:501acf6a-f520-4e0a-a473-c29ca7073182,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8\"" Mar 7 01:47:27.557130 containerd[1510]: time="2026-03-07T01:47:27.556981723Z" level=info msg="CreateContainer within sandbox \"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c4ac1832f9b5cd399921a9049e41ca2bd8d1c114593ecff40fc17ff1a3879f2\"" Mar 7 01:47:27.558894 containerd[1510]: time="2026-03-07T01:47:27.558857719Z" level=info msg="StartContainer for \"9c4ac1832f9b5cd399921a9049e41ca2bd8d1c114593ecff40fc17ff1a3879f2\"" Mar 7 01:47:27.652918 systemd[1]: Started cri-containerd-9c4ac1832f9b5cd399921a9049e41ca2bd8d1c114593ecff40fc17ff1a3879f2.scope - libcontainer container 9c4ac1832f9b5cd399921a9049e41ca2bd8d1c114593ecff40fc17ff1a3879f2. 
Mar 7 01:47:27.713385 systemd-networkd[1426]: calicaf85a9dfd1: Gained IPv6LL Mar 7 01:47:27.749475 containerd[1510]: time="2026-03-07T01:47:27.748431097Z" level=info msg="StartContainer for \"9c4ac1832f9b5cd399921a9049e41ca2bd8d1c114593ecff40fc17ff1a3879f2\" returns successfully" Mar 7 01:47:27.756585 containerd[1510]: time="2026-03-07T01:47:27.755516881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c7754466-f58ng,Uid:f9f61d7c-7313-446e-a9f7-2139c66b7a47,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17\"" Mar 7 01:47:27.776842 systemd-networkd[1426]: cali15404a66e3c: Gained IPv6LL Mar 7 01:47:27.790311 containerd[1510]: time="2026-03-07T01:47:27.787885326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db79945d8-9m2h4,Uid:1c47937c-9a38-4db0-80fd-8afe55163ff8,Namespace:calico-system,Attempt:1,} returns sandbox id \"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090\"" Mar 7 01:47:27.843718 systemd-networkd[1426]: cali9dd7755f762: Gained IPv6LL Mar 7 01:47:27.969702 systemd-networkd[1426]: calic89fde6c13d: Gained IPv6LL Mar 7 01:47:27.985295 containerd[1510]: time="2026-03-07T01:47:27.984443508Z" level=info msg="StopPodSandbox for \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\"" Mar 7 01:47:28.188692 kubelet[2692]: I0307 01:47:28.184235 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jtc9r" podStartSLOduration=55.184214818 podStartE2EDuration="55.184214818s" podCreationTimestamp="2026-03-07 01:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:47:28.089206809 +0000 UTC m=+60.402644138" watchObservedRunningTime="2026-03-07 01:47:28.184214818 +0000 UTC m=+60.497652140" Mar 7 01:47:28.224894 systemd-networkd[1426]: cali77127c9ba27: Gained IPv6LL 
Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.214 [WARNING][4745] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b4861714-eb6d-4d00-95f5-6b972ed0a0fc", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397", Pod:"coredns-7d764666f9-lckw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f11790c577", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.217 [INFO][4745] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.218 [INFO][4745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" iface="eth0" netns="" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.219 [INFO][4745] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.219 [INFO][4745] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.363 [INFO][4758] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.364 [INFO][4758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.366 [INFO][4758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.392 [WARNING][4758] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.392 [INFO][4758] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.395 [INFO][4758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:28.408576 containerd[1510]: 2026-03-07 01:47:28.403 [INFO][4745] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.426833 containerd[1510]: time="2026-03-07T01:47:28.409797162Z" level=info msg="TearDown network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" successfully" Mar 7 01:47:28.426833 containerd[1510]: time="2026-03-07T01:47:28.409839595Z" level=info msg="StopPodSandbox for \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" returns successfully" Mar 7 01:47:28.426833 containerd[1510]: time="2026-03-07T01:47:28.413281859Z" level=info msg="RemovePodSandbox for \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\"" Mar 7 01:47:28.426833 containerd[1510]: time="2026-03-07T01:47:28.413341720Z" level=info msg="Forcibly stopping sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\"" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.540 [WARNING][4786] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b4861714-eb6d-4d00-95f5-6b972ed0a0fc", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"40e2f4cb0a75d8d104d35320b08a7e94a7d57185d513187ff75d36182be37397", Pod:"coredns-7d764666f9-lckw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f11790c577", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.542 [INFO][4786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.542 [INFO][4786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" iface="eth0" netns="" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.542 [INFO][4786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.542 [INFO][4786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.664 [INFO][4798] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.674 [INFO][4798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.679 [INFO][4798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.726 [WARNING][4798] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.726 [INFO][4798] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" HandleID="k8s-pod-network.4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--lckw9-eth0" Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.740 [INFO][4798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:28.758232 containerd[1510]: 2026-03-07 01:47:28.746 [INFO][4786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1" Mar 7 01:47:28.762282 containerd[1510]: time="2026-03-07T01:47:28.758324827Z" level=info msg="TearDown network for sandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" successfully" Mar 7 01:47:28.767693 kernel: calico-node[4215]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:47:28.782688 containerd[1510]: time="2026-03-07T01:47:28.781613940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:28.782688 containerd[1510]: time="2026-03-07T01:47:28.781814099Z" level=info msg="RemovePodSandbox \"4834ce525d9adf2b264d12560a9872dc48d00bbb86ad3a4419d0de10bd76f0c1\" returns successfully" Mar 7 01:47:28.787050 containerd[1510]: time="2026-03-07T01:47:28.785214097Z" level=info msg="StopPodSandbox for \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\"" Mar 7 01:47:30.688728 containerd[1510]: time="2026-03-07T01:47:30.673731809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:47:30.728538 containerd[1510]: time="2026-03-07T01:47:30.726825549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:29.241 [WARNING][4829] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c858c32-39fe-4390-b516-9157137885fe", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29", Pod:"calico-apiserver-db79945d8-2mdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicaf85a9dfd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:29.337 [INFO][4829] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:29.338 [INFO][4829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" iface="eth0" netns="" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:29.338 [INFO][4829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:29.338 [INFO][4829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.629 [INFO][4839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.640 [INFO][4839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.657 [INFO][4839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.711 [WARNING][4839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.711 [INFO][4839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.724 [INFO][4839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:30.829951 containerd[1510]: 2026-03-07 01:47:30.733 [INFO][4829] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:30.866407 containerd[1510]: time="2026-03-07T01:47:30.866335183Z" level=info msg="TearDown network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" successfully" Mar 7 01:47:30.867680 containerd[1510]: time="2026-03-07T01:47:30.866682675Z" level=info msg="StopPodSandbox for \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" returns successfully" Mar 7 01:47:30.876460 containerd[1510]: time="2026-03-07T01:47:30.876406949Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:30.916694 containerd[1510]: time="2026-03-07T01:47:30.915563180Z" level=info msg="RemovePodSandbox for \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\"" Mar 7 01:47:30.922182 containerd[1510]: time="2026-03-07T01:47:30.920605948Z" level=info msg="Forcibly stopping sandbox 
\"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\"" Mar 7 01:47:31.076873 containerd[1510]: time="2026-03-07T01:47:31.076685376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:31.085690 containerd[1510]: time="2026-03-07T01:47:31.085212649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 5.001361241s" Mar 7 01:47:31.085690 containerd[1510]: time="2026-03-07T01:47:31.085349653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:47:31.113099 containerd[1510]: time="2026-03-07T01:47:31.112724089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:47:31.297533 containerd[1510]: time="2026-03-07T01:47:31.297396852Z" level=info msg="CreateContainer within sandbox \"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:47:31.436695 containerd[1510]: time="2026-03-07T01:47:31.436414283Z" level=info msg="CreateContainer within sandbox \"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bbb5a9bea7bb93bc5762026473b494c84e29bc12b1e0217f2f5df1d4e866dbab\"" Mar 7 01:47:31.440115 containerd[1510]: time="2026-03-07T01:47:31.440082133Z" level=info msg="StartContainer for \"bbb5a9bea7bb93bc5762026473b494c84e29bc12b1e0217f2f5df1d4e866dbab\"" Mar 7 01:47:31.645187 containerd[1510]: 
2026-03-07 01:47:31.483 [WARNING][4859] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c858c32-39fe-4390-b516-9157137885fe", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29", Pod:"calico-apiserver-db79945d8-2mdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicaf85a9dfd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.483 [INFO][4859] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.483 [INFO][4859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" iface="eth0" netns="" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.483 [INFO][4859] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.483 [INFO][4859] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.579 [INFO][4878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.581 [INFO][4878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.581 [INFO][4878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.613 [WARNING][4878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.613 [INFO][4878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" HandleID="k8s-pod-network.7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--2mdgg-eth0" Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.618 [INFO][4878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:31.645187 containerd[1510]: 2026-03-07 01:47:31.630 [INFO][4859] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391" Mar 7 01:47:31.648085 containerd[1510]: time="2026-03-07T01:47:31.645230307Z" level=info msg="TearDown network for sandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" successfully" Mar 7 01:47:31.664655 containerd[1510]: time="2026-03-07T01:47:31.664581655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:31.665184 containerd[1510]: time="2026-03-07T01:47:31.664688779Z" level=info msg="RemovePodSandbox \"7410500aaafb0e72c3418af1973df9bf61ab1030ae3bc0e4994c57a8a8727391\" returns successfully" Mar 7 01:47:31.668198 containerd[1510]: time="2026-03-07T01:47:31.667334402Z" level=info msg="StopPodSandbox for \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\"" Mar 7 01:47:31.799983 systemd[1]: Started cri-containerd-bbb5a9bea7bb93bc5762026473b494c84e29bc12b1e0217f2f5df1d4e866dbab.scope - libcontainer container bbb5a9bea7bb93bc5762026473b494c84e29bc12b1e0217f2f5df1d4e866dbab. Mar 7 01:47:31.849531 systemd-networkd[1426]: vxlan.calico: Link UP Mar 7 01:47:31.849547 systemd-networkd[1426]: vxlan.calico: Gained carrier Mar 7 01:47:32.087991 containerd[1510]: time="2026-03-07T01:47:32.087555115Z" level=info msg="StartContainer for \"bbb5a9bea7bb93bc5762026473b494c84e29bc12b1e0217f2f5df1d4e866dbab\" returns successfully" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:31.882 [WARNING][4902] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0", GenerateName:"calico-kube-controllers-66cf4c95d5-", Namespace:"calico-system", SelfLink:"", UID:"501acf6a-f520-4e0a-a473-c29ca7073182", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cf4c95d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8", Pod:"calico-kube-controllers-66cf4c95d5-xdglp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4c1505b610", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:31.882 [INFO][4902] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:31.882 [INFO][4902] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" iface="eth0" netns="" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:31.882 [INFO][4902] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:31.882 [INFO][4902] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.040 [INFO][4930] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.040 [INFO][4930] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.040 [INFO][4930] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.062 [WARNING][4930] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.062 [INFO][4930] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.065 [INFO][4930] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:32.101298 containerd[1510]: 2026-03-07 01:47:32.082 [INFO][4902] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.101298 containerd[1510]: time="2026-03-07T01:47:32.100891736Z" level=info msg="TearDown network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" successfully" Mar 7 01:47:32.101298 containerd[1510]: time="2026-03-07T01:47:32.100929904Z" level=info msg="StopPodSandbox for \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" returns successfully" Mar 7 01:47:32.104148 containerd[1510]: time="2026-03-07T01:47:32.103045616Z" level=info msg="RemovePodSandbox for \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\"" Mar 7 01:47:32.104148 containerd[1510]: time="2026-03-07T01:47:32.103092866Z" level=info msg="Forcibly stopping sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\"" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.225 [WARNING][4980] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0", GenerateName:"calico-kube-controllers-66cf4c95d5-", Namespace:"calico-system", SelfLink:"", UID:"501acf6a-f520-4e0a-a473-c29ca7073182", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cf4c95d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8", Pod:"calico-kube-controllers-66cf4c95d5-xdglp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4c1505b610", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.226 [INFO][4980] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.226 [INFO][4980] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" iface="eth0" netns="" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.226 [INFO][4980] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.226 [INFO][4980] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.260 [INFO][4987] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.260 [INFO][4987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.261 [INFO][4987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.272 [WARNING][4987] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.272 [INFO][4987] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" HandleID="k8s-pod-network.3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--kube--controllers--66cf4c95d5--xdglp-eth0" Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.276 [INFO][4987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:32.285147 containerd[1510]: 2026-03-07 01:47:32.278 [INFO][4980] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b" Mar 7 01:47:32.286869 containerd[1510]: time="2026-03-07T01:47:32.285710261Z" level=info msg="TearDown network for sandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" successfully" Mar 7 01:47:32.295765 containerd[1510]: time="2026-03-07T01:47:32.295653677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:32.296030 containerd[1510]: time="2026-03-07T01:47:32.295982250Z" level=info msg="RemovePodSandbox \"3e89586e00c9a95f18185f16be167c24d3a8cb93b38d71a42606f413b3643c3b\" returns successfully" Mar 7 01:47:32.297373 containerd[1510]: time="2026-03-07T01:47:32.297333735Z" level=info msg="StopPodSandbox for \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\"" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.364 [WARNING][5001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbd29171-97d8-4573-957e-b074ea425f68", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3", Pod:"csi-node-driver-7n5xl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68a7e93ea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.364 [INFO][5001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.364 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" iface="eth0" netns="" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.364 [INFO][5001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.364 [INFO][5001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.411 [INFO][5009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.412 [INFO][5009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.413 [INFO][5009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.426 [WARNING][5009] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.426 [INFO][5009] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.428 [INFO][5009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:32.433729 containerd[1510]: 2026-03-07 01:47:32.431 [INFO][5001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.436032 containerd[1510]: time="2026-03-07T01:47:32.433806711Z" level=info msg="TearDown network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" successfully" Mar 7 01:47:32.436032 containerd[1510]: time="2026-03-07T01:47:32.433844804Z" level=info msg="StopPodSandbox for \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" returns successfully" Mar 7 01:47:32.436032 containerd[1510]: time="2026-03-07T01:47:32.434605245Z" level=info msg="RemovePodSandbox for \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\"" Mar 7 01:47:32.436032 containerd[1510]: time="2026-03-07T01:47:32.434645524Z" level=info msg="Forcibly stopping sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\"" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.528 [WARNING][5023] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbd29171-97d8-4573-957e-b074ea425f68", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3", Pod:"csi-node-driver-7n5xl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68a7e93ea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.533 [INFO][5023] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.533 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" iface="eth0" netns="" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.533 [INFO][5023] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.533 [INFO][5023] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.595 [INFO][5036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.596 [INFO][5036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.596 [INFO][5036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.607 [WARNING][5036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.607 [INFO][5036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" HandleID="k8s-pod-network.078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Workload="srv--wuc9t.gb1.brightbox.com-k8s-csi--node--driver--7n5xl-eth0" Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.611 [INFO][5036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:32.625822 containerd[1510]: 2026-03-07 01:47:32.619 [INFO][5023] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae" Mar 7 01:47:32.627407 containerd[1510]: time="2026-03-07T01:47:32.626083325Z" level=info msg="TearDown network for sandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" successfully" Mar 7 01:47:32.634894 containerd[1510]: time="2026-03-07T01:47:32.634809333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:32.635048 containerd[1510]: time="2026-03-07T01:47:32.634976593Z" level=info msg="RemovePodSandbox \"078026226bf4f5be889dae28a0b049faebb2bc480553dae2a4d01d4c126012ae\" returns successfully" Mar 7 01:47:32.637079 containerd[1510]: time="2026-03-07T01:47:32.636518667Z" level=info msg="StopPodSandbox for \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\"" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.739 [WARNING][5065] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5", Pod:"coredns-7d764666f9-jtc9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dd7755f762", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.739 [INFO][5065] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.739 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" iface="eth0" netns="" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.739 [INFO][5065] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.739 [INFO][5065] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.864 [INFO][5080] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.867 [INFO][5080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.867 [INFO][5080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.905 [WARNING][5080] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.905 [INFO][5080] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.912 [INFO][5080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:32.926781 containerd[1510]: 2026-03-07 01:47:32.921 [INFO][5065] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:32.927626 containerd[1510]: time="2026-03-07T01:47:32.927078096Z" level=info msg="TearDown network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" successfully" Mar 7 01:47:32.927626 containerd[1510]: time="2026-03-07T01:47:32.927131175Z" level=info msg="StopPodSandbox for \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" returns successfully" Mar 7 01:47:32.930470 containerd[1510]: time="2026-03-07T01:47:32.929032617Z" level=info msg="RemovePodSandbox for \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\"" Mar 7 01:47:32.930470 containerd[1510]: time="2026-03-07T01:47:32.929369425Z" level=info msg="Forcibly stopping sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\"" Mar 7 01:47:33.217200 systemd-networkd[1426]: vxlan.calico: Gained IPv6LL Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.073 [WARNING][5104] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1520bae0-6d1b-46cf-a4de-5ba1d53dfc61", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"6b76759a5485070c034fad5a4410f346aca6121403ac048a550c5bde7df3a8a5", Pod:"coredns-7d764666f9-jtc9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dd7755f762", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.075 [INFO][5104] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.076 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" iface="eth0" netns="" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.076 [INFO][5104] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.076 [INFO][5104] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.194 [INFO][5119] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.194 [INFO][5119] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.194 [INFO][5119] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.220 [WARNING][5119] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.221 [INFO][5119] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" HandleID="k8s-pod-network.9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Workload="srv--wuc9t.gb1.brightbox.com-k8s-coredns--7d764666f9--jtc9r-eth0" Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.225 [INFO][5119] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:33.239684 containerd[1510]: 2026-03-07 01:47:33.232 [INFO][5104] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad" Mar 7 01:47:33.239684 containerd[1510]: time="2026-03-07T01:47:33.238394544Z" level=info msg="TearDown network for sandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" successfully" Mar 7 01:47:33.252461 containerd[1510]: time="2026-03-07T01:47:33.251898372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:33.252813 containerd[1510]: time="2026-03-07T01:47:33.252758865Z" level=info msg="RemovePodSandbox \"9a55e3cb66af5971bf6fd439e2a0363c6f8f193c2aad42f52ebb251c816f98ad\" returns successfully" Mar 7 01:47:33.254138 containerd[1510]: time="2026-03-07T01:47:33.253963315Z" level=info msg="StopPodSandbox for \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\"" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.347 [WARNING][5140] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4ac33ec3-af0f-40f4-812b-1f441606b85d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53", Pod:"goldmane-9f7667bb8-ngb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic89fde6c13d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.348 [INFO][5140] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.348 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" iface="eth0" netns="" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.348 [INFO][5140] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.348 [INFO][5140] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.408 [INFO][5151] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.409 [INFO][5151] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.409 [INFO][5151] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.429 [WARNING][5151] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.431 [INFO][5151] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.434 [INFO][5151] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:33.440769 containerd[1510]: 2026-03-07 01:47:33.437 [INFO][5140] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.443381 containerd[1510]: time="2026-03-07T01:47:33.441418805Z" level=info msg="TearDown network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" successfully" Mar 7 01:47:33.443381 containerd[1510]: time="2026-03-07T01:47:33.441468291Z" level=info msg="StopPodSandbox for \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" returns successfully" Mar 7 01:47:33.443381 containerd[1510]: time="2026-03-07T01:47:33.442848261Z" level=info msg="RemovePodSandbox for \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\"" Mar 7 01:47:33.443381 containerd[1510]: time="2026-03-07T01:47:33.442897856Z" level=info msg="Forcibly stopping sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\"" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.519 [WARNING][5165] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4ac33ec3-af0f-40f4-812b-1f441606b85d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53", Pod:"goldmane-9f7667bb8-ngb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic89fde6c13d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.519 [INFO][5165] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.519 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" iface="eth0" netns="" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.519 [INFO][5165] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.519 [INFO][5165] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.560 [INFO][5172] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.560 [INFO][5172] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.560 [INFO][5172] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.571 [WARNING][5172] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.572 [INFO][5172] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" HandleID="k8s-pod-network.e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Workload="srv--wuc9t.gb1.brightbox.com-k8s-goldmane--9f7667bb8--ngb4s-eth0" Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.574 [INFO][5172] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:33.580940 containerd[1510]: 2026-03-07 01:47:33.577 [INFO][5165] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7" Mar 7 01:47:33.580940 containerd[1510]: time="2026-03-07T01:47:33.579841519Z" level=info msg="TearDown network for sandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" successfully" Mar 7 01:47:33.584307 containerd[1510]: time="2026-03-07T01:47:33.584204925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:33.584389 containerd[1510]: time="2026-03-07T01:47:33.584331931Z" level=info msg="RemovePodSandbox \"e552ea887dfe764dd27cb1a9209d8f77cd058cc07ca28f62020d0da35ba014b7\" returns successfully" Mar 7 01:47:33.585993 containerd[1510]: time="2026-03-07T01:47:33.585961182Z" level=info msg="StopPodSandbox for \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\"" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.668 [WARNING][5186] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c47937c-9a38-4db0-80fd-8afe55163ff8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090", Pod:"calico-apiserver-db79945d8-9m2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15404a66e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.669 [INFO][5186] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.669 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" iface="eth0" netns="" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.669 [INFO][5186] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.669 [INFO][5186] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.733 [INFO][5193] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.733 [INFO][5193] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.734 [INFO][5193] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.755 [WARNING][5193] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.756 [INFO][5193] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.761 [INFO][5193] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:33.769367 containerd[1510]: 2026-03-07 01:47:33.765 [INFO][5186] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.771231 containerd[1510]: time="2026-03-07T01:47:33.770290781Z" level=info msg="TearDown network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" successfully" Mar 7 01:47:33.771231 containerd[1510]: time="2026-03-07T01:47:33.770377091Z" level=info msg="StopPodSandbox for \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" returns successfully" Mar 7 01:47:33.772701 containerd[1510]: time="2026-03-07T01:47:33.772523776Z" level=info msg="RemovePodSandbox for \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\"" Mar 7 01:47:33.772701 containerd[1510]: time="2026-03-07T01:47:33.772617149Z" level=info msg="Forcibly stopping sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\"" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.863 [WARNING][5207] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0", GenerateName:"calico-apiserver-db79945d8-", Namespace:"calico-system", SelfLink:"", UID:"1c47937c-9a38-4db0-80fd-8afe55163ff8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db79945d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-wuc9t.gb1.brightbox.com", ContainerID:"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090", Pod:"calico-apiserver-db79945d8-9m2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15404a66e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.863 [INFO][5207] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.864 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" iface="eth0" netns="" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.864 [INFO][5207] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.864 [INFO][5207] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.915 [INFO][5214] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.916 [INFO][5214] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.916 [INFO][5214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.929 [WARNING][5214] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.929 [INFO][5214] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" HandleID="k8s-pod-network.8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Workload="srv--wuc9t.gb1.brightbox.com-k8s-calico--apiserver--db79945d8--9m2h4-eth0" Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.932 [INFO][5214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:33.939531 containerd[1510]: 2026-03-07 01:47:33.935 [INFO][5207] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857" Mar 7 01:47:33.940511 containerd[1510]: time="2026-03-07T01:47:33.939577264Z" level=info msg="TearDown network for sandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" successfully" Mar 7 01:47:33.945958 containerd[1510]: time="2026-03-07T01:47:33.945615018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:33.945958 containerd[1510]: time="2026-03-07T01:47:33.945778217Z" level=info msg="RemovePodSandbox \"8074721ce02dc1056ce0c6f74ceb4849f5555cf12d30f0cf1195d5bb611df857\" returns successfully" Mar 7 01:47:33.947261 containerd[1510]: time="2026-03-07T01:47:33.946809225Z" level=info msg="StopPodSandbox for \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\"" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.031 [WARNING][5228] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.032 [INFO][5228] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.032 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" iface="eth0" netns="" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.032 [INFO][5228] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.032 [INFO][5228] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.102 [INFO][5235] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.102 [INFO][5235] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.103 [INFO][5235] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.121 [WARNING][5235] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.122 [INFO][5235] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.125 [INFO][5235] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:34.133959 containerd[1510]: 2026-03-07 01:47:34.130 [INFO][5228] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.135001 containerd[1510]: time="2026-03-07T01:47:34.134780878Z" level=info msg="TearDown network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" successfully" Mar 7 01:47:34.135001 containerd[1510]: time="2026-03-07T01:47:34.134830301Z" level=info msg="StopPodSandbox for \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" returns successfully" Mar 7 01:47:34.135959 containerd[1510]: time="2026-03-07T01:47:34.135548188Z" level=info msg="RemovePodSandbox for \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\"" Mar 7 01:47:34.135959 containerd[1510]: time="2026-03-07T01:47:34.135592197Z" level=info msg="Forcibly stopping sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\"" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.227 [WARNING][5251] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" WorkloadEndpoint="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.228 [INFO][5251] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.228 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" iface="eth0" netns="" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.228 [INFO][5251] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.228 [INFO][5251] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.278 [INFO][5258] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.279 [INFO][5258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.279 [INFO][5258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.293 [WARNING][5258] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.294 [INFO][5258] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" HandleID="k8s-pod-network.033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Workload="srv--wuc9t.gb1.brightbox.com-k8s-whisker--697ddb4977--hhg2p-eth0" Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.305 [INFO][5258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:47:34.313484 containerd[1510]: 2026-03-07 01:47:34.309 [INFO][5251] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007" Mar 7 01:47:34.316630 containerd[1510]: time="2026-03-07T01:47:34.313827998Z" level=info msg="TearDown network for sandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" successfully" Mar 7 01:47:34.323785 containerd[1510]: time="2026-03-07T01:47:34.323740617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:47:34.324014 containerd[1510]: time="2026-03-07T01:47:34.323972232Z" level=info msg="RemovePodSandbox \"033cb21f2f8ec2e019f0b96ca44c85cac9b1a576d9221bc0256a751695cce007\" returns successfully" Mar 7 01:47:36.191124 containerd[1510]: time="2026-03-07T01:47:36.191012839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:36.195020 containerd[1510]: time="2026-03-07T01:47:36.194937089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:47:36.204243 containerd[1510]: time="2026-03-07T01:47:36.204167016Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:36.208694 containerd[1510]: time="2026-03-07T01:47:36.208211758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:36.210896 containerd[1510]: time="2026-03-07T01:47:36.209947407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.096455374s" Mar 7 01:47:36.211280 containerd[1510]: time="2026-03-07T01:47:36.211234369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:47:36.214040 containerd[1510]: time="2026-03-07T01:47:36.213996246Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:47:36.229818 containerd[1510]: time="2026-03-07T01:47:36.229753085Z" level=info msg="CreateContainer within sandbox \"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:47:36.265849 containerd[1510]: time="2026-03-07T01:47:36.265784892Z" level=info msg="CreateContainer within sandbox \"19cee298bd70f5b6280fb7d4ca4a794d13471cc46bab79d75654e21d163f2f29\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d5f4bb6b6db5308df72e3ac5a7a4a59014ed8587c26eff6032c9a1d9ab2e144\"" Mar 7 01:47:36.267903 containerd[1510]: time="2026-03-07T01:47:36.267869690Z" level=info msg="StartContainer for \"2d5f4bb6b6db5308df72e3ac5a7a4a59014ed8587c26eff6032c9a1d9ab2e144\"" Mar 7 01:47:36.532986 systemd[1]: Started cri-containerd-2d5f4bb6b6db5308df72e3ac5a7a4a59014ed8587c26eff6032c9a1d9ab2e144.scope - libcontainer container 2d5f4bb6b6db5308df72e3ac5a7a4a59014ed8587c26eff6032c9a1d9ab2e144. 
Mar 7 01:47:36.626282 containerd[1510]: time="2026-03-07T01:47:36.626177889Z" level=info msg="StartContainer for \"2d5f4bb6b6db5308df72e3ac5a7a4a59014ed8587c26eff6032c9a1d9ab2e144\" returns successfully" Mar 7 01:47:37.261122 kubelet[2692]: I0307 01:47:37.258024 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-db79945d8-2mdgg" podStartSLOduration=36.131520422 podStartE2EDuration="45.250637397s" podCreationTimestamp="2026-03-07 01:46:52 +0000 UTC" firstStartedPulling="2026-03-07 01:47:27.093853104 +0000 UTC m=+59.407290414" lastFinishedPulling="2026-03-07 01:47:36.212970062 +0000 UTC m=+68.526407389" observedRunningTime="2026-03-07 01:47:37.235599289 +0000 UTC m=+69.549036609" watchObservedRunningTime="2026-03-07 01:47:37.250637397 +0000 UTC m=+69.564074718" Mar 7 01:47:38.184482 kubelet[2692]: I0307 01:47:38.184212 2692 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:47:39.523497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226109994.mount: Deactivated successfully. 
Mar 7 01:47:40.385973 containerd[1510]: time="2026-03-07T01:47:40.385777237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:40.423812 containerd[1510]: time="2026-03-07T01:47:40.423624842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:47:40.428181 containerd[1510]: time="2026-03-07T01:47:40.428033043Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:40.433461 containerd[1510]: time="2026-03-07T01:47:40.432814657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:40.442253 containerd[1510]: time="2026-03-07T01:47:40.442186268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.228117626s" Mar 7 01:47:40.442505 containerd[1510]: time="2026-03-07T01:47:40.442276837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:47:40.457411 containerd[1510]: time="2026-03-07T01:47:40.457118368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:47:40.467315 containerd[1510]: time="2026-03-07T01:47:40.467244180Z" level=info msg="CreateContainer within sandbox 
\"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:47:40.652141 containerd[1510]: time="2026-03-07T01:47:40.649983072Z" level=info msg="CreateContainer within sandbox \"295db6615c0babc7c2f274a4e39de2e4e08c682c671d40850821399fc982be53\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e\"" Mar 7 01:47:40.654940 containerd[1510]: time="2026-03-07T01:47:40.654901103Z" level=info msg="StartContainer for \"d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e\"" Mar 7 01:47:40.796992 systemd[1]: Started cri-containerd-d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e.scope - libcontainer container d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e. Mar 7 01:47:40.955937 containerd[1510]: time="2026-03-07T01:47:40.955165742Z" level=info msg="StartContainer for \"d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e\" returns successfully" Mar 7 01:47:41.241446 systemd[1]: run-containerd-runc-k8s.io-d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e-runc.UU1f5I.mount: Deactivated successfully. 
Mar 7 01:47:41.248135 kubelet[2692]: I0307 01:47:41.247896 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-ngb4s" podStartSLOduration=36.290418203 podStartE2EDuration="49.246896127s" podCreationTimestamp="2026-03-07 01:46:52 +0000 UTC" firstStartedPulling="2026-03-07 01:47:27.499140119 +0000 UTC m=+59.812577434" lastFinishedPulling="2026-03-07 01:47:40.455618042 +0000 UTC m=+72.769055358" observedRunningTime="2026-03-07 01:47:41.242001178 +0000 UTC m=+73.555438506" watchObservedRunningTime="2026-03-07 01:47:41.246896127 +0000 UTC m=+73.560333449" Mar 7 01:47:41.951963 update_engine[1488]: I20260307 01:47:41.951702 1488 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:47:41.951963 update_engine[1488]: I20260307 01:47:41.951869 1488 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:47:41.954616 update_engine[1488]: I20260307 01:47:41.954236 1488 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:47:41.955475 update_engine[1488]: I20260307 01:47:41.955432 1488 omaha_request_params.cc:62] Current group set to lts Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.955714 1488 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.955740 1488 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.955771 1488 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.955860 1488 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.955994 1488 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.956017 1488 omaha_request_action.cc:272] Request: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: Mar 7 01:47:41.956134 update_engine[1488]: I20260307 01:47:41.956032 1488 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:47:42.015714 update_engine[1488]: I20260307 01:47:42.014431 1488 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:47:42.015714 update_engine[1488]: I20260307 01:47:42.014918 1488 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:47:42.023726 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:47:42.026624 update_engine[1488]: E20260307 01:47:42.026417 1488 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:47:42.026624 update_engine[1488]: I20260307 01:47:42.026571 1488 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:47:43.252311 systemd[1]: run-containerd-runc-k8s.io-d0830cb9d23540ef5f00cc9e5ea2810c054b1b9e6406c1141bb041e8061ced4e-runc.YzGRoN.mount: Deactivated successfully. 
Mar 7 01:47:45.393568 containerd[1510]: time="2026-03-07T01:47:45.391838064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:45.396258 containerd[1510]: time="2026-03-07T01:47:45.396150745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:47:45.399679 containerd[1510]: time="2026-03-07T01:47:45.398825412Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:45.403776 containerd[1510]: time="2026-03-07T01:47:45.403730141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:45.405434 containerd[1510]: time="2026-03-07T01:47:45.405294447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.948104219s" Mar 7 01:47:45.405845 containerd[1510]: time="2026-03-07T01:47:45.405691199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:47:45.411693 containerd[1510]: time="2026-03-07T01:47:45.411602555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:47:45.502368 containerd[1510]: time="2026-03-07T01:47:45.502294757Z" level=info msg="CreateContainer within sandbox 
\"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:47:45.533233 containerd[1510]: time="2026-03-07T01:47:45.533050477Z" level=info msg="CreateContainer within sandbox \"7c2ca74f82f73f313fc769bad350b1bea1ea20567afbbc97100136e533b066d8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3f7b003bd15201c1599f728c13eb5c52ee40cae85a8964056157ed375f44b74d\"" Mar 7 01:47:45.535694 containerd[1510]: time="2026-03-07T01:47:45.534440605Z" level=info msg="StartContainer for \"3f7b003bd15201c1599f728c13eb5c52ee40cae85a8964056157ed375f44b74d\"" Mar 7 01:47:45.607956 systemd[1]: Started cri-containerd-3f7b003bd15201c1599f728c13eb5c52ee40cae85a8964056157ed375f44b74d.scope - libcontainer container 3f7b003bd15201c1599f728c13eb5c52ee40cae85a8964056157ed375f44b74d. Mar 7 01:47:45.699432 containerd[1510]: time="2026-03-07T01:47:45.697449071Z" level=info msg="StartContainer for \"3f7b003bd15201c1599f728c13eb5c52ee40cae85a8964056157ed375f44b74d\" returns successfully" Mar 7 01:47:46.271892 kubelet[2692]: I0307 01:47:46.271371 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66cf4c95d5-xdglp" podStartSLOduration=34.384333629 podStartE2EDuration="52.271316483s" podCreationTimestamp="2026-03-07 01:46:54 +0000 UTC" firstStartedPulling="2026-03-07 01:47:27.524118852 +0000 UTC m=+59.837556173" lastFinishedPulling="2026-03-07 01:47:45.411101705 +0000 UTC m=+77.724539027" observedRunningTime="2026-03-07 01:47:46.264088272 +0000 UTC m=+78.577525601" watchObservedRunningTime="2026-03-07 01:47:46.271316483 +0000 UTC m=+78.584753806" Mar 7 01:47:47.275069 containerd[1510]: time="2026-03-07T01:47:47.274886661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:47.280382 containerd[1510]: 
time="2026-03-07T01:47:47.280144256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:47:47.282558 containerd[1510]: time="2026-03-07T01:47:47.282507364Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:47.320280 containerd[1510]: time="2026-03-07T01:47:47.320190662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:47.329679 containerd[1510]: time="2026-03-07T01:47:47.329586771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.917695597s" Mar 7 01:47:47.330404 containerd[1510]: time="2026-03-07T01:47:47.329717110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:47:47.342908 containerd[1510]: time="2026-03-07T01:47:47.342839008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:47:47.370539 containerd[1510]: time="2026-03-07T01:47:47.369851479Z" level=info msg="CreateContainer within sandbox \"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:47:47.416301 containerd[1510]: time="2026-03-07T01:47:47.416038481Z" level=info msg="CreateContainer within sandbox \"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28\"" Mar 7 01:47:47.420151 containerd[1510]: time="2026-03-07T01:47:47.420115135Z" level=info msg="StartContainer for \"371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28\"" Mar 7 01:47:47.521840 systemd[1]: run-containerd-runc-k8s.io-371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28-runc.9O69o1.mount: Deactivated successfully. Mar 7 01:47:47.591033 systemd[1]: Started cri-containerd-371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28.scope - libcontainer container 371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28. Mar 7 01:47:47.598223 systemd[1]: Started sshd@10-10.230.57.158:22-4.153.228.146:40352.service - OpenSSH per-connection server daemon (4.153.228.146:40352). Mar 7 01:47:47.731937 containerd[1510]: time="2026-03-07T01:47:47.731053861Z" level=info msg="StartContainer for \"371d8c00dcd4dcd73cf3d1fde0b4bb1d251d0de49039fa6a4615a240ff9b4f28\" returns successfully" Mar 7 01:47:47.767684 containerd[1510]: time="2026-03-07T01:47:47.765369956Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:47.767684 containerd[1510]: time="2026-03-07T01:47:47.766791700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:47:47.774292 containerd[1510]: time="2026-03-07T01:47:47.774202510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 431.278574ms" Mar 7 01:47:47.774459 containerd[1510]: 
time="2026-03-07T01:47:47.774309232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:47:47.777497 containerd[1510]: time="2026-03-07T01:47:47.777412549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:47:47.784752 containerd[1510]: time="2026-03-07T01:47:47.784523298Z" level=info msg="CreateContainer within sandbox \"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:47:47.817980 kubelet[2692]: I0307 01:47:47.817341 2692 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:47:47.833620 containerd[1510]: time="2026-03-07T01:47:47.832798179Z" level=info msg="CreateContainer within sandbox \"4966d56d6fffe07f5dd9e12a42145f6ce884739cd3a61c4d3076c3f8a8c0c090\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"697d03ab9dde468a7f8b2ef3c0575a1b236126d4f92f5b770712d0df7f2890ef\"" Mar 7 01:47:47.835558 containerd[1510]: time="2026-03-07T01:47:47.834584999Z" level=info msg="StartContainer for \"697d03ab9dde468a7f8b2ef3c0575a1b236126d4f92f5b770712d0df7f2890ef\"" Mar 7 01:47:47.960900 systemd[1]: Started cri-containerd-697d03ab9dde468a7f8b2ef3c0575a1b236126d4f92f5b770712d0df7f2890ef.scope - libcontainer container 697d03ab9dde468a7f8b2ef3c0575a1b236126d4f92f5b770712d0df7f2890ef. 
Mar 7 01:47:48.186430 containerd[1510]: time="2026-03-07T01:47:48.186376802Z" level=info msg="StartContainer for \"697d03ab9dde468a7f8b2ef3c0575a1b236126d4f92f5b770712d0df7f2890ef\" returns successfully" Mar 7 01:47:48.334464 sshd[5568]: Accepted publickey for core from 4.153.228.146 port 40352 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:47:48.348429 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:47:48.378757 systemd-logind[1487]: New session 12 of user core. Mar 7 01:47:48.382990 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:47:49.728970 sshd[5568]: pam_unix(sshd:session): session closed for user core Mar 7 01:47:49.749173 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:47:49.752202 systemd[1]: sshd@10-10.230.57.158:22-4.153.228.146:40352.service: Deactivated successfully. Mar 7 01:47:49.781002 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:47:49.784473 systemd-logind[1487]: Removed session 12. 
Mar 7 01:47:51.371030 kubelet[2692]: I0307 01:47:51.370720 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-db79945d8-9m2h4" podStartSLOduration=39.336940019 podStartE2EDuration="59.322244331s" podCreationTimestamp="2026-03-07 01:46:52 +0000 UTC" firstStartedPulling="2026-03-07 01:47:27.791393671 +0000 UTC m=+60.104830986" lastFinishedPulling="2026-03-07 01:47:47.776697972 +0000 UTC m=+80.090135298" observedRunningTime="2026-03-07 01:47:48.359423572 +0000 UTC m=+80.672860900" watchObservedRunningTime="2026-03-07 01:47:51.322244331 +0000 UTC m=+83.635681652" Mar 7 01:47:51.795580 containerd[1510]: time="2026-03-07T01:47:51.794083663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:51.797715 containerd[1510]: time="2026-03-07T01:47:51.797482292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:47:51.801350 containerd[1510]: time="2026-03-07T01:47:51.801253957Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:51.809011 containerd[1510]: time="2026-03-07T01:47:51.807580158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:51.809876 containerd[1510]: time="2026-03-07T01:47:51.809823122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.032349452s" Mar 7 01:47:51.809971 containerd[1510]: time="2026-03-07T01:47:51.809904345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:47:51.824165 containerd[1510]: time="2026-03-07T01:47:51.824105006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:47:51.842981 containerd[1510]: time="2026-03-07T01:47:51.842795835Z" level=info msg="CreateContainer within sandbox \"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:47:51.889303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062895290.mount: Deactivated successfully. Mar 7 01:47:51.898730 update_engine[1488]: I20260307 01:47:51.891569 1488 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:47:51.902707 update_engine[1488]: I20260307 01:47:51.901988 1488 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:47:51.902707 update_engine[1488]: I20260307 01:47:51.902606 1488 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:47:51.903587 update_engine[1488]: E20260307 01:47:51.903442 1488 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:47:51.903587 update_engine[1488]: I20260307 01:47:51.903543 1488 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:47:51.919015 containerd[1510]: time="2026-03-07T01:47:51.918929672Z" level=info msg="CreateContainer within sandbox \"d4b661d5e89387b26efc25119c4755aab4c0c562a0007ed9d55a4f377be0eef3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e1a8160b17278d8215b524376de1c29aa13de94caeed60a230c812d66c770878\"" Mar 7 01:47:51.921147 containerd[1510]: time="2026-03-07T01:47:51.921092306Z" level=info msg="StartContainer for \"e1a8160b17278d8215b524376de1c29aa13de94caeed60a230c812d66c770878\"" Mar 7 01:47:52.081946 systemd[1]: Started cri-containerd-e1a8160b17278d8215b524376de1c29aa13de94caeed60a230c812d66c770878.scope - libcontainer container e1a8160b17278d8215b524376de1c29aa13de94caeed60a230c812d66c770878. 
Mar 7 01:47:52.179299 containerd[1510]: time="2026-03-07T01:47:52.179213039Z" level=info msg="StartContainer for \"e1a8160b17278d8215b524376de1c29aa13de94caeed60a230c812d66c770878\" returns successfully" Mar 7 01:47:52.397805 kubelet[2692]: I0307 01:47:52.397291 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-7n5xl" podStartSLOduration=32.636253684 podStartE2EDuration="58.397224245s" podCreationTimestamp="2026-03-07 01:46:54 +0000 UTC" firstStartedPulling="2026-03-07 01:47:26.062221865 +0000 UTC m=+58.375659181" lastFinishedPulling="2026-03-07 01:47:51.823192422 +0000 UTC m=+84.136629742" observedRunningTime="2026-03-07 01:47:52.395685135 +0000 UTC m=+84.709122473" watchObservedRunningTime="2026-03-07 01:47:52.397224245 +0000 UTC m=+84.710661566" Mar 7 01:47:53.531277 kubelet[2692]: I0307 01:47:53.528529 2692 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:47:53.534766 kubelet[2692]: I0307 01:47:53.533397 2692 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:47:54.867226 systemd[1]: Started sshd@11-10.230.57.158:22-4.153.228.146:56560.service - OpenSSH per-connection server daemon (4.153.228.146:56560). Mar 7 01:47:54.997797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524050609.mount: Deactivated successfully. 
Mar 7 01:47:55.077122 containerd[1510]: time="2026-03-07T01:47:55.077027688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:55.081966 containerd[1510]: time="2026-03-07T01:47:55.081880698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:47:55.083397 containerd[1510]: time="2026-03-07T01:47:55.083353809Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:55.102043 containerd[1510]: time="2026-03-07T01:47:55.101946190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.277556201s" Mar 7 01:47:55.102639 containerd[1510]: time="2026-03-07T01:47:55.102056913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:47:55.102639 containerd[1510]: time="2026-03-07T01:47:55.102312097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:47:55.118706 containerd[1510]: time="2026-03-07T01:47:55.118563498Z" level=info msg="CreateContainer within sandbox \"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:47:55.152116 
containerd[1510]: time="2026-03-07T01:47:55.151954285Z" level=info msg="CreateContainer within sandbox \"fd24bcb4ca186da5c9b006d4e58fb0dec2ee1eb53df8b66e56b859aca5b3bb17\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e5ac937bc9d1a9ec7f46592a6aa3305b96b6cb74cacdf1da926214ef4e176f54\"" Mar 7 01:47:55.153530 containerd[1510]: time="2026-03-07T01:47:55.153477038Z" level=info msg="StartContainer for \"e5ac937bc9d1a9ec7f46592a6aa3305b96b6cb74cacdf1da926214ef4e176f54\"" Mar 7 01:47:55.296882 systemd[1]: Started cri-containerd-e5ac937bc9d1a9ec7f46592a6aa3305b96b6cb74cacdf1da926214ef4e176f54.scope - libcontainer container e5ac937bc9d1a9ec7f46592a6aa3305b96b6cb74cacdf1da926214ef4e176f54. Mar 7 01:47:55.516112 containerd[1510]: time="2026-03-07T01:47:55.515872999Z" level=info msg="StartContainer for \"e5ac937bc9d1a9ec7f46592a6aa3305b96b6cb74cacdf1da926214ef4e176f54\" returns successfully" Mar 7 01:47:55.649321 sshd[5732]: Accepted publickey for core from 4.153.228.146 port 56560 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o Mar 7 01:47:55.659821 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:47:55.677723 systemd-logind[1487]: New session 13 of user core. Mar 7 01:47:55.683940 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 7 01:47:56.495735 kubelet[2692]: I0307 01:47:56.495027 2692 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-85c7754466-f58ng" podStartSLOduration=4.15214593 podStartE2EDuration="31.494956089s" podCreationTimestamp="2026-03-07 01:47:25 +0000 UTC" firstStartedPulling="2026-03-07 01:47:27.763013426 +0000 UTC m=+60.076450742" lastFinishedPulling="2026-03-07 01:47:55.105823587 +0000 UTC m=+87.419260901" observedRunningTime="2026-03-07 01:47:56.483441669 +0000 UTC m=+88.796878998" watchObservedRunningTime="2026-03-07 01:47:56.494956089 +0000 UTC m=+88.808393407" Mar 7 01:47:56.934309 sshd[5732]: pam_unix(sshd:session): session closed for user core Mar 7 01:47:56.941952 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:47:56.942526 systemd[1]: sshd@11-10.230.57.158:22-4.153.228.146:56560.service: Deactivated successfully. Mar 7 01:47:56.947399 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:47:56.951756 systemd-logind[1487]: Removed session 13. Mar 7 01:48:01.873295 update_engine[1488]: I20260307 01:48:01.872915 1488 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:48:01.874956 update_engine[1488]: I20260307 01:48:01.874076 1488 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:48:01.874956 update_engine[1488]: I20260307 01:48:01.874902 1488 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:48:01.875472 update_engine[1488]: E20260307 01:48:01.875423 1488 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:48:01.875574 update_engine[1488]: I20260307 01:48:01.875535 1488 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 01:48:02.045079 systemd[1]: Started sshd@12-10.230.57.158:22-4.153.228.146:52530.service - OpenSSH per-connection server daemon (4.153.228.146:52530). 
Mar 7 01:48:02.673721 sshd[5793]: Accepted publickey for core from 4.153.228.146 port 52530 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:02.677332 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:02.689099 systemd-logind[1487]: New session 14 of user core.
Mar 7 01:48:02.697435 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:48:03.380817 sshd[5793]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:03.390303 systemd[1]: sshd@12-10.230.57.158:22-4.153.228.146:52530.service: Deactivated successfully.
Mar 7 01:48:03.394421 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:48:03.396692 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:48:03.398618 systemd-logind[1487]: Removed session 14.
Mar 7 01:48:08.489243 systemd[1]: Started sshd@13-10.230.57.158:22-4.153.228.146:52542.service - OpenSSH per-connection server daemon (4.153.228.146:52542).
Mar 7 01:48:09.195846 sshd[5824]: Accepted publickey for core from 4.153.228.146 port 52542 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:09.204979 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:09.215097 systemd-logind[1487]: New session 15 of user core.
Mar 7 01:48:09.222953 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:48:10.004232 sshd[5824]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:10.010924 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:48:10.014279 systemd[1]: sshd@13-10.230.57.158:22-4.153.228.146:52542.service: Deactivated successfully.
Mar 7 01:48:10.018768 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:48:10.020365 systemd-logind[1487]: Removed session 15.
Mar 7 01:48:10.106801 systemd[1]: Started sshd@14-10.230.57.158:22-4.153.228.146:43638.service - OpenSSH per-connection server daemon (4.153.228.146:43638).
Mar 7 01:48:10.702159 sshd[5837]: Accepted publickey for core from 4.153.228.146 port 43638 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:10.704920 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:10.715433 systemd-logind[1487]: New session 16 of user core.
Mar 7 01:48:10.720876 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:48:11.425356 sshd[5837]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:11.432482 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:48:11.433942 systemd[1]: sshd@14-10.230.57.158:22-4.153.228.146:43638.service: Deactivated successfully.
Mar 7 01:48:11.437520 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:48:11.441641 systemd-logind[1487]: Removed session 16.
Mar 7 01:48:11.531521 systemd[1]: Started sshd@15-10.230.57.158:22-4.153.228.146:43648.service - OpenSSH per-connection server daemon (4.153.228.146:43648).
Mar 7 01:48:11.868245 update_engine[1488]: I20260307 01:48:11.868093 1488 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:48:11.870034 update_engine[1488]: I20260307 01:48:11.868909 1488 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:48:11.870034 update_engine[1488]: I20260307 01:48:11.869445 1488 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:48:11.870034 update_engine[1488]: E20260307 01:48:11.869906 1488 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:48:11.870243 update_engine[1488]: I20260307 01:48:11.870042 1488 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:48:11.875551 update_engine[1488]: I20260307 01:48:11.875433 1488 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:48:11.876104 update_engine[1488]: E20260307 01:48:11.876035 1488 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955358 1488 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955487 1488 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955505 1488 update_attempter.cc:306] Processing Done.
Mar 7 01:48:11.955603 update_engine[1488]: E20260307 01:48:11.955571 1488 update_attempter.cc:619] Update failed.
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955590 1488 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955602 1488 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 01:48:11.955603 update_engine[1488]: I20260307 01:48:11.955621 1488 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 01:48:11.957351 update_engine[1488]: I20260307 01:48:11.955833 1488 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 01:48:11.957351 update_engine[1488]: I20260307 01:48:11.955922 1488 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 01:48:11.957351 update_engine[1488]: I20260307 01:48:11.955960 1488 omaha_request_action.cc:272] Request:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]:
Mar 7 01:48:11.957351 update_engine[1488]: I20260307 01:48:11.955992 1488 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:48:11.957351 update_engine[1488]: I20260307 01:48:11.956454 1488 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:48:11.960391 update_engine[1488]: I20260307 01:48:11.957441 1488 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:48:11.960391 update_engine[1488]: E20260307 01:48:11.958180 1488 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:48:11.960391 update_engine[1488]: I20260307 01:48:11.958246 1488 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:48:11.960391 update_engine[1488]: I20260307 01:48:11.958263 1488 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:48:11.960391 update_engine[1488]: I20260307 01:48:11.958278 1488 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:48:11.960637 update_engine[1488]: I20260307 01:48:11.960453 1488 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:48:11.960637 update_engine[1488]: I20260307 01:48:11.960480 1488 update_attempter.cc:306] Processing Done.
Mar 7 01:48:11.960637 update_engine[1488]: I20260307 01:48:11.960496 1488 update_attempter.cc:310] Error event sent.
Mar 7 01:48:11.960637 update_engine[1488]: I20260307 01:48:11.960534 1488 update_check_scheduler.cc:74] Next update check in 49m41s
Mar 7 01:48:12.004783 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 01:48:12.004783 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 01:48:12.110790 sshd[5848]: Accepted publickey for core from 4.153.228.146 port 43648 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:12.114447 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:12.123785 systemd-logind[1487]: New session 17 of user core.
Mar 7 01:48:12.132396 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:48:12.972284 sshd[5848]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:12.978192 systemd[1]: sshd@15-10.230.57.158:22-4.153.228.146:43648.service: Deactivated successfully.
Mar 7 01:48:12.981519 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:48:12.983210 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:48:12.984974 systemd-logind[1487]: Removed session 17.
Mar 7 01:48:18.085140 systemd[1]: Started sshd@16-10.230.57.158:22-4.153.228.146:43652.service - OpenSSH per-connection server daemon (4.153.228.146:43652).
Mar 7 01:48:18.777770 sshd[5927]: Accepted publickey for core from 4.153.228.146 port 43652 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:18.782600 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:18.797203 systemd-logind[1487]: New session 18 of user core.
Mar 7 01:48:18.800900 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:48:19.717743 sshd[5927]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:19.725034 systemd[1]: sshd@16-10.230.57.158:22-4.153.228.146:43652.service: Deactivated successfully.
Mar 7 01:48:19.730410 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:48:19.733373 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:48:19.735381 systemd-logind[1487]: Removed session 18.
Mar 7 01:48:19.810887 systemd[1]: Started sshd@17-10.230.57.158:22-4.153.228.146:59412.service - OpenSSH per-connection server daemon (4.153.228.146:59412).
Mar 7 01:48:20.399110 sshd[5940]: Accepted publickey for core from 4.153.228.146 port 59412 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:20.403498 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:20.411833 systemd-logind[1487]: New session 19 of user core.
Mar 7 01:48:20.422948 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:48:21.339011 sshd[5940]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:21.354231 systemd[1]: sshd@17-10.230.57.158:22-4.153.228.146:59412.service: Deactivated successfully.
Mar 7 01:48:21.358569 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:48:21.359914 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:48:21.361962 systemd-logind[1487]: Removed session 19.
Mar 7 01:48:21.454814 systemd[1]: Started sshd@18-10.230.57.158:22-4.153.228.146:59420.service - OpenSSH per-connection server daemon (4.153.228.146:59420).
Mar 7 01:48:22.063619 sshd[5951]: Accepted publickey for core from 4.153.228.146 port 59420 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:22.065451 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:22.072373 systemd-logind[1487]: New session 20 of user core.
Mar 7 01:48:22.079923 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:48:23.502609 sshd[5951]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:23.513393 systemd[1]: sshd@18-10.230.57.158:22-4.153.228.146:59420.service: Deactivated successfully.
Mar 7 01:48:23.518788 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:48:23.520222 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:48:23.522413 systemd-logind[1487]: Removed session 20.
Mar 7 01:48:23.611856 systemd[1]: Started sshd@19-10.230.57.158:22-4.153.228.146:59436.service - OpenSSH per-connection server daemon (4.153.228.146:59436).
Mar 7 01:48:24.250402 sshd[5974]: Accepted publickey for core from 4.153.228.146 port 59436 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:24.259268 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:24.267182 systemd-logind[1487]: New session 21 of user core.
Mar 7 01:48:24.277890 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:48:25.449412 sshd[5974]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:25.459504 systemd[1]: sshd@19-10.230.57.158:22-4.153.228.146:59436.service: Deactivated successfully.
Mar 7 01:48:25.464425 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:48:25.465790 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:48:25.468553 systemd-logind[1487]: Removed session 21.
Mar 7 01:48:25.550896 systemd[1]: Started sshd@20-10.230.57.158:22-4.153.228.146:59438.service - OpenSSH per-connection server daemon (4.153.228.146:59438).
Mar 7 01:48:26.152101 sshd[6010]: Accepted publickey for core from 4.153.228.146 port 59438 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:26.154837 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:26.162549 systemd-logind[1487]: New session 22 of user core.
Mar 7 01:48:26.172164 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:48:26.666276 sshd[6010]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:26.675598 systemd[1]: sshd@20-10.230.57.158:22-4.153.228.146:59438.service: Deactivated successfully.
Mar 7 01:48:26.680752 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:48:26.683398 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:48:26.685907 systemd-logind[1487]: Removed session 22.
Mar 7 01:48:31.774129 systemd[1]: Started sshd@21-10.230.57.158:22-4.153.228.146:37296.service - OpenSSH per-connection server daemon (4.153.228.146:37296).
Mar 7 01:48:32.370335 sshd[6027]: Accepted publickey for core from 4.153.228.146 port 37296 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:32.373906 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:32.384763 systemd-logind[1487]: New session 23 of user core.
Mar 7 01:48:32.389910 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:48:32.919873 sshd[6027]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:32.928122 systemd[1]: sshd@21-10.230.57.158:22-4.153.228.146:37296.service: Deactivated successfully.
Mar 7 01:48:32.933255 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:48:32.934358 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:48:32.936091 systemd-logind[1487]: Removed session 23.
Mar 7 01:48:38.033075 systemd[1]: Started sshd@22-10.230.57.158:22-4.153.228.146:37302.service - OpenSSH per-connection server daemon (4.153.228.146:37302).
Mar 7 01:48:38.680294 sshd[6042]: Accepted publickey for core from 4.153.228.146 port 37302 ssh2: RSA SHA256:w+PAFQpRTfR6j7PgyvIPDhtwv6iHTxc5+0N6WatIk5o
Mar 7 01:48:38.686909 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:48:38.696684 systemd-logind[1487]: New session 24 of user core.
Mar 7 01:48:38.702900 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:48:39.401180 sshd[6042]: pam_unix(sshd:session): session closed for user core
Mar 7 01:48:39.414252 systemd[1]: sshd@22-10.230.57.158:22-4.153.228.146:37302.service: Deactivated successfully.
Mar 7 01:48:39.418455 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:48:39.420944 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:48:39.423248 systemd-logind[1487]: Removed session 24.