Apr 16 23:53:27.900627 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 16 23:53:27.900643 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:53:27.900650 kernel: BIOS-provided physical RAM map:
Apr 16 23:53:27.900655 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:53:27.900662 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 16 23:53:27.900666 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:53:27.900672 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:53:27.900676 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:53:27.900681 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:53:27.900686 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:53:27.900691 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:53:27.900695 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:53:27.900700 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:53:27.900706 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:53:27.900712 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:53:27.900717 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:53:27.900722 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:53:27.900726 kernel: NX (Execute Disable) protection: active
Apr 16 23:53:27.900733 kernel: APIC: Static calls initialized
Apr 16 23:53:27.900738 kernel: e820: update [mem 0x7dfab018-0x7dfb4a57] usable ==> usable
Apr 16 23:53:27.900743 kernel: e820: update [mem 0x7df6f018-0x7dfaa657] usable ==> usable
Apr 16 23:53:27.900748 kernel: e820: update [mem 0x7dc01018-0x7dc3c657] usable ==> usable
Apr 16 23:53:27.900753 kernel: extended physical RAM map:
Apr 16 23:53:27.900758 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:53:27.900763 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007dc01017] usable
Apr 16 23:53:27.900767 kernel: reserve setup_data: [mem 0x000000007dc01018-0x000000007dc3c657] usable
Apr 16 23:53:27.900772 kernel: reserve setup_data: [mem 0x000000007dc3c658-0x000000007df6f017] usable
Apr 16 23:53:27.900777 kernel: reserve setup_data: [mem 0x000000007df6f018-0x000000007dfaa657] usable
Apr 16 23:53:27.900794 kernel: reserve setup_data: [mem 0x000000007dfaa658-0x000000007dfab017] usable
Apr 16 23:53:27.900801 kernel: reserve setup_data: [mem 0x000000007dfab018-0x000000007dfb4a57] usable
Apr 16 23:53:27.900806 kernel: reserve setup_data: [mem 0x000000007dfb4a58-0x000000007ed3efff] usable
Apr 16 23:53:27.900811 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:53:27.900815 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:53:27.900820 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:53:27.900825 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:53:27.900830 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:53:27.900835 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:53:27.900840 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:53:27.900845 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:53:27.900850 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:53:27.900859 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:53:27.900864 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:53:27.900869 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:53:27.900874 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 23:53:27.900880 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198 RNG=0x7fb73018
Apr 16 23:53:27.900887 kernel: random: crng init done
Apr 16 23:53:27.900892 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 16 23:53:27.900897 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 16 23:53:27.900902 kernel: secureboot: Secure boot disabled
Apr 16 23:53:27.900907 kernel: SMBIOS 3.0.0 present.
Apr 16 23:53:27.900912 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 16 23:53:27.900917 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:53:27.900922 kernel: Hypervisor detected: KVM
Apr 16 23:53:27.900927 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:53:27.900932 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 23:53:27.900937 kernel: kvm-clock: using sched offset of 13471601461 cycles
Apr 16 23:53:27.900944 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 23:53:27.900950 kernel: tsc: Detected 2400.000 MHz processor
Apr 16 23:53:27.900955 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 23:53:27.900961 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 23:53:27.900966 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 16 23:53:27.900971 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 23:53:27.900976 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 23:53:27.900982 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:53:27.900987 kernel: Using GB pages for direct mapping
Apr 16 23:53:27.900994 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:53:27.900999 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 16 23:53:27.901004 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 23:53:27.901010 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901015 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901020 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 16 23:53:27.901025 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901030 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901035 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901042 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:53:27.901048 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 23:53:27.901053 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 16 23:53:27.901058 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 16 23:53:27.901063 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 16 23:53:27.901068 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 16 23:53:27.901073 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 16 23:53:27.901078 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 16 23:53:27.901083 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 16 23:53:27.901091 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 16 23:53:27.901096 kernel: No NUMA configuration found
Apr 16 23:53:27.901101 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 16 23:53:27.901106 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Apr 16 23:53:27.901111 kernel: Zone ranges:
Apr 16 23:53:27.901117 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 23:53:27.901122 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 16 23:53:27.901127 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:53:27.901132 kernel: Device empty
Apr 16 23:53:27.901139 kernel: Movable zone start for each node
Apr 16 23:53:27.901144 kernel: Early memory node ranges
Apr 16 23:53:27.901149 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 23:53:27.901155 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 16 23:53:27.901160 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 16 23:53:27.901165 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 16 23:53:27.901170 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:53:27.901175 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 16 23:53:27.901180 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 23:53:27.901185 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 23:53:27.901193 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 16 23:53:27.901198 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 16 23:53:27.901203 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 16 23:53:27.901208 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 23:53:27.901213 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 23:53:27.901218 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 23:53:27.901224 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 23:53:27.901229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 23:53:27.901234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 23:53:27.901241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 23:53:27.901246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 23:53:27.901251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 23:53:27.901257 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 23:53:27.901262 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 23:53:27.901267 kernel: CPU topo: Max. logical packages: 1
Apr 16 23:53:27.901272 kernel: CPU topo: Max. logical dies: 1
Apr 16 23:53:27.901286 kernel: CPU topo: Max. dies per package: 1
Apr 16 23:53:27.901291 kernel: CPU topo: Max. threads per core: 1
Apr 16 23:53:27.901296 kernel: CPU topo: Num. cores per package: 2
Apr 16 23:53:27.901302 kernel: CPU topo: Num. threads per package: 2
Apr 16 23:53:27.901314 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 16 23:53:27.901322 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 23:53:27.901327 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 16 23:53:27.901332 kernel: Booting paravirtualized kernel on KVM
Apr 16 23:53:27.901338 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 23:53:27.901344 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 16 23:53:27.901351 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 16 23:53:27.901356 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 16 23:53:27.901362 kernel: pcpu-alloc: [0] 0 1
Apr 16 23:53:27.901367 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 16 23:53:27.901373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:53:27.901379 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:53:27.901384 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:53:27.901389 kernel: Fallback order for Node 0: 0
Apr 16 23:53:27.901397 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Apr 16 23:53:27.901402 kernel: Policy zone: Normal
Apr 16 23:53:27.901408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:53:27.901413 kernel: software IO TLB: area num 2.
Apr 16 23:53:27.901418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:53:27.901424 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 23:53:27.901429 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 23:53:27.901435 kernel: Dynamic Preempt: voluntary
Apr 16 23:53:27.901440 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:53:27.901448 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:53:27.901454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:53:27.901459 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:53:27.901465 kernel: Rude variant of Tasks RCU enabled.
Apr 16 23:53:27.901470 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:53:27.901476 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:53:27.901481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:53:27.901486 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:53:27.901492 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:53:27.901499 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:53:27.901505 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 16 23:53:27.901510 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:53:27.901515 kernel: Console: colour dummy device 80x25
Apr 16 23:53:27.901521 kernel: printk: legacy console [tty0] enabled
Apr 16 23:53:27.901526 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:53:27.901531 kernel: ACPI: Core revision 20240827
Apr 16 23:53:27.901537 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 23:53:27.901542 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 23:53:27.901550 kernel: x2apic enabled
Apr 16 23:53:27.901555 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 23:53:27.901560 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 23:53:27.901566 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns
Apr 16 23:53:27.901571 kernel: Calibrating delay loop (skipped) preset value.. 4800.00 BogoMIPS (lpj=2400000)
Apr 16 23:53:27.901577 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 23:53:27.901582 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 16 23:53:27.901588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 16 23:53:27.901593 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 23:53:27.901601 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 16 23:53:27.901606 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 16 23:53:27.901611 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 16 23:53:27.901617 kernel: active return thunk: srso_alias_return_thunk
Apr 16 23:53:27.901622 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 16 23:53:27.901628 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 16 23:53:27.901633 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 23:53:27.901639 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 23:53:27.901644 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 23:53:27.901652 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 23:53:27.901657 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 23:53:27.901662 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 23:53:27.901668 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 23:53:27.901673 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 16 23:53:27.901679 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 23:53:27.901684 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 23:53:27.901690 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 23:53:27.901695 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 23:53:27.901702 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 16 23:53:27.901708 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 16 23:53:27.901713 kernel: Freeing SMP alternatives memory: 32K
Apr 16 23:53:27.901719 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:53:27.901724 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:53:27.901729 kernel: landlock: Up and running.
Apr 16 23:53:27.901735 kernel: SELinux: Initializing.
Apr 16 23:53:27.901740 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:53:27.901746 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:53:27.901753 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 16 23:53:27.901758 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 16 23:53:27.901764 kernel: ... version: 0
Apr 16 23:53:27.901769 kernel: ... bit width: 48
Apr 16 23:53:27.901775 kernel: ... generic registers: 6
Apr 16 23:53:27.901780 kernel: ... value mask: 0000ffffffffffff
Apr 16 23:53:27.902361 kernel: ... max period: 00007fffffffffff
Apr 16 23:53:27.902367 kernel: ... fixed-purpose events: 0
Apr 16 23:53:27.902373 kernel: ... event mask: 000000000000003f
Apr 16 23:53:27.902381 kernel: signal: max sigframe size: 3376
Apr 16 23:53:27.902386 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:53:27.902392 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:53:27.902397 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:53:27.902403 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:53:27.902408 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 23:53:27.902413 kernel: .... node #0, CPUs: #1
Apr 16 23:53:27.902419 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:53:27.902424 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS)
Apr 16 23:53:27.902432 kernel: Memory: 3813628K/4091168K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 271900K reserved, 0K cma-reserved)
Apr 16 23:53:27.902438 kernel: devtmpfs: initialized
Apr 16 23:53:27.902443 kernel: x86/mm: Memory block size: 128MB
Apr 16 23:53:27.902449 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 16 23:53:27.902454 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:53:27.902460 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:53:27.902465 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:53:27.902470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:53:27.902476 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:53:27.902483 kernel: audit: type=2000 audit(1776383605.199:1): state=initialized audit_enabled=0 res=1
Apr 16 23:53:27.902488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:53:27.902494 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 23:53:27.902499 kernel: cpuidle: using governor menu
Apr 16 23:53:27.902505 kernel: efi: Freeing EFI boot services memory: 34884K
Apr 16 23:53:27.902510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:53:27.902515 kernel: dca service started, version 1.12.1
Apr 16 23:53:27.902521 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 16 23:53:27.902526 kernel: PCI: Using configuration type 1 for base access
Apr 16 23:53:27.902534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 23:53:27.902539 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:53:27.902545 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:53:27.902550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:53:27.902555 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:53:27.902561 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:53:27.902567 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:53:27.902572 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:53:27.902577 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:53:27.902585 kernel: ACPI: Interpreter enabled
Apr 16 23:53:27.902590 kernel: ACPI: PM: (supports S0 S5)
Apr 16 23:53:27.902596 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 23:53:27.902601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 23:53:27.902606 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 23:53:27.902612 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 23:53:27.902617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:53:27.902767 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:53:27.902896 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 23:53:27.902995 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 23:53:27.903001 kernel: PCI host bridge to bus 0000:00
Apr 16 23:53:27.903101 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 23:53:27.903190 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 23:53:27.903280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 23:53:27.903377 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 16 23:53:27.903467 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 16 23:53:27.903554 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 16 23:53:27.903658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:53:27.903813 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:53:27.903927 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Apr 16 23:53:27.904025 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Apr 16 23:53:27.904124 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 16 23:53:27.904220 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Apr 16 23:53:27.904323 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 16 23:53:27.904419 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 23:53:27.904521 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.904617 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Apr 16 23:53:27.904712 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:53:27.904821 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 16 23:53:27.904917 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:53:27.905021 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.905116 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Apr 16 23:53:27.905210 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:53:27.905305 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 16 23:53:27.905413 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.905511 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Apr 16 23:53:27.905605 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:53:27.905700 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 16 23:53:27.905815 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:53:27.905919 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.906015 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Apr 16 23:53:27.906110 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:53:27.906209 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:53:27.906317 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.906413 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Apr 16 23:53:27.906509 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:53:27.906604 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 16 23:53:27.906698 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:53:27.906815 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.906914 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Apr 16 23:53:27.907009 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:53:27.907104 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 16 23:53:27.907200 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:53:27.907301 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.907404 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Apr 16 23:53:27.907500 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:53:27.907597 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 16 23:53:27.907692 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:53:27.907818 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.907916 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Apr 16 23:53:27.908011 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:53:27.908107 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 16 23:53:27.908204 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:53:27.908305 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:53:27.908408 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Apr 16 23:53:27.908515 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:53:27.908623 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 16 23:53:27.908735 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:53:27.908861 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 23:53:27.908962 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 23:53:27.909064 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 23:53:27.909159 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Apr 16 23:53:27.909253 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Apr 16 23:53:27.909360 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 23:53:27.909456 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Apr 16 23:53:27.909562 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:53:27.909664 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Apr 16 23:53:27.909819 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 16 23:53:27.909921 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:53:27.910047 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:53:27.910154 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 23:53:27.910253 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Apr 16 23:53:27.910361 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:53:27.910469 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 16 23:53:27.910569 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Apr 16 23:53:27.910668 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 16 23:53:27.910763 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:53:27.910884 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:53:27.910987 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 16 23:53:27.911086 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:53:27.911192 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:53:27.911291 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Apr 16 23:53:27.911397 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 16 23:53:27.911492 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:53:27.911598 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 16 23:53:27.911697 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Apr 16 23:53:27.911811 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 16 23:53:27.911907 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:53:27.911914 kernel: acpiphp: Slot [0] registered
Apr 16 23:53:27.912021 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:53:27.912121 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Apr 16 23:53:27.912221 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 16 23:53:27.912327 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:53:27.912426 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:53:27.912433 kernel: acpiphp: Slot [0-2] registered
Apr 16 23:53:27.912526 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:53:27.912534 kernel: acpiphp: Slot [0-3] registered
Apr 16 23:53:27.912627 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:53:27.912653 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 23:53:27.912659 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 23:53:27.912665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 23:53:27.912673 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 23:53:27.912678 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 23:53:27.912684 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 23:53:27.912689 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 23:53:27.912695 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 23:53:27.912701 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 23:53:27.912706 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 23:53:27.912712 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 23:53:27.912717 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 23:53:27.912726 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 23:53:27.912731 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 23:53:27.912739 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 23:53:27.912745 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 23:53:27.912750 kernel: iommu: Default domain type: Translated
Apr 16 23:53:27.912756 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 23:53:27.912764 kernel: efivars: Registered efivars operations
Apr 16 23:53:27.912769 kernel: PCI: Using ACPI for IRQ routing
Apr 16 23:53:27.912775 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 23:53:27.912781 kernel: e820: reserve RAM buffer [mem 0x7dc01018-0x7fffffff]
Apr 16 23:53:27.912797 kernel: e820: reserve RAM buffer [mem 0x7df6f018-0x7fffffff]
Apr 16 23:53:27.912802 kernel: e820: reserve RAM buffer [mem 0x7dfab018-0x7fffffff]
Apr 16 23:53:27.912808 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 16 23:53:27.912813 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 16 23:53:27.912819 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 16 23:53:27.912827 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 16 23:53:27.912923 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 23:53:27.913018 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 23:53:27.913113 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 23:53:27.913120 kernel: vgaarb: loaded
Apr 16 23:53:27.913125 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 23:53:27.913131 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 23:53:27.913137 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 23:53:27.913145 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:53:27.913150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:53:27.913156 kernel: pnp: PnP ACPI init
Apr 16 23:53:27.913258 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 16 23:53:27.913266 kernel: pnp: PnP ACPI: found 5 devices
Apr 16 23:53:27.913275 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 23:53:27.913281 kernel: NET: Registered PF_INET protocol family
Apr 16 23:53:27.913286 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:53:27.913295 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:53:27.913300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:53:27.913313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:53:27.913319 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:53:27.913325 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:53:27.913330 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:53:27.913336 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:53:27.913342 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:53:27.913347 kernel: NET: Registered PF_XDP protocol family
Apr 16 23:53:27.913464 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:53:27.913567 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:53:27.913663 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 23:53:27.913759 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 23:53:27.913867 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 23:53:27.913964 kernel: pci 0000:00:02.6: bridge window [io 
0x1000-0x1fff]: assigned Apr 16 23:53:27.914059 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Apr 16 23:53:27.914367 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Apr 16 23:53:27.914715 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned Apr 16 23:53:27.914868 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 16 23:53:27.915476 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Apr 16 23:53:27.915609 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 16 23:53:27.915713 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 16 23:53:27.915872 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Apr 16 23:53:27.915974 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 16 23:53:27.916072 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Apr 16 23:53:27.916168 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 16 23:53:27.916272 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 16 23:53:27.916378 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 16 23:53:27.916475 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 16 23:53:27.916571 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Apr 16 23:53:27.916666 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 16 23:53:27.916764 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 16 23:53:27.921720 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Apr 16 23:53:27.921851 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 16 23:53:27.921957 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned Apr 16 23:53:27.922059 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 16 23:53:27.922155 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Apr 16 23:53:27.922252 
kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Apr 16 23:53:27.922366 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 16 23:53:27.922471 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 16 23:53:27.922568 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Apr 16 23:53:27.922664 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Apr 16 23:53:27.922759 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 16 23:53:27.922868 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 16 23:53:27.922964 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Apr 16 23:53:27.923059 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Apr 16 23:53:27.923153 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 16 23:53:27.923246 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 16 23:53:27.923341 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 16 23:53:27.923434 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 16 23:53:27.923524 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 16 23:53:27.923614 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 16 23:53:27.923717 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 16 23:53:27.924910 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 16 23:53:27.925030 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 16 23:53:27.925140 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 16 23:53:27.925246 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 16 23:53:27.925365 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 16 23:53:27.925475 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 16 
23:53:27.925582 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 16 23:53:27.925676 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 16 23:53:27.925777 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 16 23:53:27.925937 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 16 23:53:27.926041 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 16 23:53:27.926138 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 16 23:53:27.926235 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 16 23:53:27.926362 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 16 23:53:27.926458 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 16 23:53:27.926556 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 16 23:53:27.926658 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Apr 16 23:53:27.926753 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 16 23:53:27.926922 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 16 23:53:27.926932 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 16 23:53:27.926939 kernel: PCI: CLS 0 bytes, default 64 Apr 16 23:53:27.926945 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 16 23:53:27.926951 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 16 23:53:27.926960 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns Apr 16 23:53:27.926966 kernel: Initialise system trusted keyrings Apr 16 23:53:27.926973 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 23:53:27.926979 kernel: Key type asymmetric registered Apr 16 23:53:27.926985 kernel: Asymmetric key parser 'x509' registered Apr 16 23:53:27.926991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Apr 16 23:53:27.926996 kernel: io scheduler mq-deadline registered Apr 16 23:53:27.927002 kernel: io scheduler kyber registered Apr 16 23:53:27.927008 kernel: io scheduler bfq registered Apr 16 23:53:27.927115 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 16 23:53:27.927215 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 16 23:53:27.927332 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 16 23:53:27.927430 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 16 23:53:27.927528 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 16 23:53:27.927624 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 16 23:53:27.927721 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 16 23:53:27.927832 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 16 23:53:27.927935 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 16 23:53:27.928031 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 16 23:53:27.928127 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 16 23:53:27.928223 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 16 23:53:27.928340 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 16 23:53:27.928439 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 16 23:53:27.928537 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 16 23:53:27.928636 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 16 23:53:27.928644 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 23:53:27.928743 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 16 23:53:27.928866 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 16 23:53:27.928874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 16 23:53:27.928880 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 16 23:53:27.928886 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 23:53:27.928895 kernel: 00:00: ttyS0 at I/O 0x3f8 
(irq = 4, base_baud = 115200) is a 16550A Apr 16 23:53:27.928901 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 16 23:53:27.928907 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 23:53:27.928913 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 16 23:53:27.929016 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 16 23:53:27.929024 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 23:53:27.929115 kernel: rtc_cmos 00:03: registered as rtc0 Apr 16 23:53:27.929207 kernel: rtc_cmos 00:03: setting system clock to 2026-04-16T23:53:27 UTC (1776383607) Apr 16 23:53:27.929316 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 16 23:53:27.929324 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option. Apr 16 23:53:27.929331 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 16 23:53:27.929337 kernel: efifb: probing for efifb Apr 16 23:53:27.929343 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Apr 16 23:53:27.929349 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 16 23:53:27.929355 kernel: efifb: scrolling: redraw Apr 16 23:53:27.929362 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 16 23:53:27.929367 kernel: Console: switching to colour frame buffer device 160x50 Apr 16 23:53:27.929376 kernel: fb0: EFI VGA frame buffer device Apr 16 23:53:27.929382 kernel: pstore: Using crash dump compression: deflate Apr 16 23:53:27.929388 kernel: pstore: Registered efi_pstore as persistent store backend Apr 16 23:53:27.929394 kernel: NET: Registered PF_INET6 protocol family Apr 16 23:53:27.929400 kernel: Segment Routing with IPv6 Apr 16 23:53:27.929406 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 23:53:27.929412 kernel: NET: Registered PF_PACKET protocol family Apr 16 23:53:27.929418 kernel: Key type 
dns_resolver registered Apr 16 23:53:27.929424 kernel: IPI shorthand broadcast: enabled Apr 16 23:53:27.929432 kernel: sched_clock: Marking stable (2738013809, 274199124)->(3160195553, -147982620) Apr 16 23:53:27.929438 kernel: registered taskstats version 1 Apr 16 23:53:27.929444 kernel: Loading compiled-in X.509 certificates Apr 16 23:53:27.929450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83' Apr 16 23:53:27.929456 kernel: Demotion targets for Node 0: null Apr 16 23:53:27.929462 kernel: Key type .fscrypt registered Apr 16 23:53:27.929468 kernel: Key type fscrypt-provisioning registered Apr 16 23:53:27.929474 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 23:53:27.929482 kernel: ima: Allocated hash algorithm: sha1 Apr 16 23:53:27.929488 kernel: ima: No architecture policies found Apr 16 23:53:27.929493 kernel: clk: Disabling unused clocks Apr 16 23:53:27.929499 kernel: Warning: unable to open an initial console. Apr 16 23:53:27.929505 kernel: Freeing unused kernel image (initmem) memory: 46216K Apr 16 23:53:27.929511 kernel: Write protecting the kernel read-only data: 40960k Apr 16 23:53:27.929517 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 16 23:53:27.929523 kernel: Run /init as init process Apr 16 23:53:27.929528 kernel: with arguments: Apr 16 23:53:27.929537 kernel: /init Apr 16 23:53:27.929543 kernel: with environment: Apr 16 23:53:27.929548 kernel: HOME=/ Apr 16 23:53:27.929554 kernel: TERM=linux Apr 16 23:53:27.929561 systemd[1]: Successfully made /usr/ read-only. 
Apr 16 23:53:27.929570 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 23:53:27.929577 systemd[1]: Detected virtualization kvm. Apr 16 23:53:27.929583 systemd[1]: Detected architecture x86-64. Apr 16 23:53:27.929592 systemd[1]: Running in initrd. Apr 16 23:53:27.929598 systemd[1]: No hostname configured, using default hostname. Apr 16 23:53:27.929604 systemd[1]: Hostname set to . Apr 16 23:53:27.929610 systemd[1]: Initializing machine ID from VM UUID. Apr 16 23:53:27.929616 systemd[1]: Queued start job for default target initrd.target. Apr 16 23:53:27.929622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:53:27.929628 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 23:53:27.929635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 23:53:27.929643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 23:53:27.929650 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 23:53:27.929656 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 23:53:27.929664 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 23:53:27.929670 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 23:53:27.929676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 16 23:53:27.929682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:53:27.929691 systemd[1]: Reached target paths.target - Path Units. Apr 16 23:53:27.929697 systemd[1]: Reached target slices.target - Slice Units. Apr 16 23:53:27.929704 systemd[1]: Reached target swap.target - Swaps. Apr 16 23:53:27.929710 systemd[1]: Reached target timers.target - Timer Units. Apr 16 23:53:27.929716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 23:53:27.929722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 23:53:27.929728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 23:53:27.929735 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 23:53:27.929743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 23:53:27.929750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 23:53:27.929756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:53:27.929762 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 23:53:27.929768 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 23:53:27.929774 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 23:53:27.929780 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 23:53:27.929798 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 23:53:27.929805 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 23:53:27.929813 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 23:53:27.929820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 16 23:53:27.929826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 23:53:27.929832 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 23:53:27.929839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 23:53:27.929847 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 23:53:27.929853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 23:53:27.929882 systemd-journald[197]: Collecting audit messages is disabled. Apr 16 23:53:27.929901 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 23:53:27.929907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:53:27.929914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 23:53:27.929920 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 23:53:27.929926 kernel: Bridge firewalling registered Apr 16 23:53:27.929932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 23:53:27.929940 systemd-journald[197]: Journal started Apr 16 23:53:27.929956 systemd-journald[197]: Runtime Journal (/run/log/journal/40935e4284824813967529263e0333a7) is 8M, max 76.1M, 68.1M free. Apr 16 23:53:27.885711 systemd-modules-load[198]: Inserted module 'overlay' Apr 16 23:53:27.934824 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 23:53:27.924146 systemd-modules-load[198]: Inserted module 'br_netfilter' Apr 16 23:53:27.941135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 23:53:27.945890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 16 23:53:27.948000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 23:53:27.948634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 23:53:27.960030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:53:27.962891 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 23:53:27.969002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:53:27.970989 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 23:53:27.976714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 23:53:27.978286 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 16 23:53:27.978889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 23:53:28.013715 systemd-resolved[247]: Positive Trust Anchors: Apr 16 23:53:28.014335 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 23:53:28.014728 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 23:53:28.018375 systemd-resolved[247]: Defaulting to hostname 'linux'. Apr 16 23:53:28.019290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 23:53:28.019730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:53:28.055832 kernel: SCSI subsystem initialized Apr 16 23:53:28.062813 kernel: Loading iSCSI transport class v2.0-870. Apr 16 23:53:28.071821 kernel: iscsi: registered transport (tcp) Apr 16 23:53:28.090134 kernel: iscsi: registered transport (qla4xxx) Apr 16 23:53:28.090182 kernel: QLogic iSCSI HBA Driver Apr 16 23:53:28.107080 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 23:53:28.121721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:53:28.123902 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 23:53:28.159938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 23:53:28.161409 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 16 23:53:28.202813 kernel: raid6: avx512x4 gen() 49335 MB/s Apr 16 23:53:28.220804 kernel: raid6: avx512x2 gen() 52744 MB/s Apr 16 23:53:28.238813 kernel: raid6: avx512x1 gen() 49062 MB/s Apr 16 23:53:28.257815 kernel: raid6: avx2x4 gen() 45810 MB/s Apr 16 23:53:28.275812 kernel: raid6: avx2x2 gen() 54139 MB/s Apr 16 23:53:28.294903 kernel: raid6: avx2x1 gen() 45041 MB/s Apr 16 23:53:28.294954 kernel: raid6: using algorithm avx2x2 gen() 54139 MB/s Apr 16 23:53:28.314953 kernel: raid6: .... xor() 35903 MB/s, rmw enabled Apr 16 23:53:28.315024 kernel: raid6: using avx512x2 recovery algorithm Apr 16 23:53:28.355811 kernel: xor: automatically using best checksumming function avx Apr 16 23:53:28.463821 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 23:53:28.471964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 23:53:28.476354 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:53:28.500636 systemd-udevd[445]: Using default interface naming scheme 'v255'. Apr 16 23:53:28.505908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:53:28.507858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 23:53:28.543547 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Apr 16 23:53:28.576719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 23:53:28.579693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 23:53:28.653881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:53:28.658077 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 16 23:53:28.732001 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Apr 16 23:53:28.732203 kernel: ACPI: bus type USB registered Apr 16 23:53:28.740811 kernel: scsi host0: Virtio SCSI HBA Apr 16 23:53:28.748369 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 16 23:53:28.748424 kernel: usbcore: registered new interface driver usbfs Apr 16 23:53:28.771723 kernel: usbcore: registered new interface driver hub Apr 16 23:53:28.771801 kernel: usbcore: registered new device driver usb Apr 16 23:53:28.783805 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 23:53:28.801808 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 16 23:53:28.812823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:53:28.812953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:53:28.814051 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 23:53:28.816995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 23:53:28.823827 kernel: AES CTR mode by8 optimization enabled Apr 16 23:53:28.826631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:53:28.835933 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 16 23:53:28.836127 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 16 23:53:28.836254 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 16 23:53:28.836390 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 16 23:53:28.836513 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 16 23:53:28.836634 kernel: libata version 3.00 loaded. Apr 16 23:53:28.827296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:53:28.843950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 16 23:53:28.875092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 23:53:28.875111 kernel: GPT:17805311 != 160006143 Apr 16 23:53:28.875120 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 23:53:28.875128 kernel: GPT:17805311 != 160006143 Apr 16 23:53:28.875136 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 23:53:28.875148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 23:53:28.875157 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 16 23:53:28.875336 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 23:53:28.875469 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 16 23:53:28.875586 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 16 23:53:28.868323 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:53:28.880616 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 23:53:28.881288 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 16 23:53:28.881427 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 16 23:53:28.882946 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 23:53:28.884637 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 23:53:28.889896 kernel: hub 1-0:1.0: USB hub found Apr 16 23:53:28.890057 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 23:53:28.893336 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 23:53:28.893490 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 23:53:28.894819 kernel: hub 1-0:1.0: 4 ports detected Apr 16 23:53:28.894997 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 16 23:53:28.897814 kernel: hub 2-0:1.0: USB hub found Apr 16 23:53:28.897969 kernel: scsi host1: ahci Apr 16 23:53:28.898096 kernel: hub 2-0:1.0: 4 ports detected Apr 16 23:53:28.901816 kernel: scsi host2: ahci Apr 16 23:53:28.906973 kernel: scsi host3: ahci Apr 16 23:53:28.913806 kernel: scsi host4: ahci Apr 16 23:53:28.914209 kernel: scsi host5: ahci Apr 16 23:53:28.918803 kernel: scsi host6: ahci Apr 16 23:53:28.932623 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51 lpm-pol 1 Apr 16 23:53:28.932653 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51 lpm-pol 1 Apr 16 23:53:28.932663 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51 lpm-pol 1 Apr 16 23:53:28.932672 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51 lpm-pol 1 Apr 16 23:53:28.932680 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51 lpm-pol 1 Apr 16 23:53:28.932688 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51 lpm-pol 1 Apr 16 23:53:28.954662 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 16 23:53:28.956031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:53:28.973639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 16 23:53:28.980055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 16 23:53:28.980643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 16 23:53:28.987316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 16 23:53:28.989290 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 23:53:29.003240 disk-uuid[643]: Primary Header is updated. Apr 16 23:53:29.003240 disk-uuid[643]: Secondary Entries is updated. 
Apr 16 23:53:29.003240 disk-uuid[643]: Secondary Header is updated.
Apr 16 23:53:29.015812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:53:29.029830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:53:29.135808 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 16 23:53:29.246865 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 23:53:29.251872 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 23:53:29.257865 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 23:53:29.262848 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 16 23:53:29.262911 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 23:53:29.272632 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 23:53:29.284071 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:53:29.284117 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 23:53:29.284135 kernel: ata1.00: applying bridge limits
Apr 16 23:53:29.290244 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:53:29.290286 kernel: ata1.00: configured for UDMA/100
Apr 16 23:53:29.294851 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:53:29.294914 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 23:53:29.328885 kernel: usbcore: registered new interface driver usbhid
Apr 16 23:53:29.328945 kernel: usbhid: USB HID core driver
Apr 16 23:53:29.341931 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4
Apr 16 23:53:29.341980 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 16 23:53:29.355693 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 23:53:29.355930 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 23:53:29.367807 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 16 23:53:29.635496 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:53:29.638069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:53:29.639144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:53:29.640254 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:53:29.642716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:53:29.671449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:53:30.034862 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:53:30.036955 disk-uuid[644]: The operation has completed successfully.
Apr 16 23:53:30.118115 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:53:30.118222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:53:30.137286 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:53:30.151582 sh[677]: Success
Apr 16 23:53:30.173812 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:53:30.173855 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:53:30.177827 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:53:30.187818 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 16 23:53:30.226060 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:53:30.228064 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:53:30.236089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:53:30.245809 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (689)
Apr 16 23:53:30.251120 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 16 23:53:30.251140 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:53:30.261218 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 16 23:53:30.261243 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:53:30.261253 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:53:30.264598 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:53:30.265311 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:53:30.266049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:53:30.267886 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:53:30.268748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:53:30.302822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (724)
Apr 16 23:53:30.305821 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:53:30.309216 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:53:30.315221 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:53:30.315242 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:53:30.315256 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:53:30.323805 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:53:30.324524 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:53:30.326423 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:53:30.399334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:53:30.402883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:53:30.427195 ignition[789]: Ignition 2.22.0
Apr 16 23:53:30.427768 ignition[789]: Stage: fetch-offline
Apr 16 23:53:30.428154 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:30.428163 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:30.428227 ignition[789]: parsed url from cmdline: ""
Apr 16 23:53:30.428230 ignition[789]: no config URL provided
Apr 16 23:53:30.428235 ignition[789]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:53:30.428241 ignition[789]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:53:30.428246 ignition[789]: failed to fetch config: resource requires networking
Apr 16 23:53:30.428780 ignition[789]: Ignition finished successfully
Apr 16 23:53:30.432569 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:53:30.436776 systemd-networkd[865]: lo: Link UP
Apr 16 23:53:30.436800 systemd-networkd[865]: lo: Gained carrier
Apr 16 23:53:30.439032 systemd-networkd[865]: Enumeration completed
Apr 16 23:53:30.439092 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:53:30.439708 systemd-networkd[865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:53:30.439712 systemd-networkd[865]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:53:30.440008 systemd[1]: Reached target network.target - Network.
Apr 16 23:53:30.441554 systemd-networkd[865]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:53:30.441558 systemd-networkd[865]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:53:30.442055 systemd-networkd[865]: eth0: Link UP
Apr 16 23:53:30.442246 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:53:30.442661 systemd-networkd[865]: eth1: Link UP
Apr 16 23:53:30.442848 systemd-networkd[865]: eth0: Gained carrier
Apr 16 23:53:30.442857 systemd-networkd[865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:53:30.447111 systemd-networkd[865]: eth1: Gained carrier
Apr 16 23:53:30.447119 systemd-networkd[865]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:53:30.462028 ignition[869]: Ignition 2.22.0
Apr 16 23:53:30.462036 ignition[869]: Stage: fetch
Apr 16 23:53:30.462123 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:30.462131 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:30.462186 ignition[869]: parsed url from cmdline: ""
Apr 16 23:53:30.462190 ignition[869]: no config URL provided
Apr 16 23:53:30.462194 ignition[869]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:53:30.462200 ignition[869]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:53:30.462220 ignition[869]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 16 23:53:30.463041 ignition[869]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 23:53:30.480828 systemd-networkd[865]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 23:53:30.501817 systemd-networkd[865]: eth0: DHCPv4 address 77.42.47.3/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 23:53:30.663571 ignition[869]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 16 23:53:30.673131 ignition[869]: GET result: OK
Apr 16 23:53:30.673261 ignition[869]: parsing config with SHA512: cb99b6e2f761cbcf35d4fe01a38faef85479fe6ddd11e3d9f76062ffe349aabde046bb054de8d16e14ac4b0643913161d5931c9f7c7bdc007b5fea1be7505c2a
Apr 16 23:53:30.678961 unknown[869]: fetched base config from "system"
Apr 16 23:53:30.678981 unknown[869]: fetched base config from "system"
Apr 16 23:53:30.679441 ignition[869]: fetch: fetch complete
Apr 16 23:53:30.679004 unknown[869]: fetched user config from "hetzner"
Apr 16 23:53:30.679453 ignition[869]: fetch: fetch passed
Apr 16 23:53:30.679531 ignition[869]: Ignition finished successfully
Apr 16 23:53:30.686174 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:53:30.689097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:53:30.742634 ignition[876]: Ignition 2.22.0
Apr 16 23:53:30.742661 ignition[876]: Stage: kargs
Apr 16 23:53:30.742951 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:30.742980 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:30.744200 ignition[876]: kargs: kargs passed
Apr 16 23:53:30.744281 ignition[876]: Ignition finished successfully
Apr 16 23:53:30.747828 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:53:30.751999 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:53:30.791831 ignition[884]: Ignition 2.22.0
Apr 16 23:53:30.791839 ignition[884]: Stage: disks
Apr 16 23:53:30.791932 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:30.791940 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:30.792504 ignition[884]: disks: disks passed
Apr 16 23:53:30.792539 ignition[884]: Ignition finished successfully
Apr 16 23:53:30.795940 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:53:30.797741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:53:30.798842 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:53:30.799562 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:53:30.800628 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:53:30.801622 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:53:30.804231 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:53:30.838668 systemd-fsck[893]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 23:53:30.843393 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:53:30.846623 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:53:30.951864 kernel: EXT4-fs (sda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 16 23:53:30.951663 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:53:30.952624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:53:30.954362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:53:30.956854 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:53:30.959021 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 23:53:30.960860 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:53:30.961497 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:53:30.981160 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:53:30.984045 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:53:30.996812 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (901)
Apr 16 23:53:31.005361 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:53:31.005385 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:53:31.029719 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:53:31.029762 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:53:31.029772 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:53:31.035809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:53:31.039822 coreos-metadata[903]: Apr 16 23:53:31.039 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 16 23:53:31.040465 coreos-metadata[903]: Apr 16 23:53:31.039 INFO Fetch successful
Apr 16 23:53:31.041433 coreos-metadata[903]: Apr 16 23:53:31.040 INFO wrote hostname ci-4459-2-4-n-84256b4514 to /sysroot/etc/hostname
Apr 16 23:53:31.043172 initrd-setup-root[928]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:53:31.043269 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:53:31.048601 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:53:31.051653 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:53:31.054775 initrd-setup-root[950]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:53:31.130496 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:53:31.131733 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:53:31.132576 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:53:31.148810 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:53:31.163870 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:53:31.171689 ignition[1018]: INFO : Ignition 2.22.0
Apr 16 23:53:31.171689 ignition[1018]: INFO : Stage: mount
Apr 16 23:53:31.172818 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:31.172818 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:31.173544 ignition[1018]: INFO : mount: mount passed
Apr 16 23:53:31.173544 ignition[1018]: INFO : Ignition finished successfully
Apr 16 23:53:31.175292 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:53:31.176441 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:53:31.243544 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:53:31.244891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:53:31.277852 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1029)
Apr 16 23:53:31.285485 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:53:31.285533 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:53:31.299406 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:53:31.299466 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:53:31.307365 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:53:31.311106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:53:31.358856 ignition[1046]: INFO : Ignition 2.22.0
Apr 16 23:53:31.360906 ignition[1046]: INFO : Stage: files
Apr 16 23:53:31.360906 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:31.360906 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:31.363517 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:53:31.363517 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:53:31.363517 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:53:31.366967 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:53:31.367934 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:53:31.368759 unknown[1046]: wrote ssh authorized keys file for user: core
Apr 16 23:53:31.369741 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:53:31.372370 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:53:31.373584 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 23:53:31.617630 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:53:31.715941 systemd-networkd[865]: eth0: Gained IPv6LL
Apr 16 23:53:31.843988 systemd-networkd[865]: eth1: Gained IPv6LL
Apr 16 23:53:32.017367 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:53:32.017367 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:53:32.020901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 16 23:53:32.567300 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:53:33.906502 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 23:53:33.906502 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:53:33.910508 ignition[1046]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:53:33.926753 ignition[1046]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:53:33.926753 ignition[1046]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:53:33.926753 ignition[1046]: INFO : files: files passed
Apr 16 23:53:33.926753 ignition[1046]: INFO : Ignition finished successfully
Apr 16 23:53:33.916138 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:53:33.920007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:53:33.927999 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:53:33.946727 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:53:33.946957 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:53:33.960693 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:53:33.962218 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:53:33.964687 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:53:33.968132 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:53:33.969286 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:53:33.971893 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:53:34.040777 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:53:34.040973 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:53:34.043987 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:53:34.045729 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:53:34.047000 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:53:34.050012 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:53:34.077476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:53:34.080883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:53:34.102592 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:53:34.104344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:53:34.105271 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:53:34.106658 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:53:34.106831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:53:34.108780 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:53:34.110204 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:53:34.111509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:53:34.112712 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:53:34.114233 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:53:34.115559 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:53:34.116912 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:53:34.118243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:53:34.119621 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:53:34.120981 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:53:34.122321 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:53:34.123687 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:53:34.123929 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:53:34.125624 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:53:34.126970 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:53:34.128291 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:53:34.128875 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:53:34.130278 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:53:34.130438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:53:34.132439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:53:34.132596 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:53:34.133983 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:53:34.134191 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:53:34.135283 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 23:53:34.135499 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:53:34.137673 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:53:34.140354 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:53:34.140716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:53:34.141576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:53:34.142434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:53:34.142937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:53:34.146570 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:53:34.154995 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:53:34.171821 ignition[1099]: INFO : Ignition 2.22.0
Apr 16 23:53:34.171821 ignition[1099]: INFO : Stage: umount
Apr 16 23:53:34.171821 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:53:34.171821 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:53:34.176832 ignition[1099]: INFO : umount: umount passed
Apr 16 23:53:34.176832 ignition[1099]: INFO : Ignition finished successfully
Apr 16 23:53:34.174436 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:53:34.174925 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:53:34.176121 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:53:34.176692 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:53:34.176753 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:53:34.179442 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:53:34.179487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:53:34.180134 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:53:34.180167 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:53:34.180769 systemd[1]: Stopped target network.target - Network.
Apr 16 23:53:34.181358 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:53:34.181398 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:53:34.181984 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:53:34.182563 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:53:34.183775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:53:34.184122 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:53:34.184740 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:53:34.185376 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:53:34.185411 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:53:34.185982 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:53:34.186012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:53:34.186557 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:53:34.186597 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:53:34.187153 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:53:34.187186 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:53:34.187922 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:53:34.188463 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:53:34.189357 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:53:34.189440 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:53:34.190375 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:53:34.190440 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:53:34.192490 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:53:34.192587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:53:34.196389 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:53:34.196968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:53:34.197062 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:53:34.198560 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:53:34.199154 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:53:34.199694 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:53:34.199729 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:53:34.201860 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:53:34.202174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:53:34.202213 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:53:34.202575 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:53:34.202608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:53:34.204018 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:53:34.204055 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:53:34.205408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:53:34.205444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:53:34.206537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:53:34.208566 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:53:34.208616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:53:34.222078 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:53:34.222223 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:53:34.222957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:53:34.223009 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:53:34.223374 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:53:34.223401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:53:34.224119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:53:34.224159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:53:34.225363 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:53:34.225401 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:53:34.226440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:53:34.226480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:53:34.228900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:53:34.229314 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:53:34.229365 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:53:34.232336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:53:34.232377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:53:34.233901 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 23:53:34.233936 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:53:34.234922 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:53:34.234960 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:53:34.235410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:53:34.235444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:53:34.237178 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:53:34.237243 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 16 23:53:34.237279 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:53:34.237314 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:53:34.237610 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:53:34.239868 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:53:34.243276 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:53:34.243371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:53:34.244337 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:53:34.245429 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:53:34.260237 systemd[1]: Switching root.
Apr 16 23:53:34.300398 systemd-journald[197]: Journal stopped
Apr 16 23:53:35.309381 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:53:35.309448 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:53:35.309652 kernel: SELinux: policy capability open_perms=1
Apr 16 23:53:35.309665 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:53:35.309679 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:53:35.309687 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:53:35.309696 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:53:35.309705 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:53:35.309715 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:53:35.309723 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:53:35.309733 kernel: audit: type=1403 audit(1776383614.433:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:53:35.309747 systemd[1]: Successfully loaded SELinux policy in 53.010ms.
Apr 16 23:53:35.309764 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.389ms.
Apr 16 23:53:35.309774 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:53:35.309815 systemd[1]: Detected virtualization kvm.
Apr 16 23:53:35.309824 systemd[1]: Detected architecture x86-64.
Apr 16 23:53:35.309833 systemd[1]: Detected first boot.
Apr 16 23:53:35.309844 systemd[1]: Hostname set to .
Apr 16 23:53:35.309853 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:53:35.309862 zram_generator::config[1142]: No configuration found.
Apr 16 23:53:35.309872 kernel: Guest personality initialized and is inactive
Apr 16 23:53:35.309880 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 16 23:53:35.309888 kernel: Initialized host personality
Apr 16 23:53:35.309897 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:53:35.309909 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:53:35.309919 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:53:35.309928 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:53:35.309937 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:53:35.310012 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:53:35.310022 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:53:35.313769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:53:35.313795 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:53:35.313813 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:53:35.313826 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:53:35.313835 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:53:35.313844 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:53:35.313853 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:53:35.313861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:53:35.313870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:53:35.313880 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:53:35.313891 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:53:35.313900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:53:35.313909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:53:35.313918 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 23:53:35.313927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:53:35.313935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:53:35.313944 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:53:35.313953 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:53:35.313964 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:53:35.313973 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:53:35.313982 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:53:35.313991 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:53:35.313999 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:53:35.314008 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:53:35.314017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:53:35.314025 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:53:35.314034 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:53:35.314045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:53:35.314054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:53:35.314062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:53:35.314071 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:53:35.314080 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:53:35.314089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:53:35.314097 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:53:35.314106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:35.314117 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:53:35.314128 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:53:35.314136 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:53:35.314145 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:53:35.314154 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:53:35.314163 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:53:35.314173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:53:35.314182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:53:35.314191 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:53:35.314202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:53:35.314211 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:53:35.314224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:53:35.314233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:53:35.314242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:53:35.314251 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:53:35.314260 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:53:35.314269 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:53:35.314283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:53:35.314292 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:53:35.314303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:53:35.314312 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:53:35.314322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:53:35.314340 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:53:35.314349 kernel: loop: module loaded
Apr 16 23:53:35.314357 kernel: fuse: init (API version 7.41)
Apr 16 23:53:35.314366 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:53:35.314375 kernel: ACPI: bus type drm_connector registered
Apr 16 23:53:35.314383 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:53:35.314394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:53:35.314403 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:53:35.314412 systemd[1]: Stopped verity-setup.service.
Apr 16 23:53:35.314421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:35.314429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:53:35.314459 systemd-journald[1219]: Collecting audit messages is disabled.
Apr 16 23:53:35.314477 systemd-journald[1219]: Journal started
Apr 16 23:53:35.314496 systemd-journald[1219]: Runtime Journal (/run/log/journal/40935e4284824813967529263e0333a7) is 8M, max 76.1M, 68.1M free.
Apr 16 23:53:35.008579 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:53:35.017089 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 16 23:53:35.017494 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:53:35.321801 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:53:35.321842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:53:35.322293 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:53:35.322742 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:53:35.323193 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:53:35.323632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:53:35.324306 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:53:35.324915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:53:35.325501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:53:35.325649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:53:35.326296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:53:35.326507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:53:35.327163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:53:35.327361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:53:35.328018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:53:35.328214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:53:35.328861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:53:35.329048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:53:35.329660 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:53:35.329877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:53:35.330509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:53:35.331526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:53:35.332175 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:53:35.332873 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:53:35.342102 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:53:35.346861 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:53:35.347994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:53:35.349428 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:53:35.349452 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:53:35.350525 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:53:35.365884 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:53:35.366341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:53:35.368937 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:53:35.370887 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:53:35.371251 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:53:35.371932 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:53:35.372287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:53:35.375698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:53:35.378055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:53:35.379926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:53:35.381571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:53:35.383057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:53:35.419088 systemd-journald[1219]: Time spent on flushing to /var/log/journal/40935e4284824813967529263e0333a7 is 25.237ms for 1249 entries.
Apr 16 23:53:35.419088 systemd-journald[1219]: System Journal (/var/log/journal/40935e4284824813967529263e0333a7) is 8M, max 584.8M, 576.8M free.
Apr 16 23:53:35.478927 systemd-journald[1219]: Received client request to flush runtime journal.
Apr 16 23:53:35.478989 kernel: loop0: detected capacity change from 0 to 110984
Apr 16 23:53:35.479013 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:53:35.423870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:53:35.424690 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:53:35.432639 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:53:35.437549 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Apr 16 23:53:35.437559 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Apr 16 23:53:35.445128 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:53:35.454383 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:53:35.455559 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:53:35.487124 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:53:35.498221 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:53:35.501458 kernel: loop1: detected capacity change from 0 to 8
Apr 16 23:53:35.503249 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:53:35.516812 kernel: loop2: detected capacity change from 0 to 128560
Apr 16 23:53:35.533950 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:53:35.537767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:53:35.543884 kernel: loop3: detected capacity change from 0 to 219192
Apr 16 23:53:35.572838 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Apr 16 23:53:35.572853 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Apr 16 23:53:35.577111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:53:35.601897 kernel: loop4: detected capacity change from 0 to 110984
Apr 16 23:53:35.624811 kernel: loop5: detected capacity change from 0 to 8
Apr 16 23:53:35.627810 kernel: loop6: detected capacity change from 0 to 128560
Apr 16 23:53:35.642834 kernel: loop7: detected capacity change from 0 to 219192
Apr 16 23:53:35.672701 (sd-merge)[1296]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 16 23:53:35.674006 (sd-merge)[1296]: Merged extensions into '/usr'.
Apr 16 23:53:35.678807 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:53:35.678886 systemd[1]: Reloading...
Apr 16 23:53:35.747876 zram_generator::config[1322]: No configuration found.
Apr 16 23:53:35.836322 ldconfig[1262]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 23:53:35.917778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:53:35.918286 systemd[1]: Reloading finished in 238 ms.
Apr 16 23:53:35.936239 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 23:53:35.937077 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:53:35.944027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:53:35.949285 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:53:35.951910 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:53:35.954839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:53:35.974878 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:53:35.974889 systemd[1]: Reloading...
Apr 16 23:53:35.983227 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Apr 16 23:53:35.986856 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:53:35.986883 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:53:35.987137 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:53:35.987359 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:53:35.988083 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:53:35.988283 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Apr 16 23:53:35.988401 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Apr 16 23:53:35.996563 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:53:35.996864 systemd-tmpfiles[1368]: Skipping /boot
Apr 16 23:53:36.012205 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:53:36.012902 systemd-tmpfiles[1368]: Skipping /boot
Apr 16 23:53:36.075812 zram_generator::config[1399]: No configuration found.
Apr 16 23:53:36.223826 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5
Apr 16 23:53:36.285831 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 23:53:36.307807 kernel: ACPI: button: Power Button [PWRF]
Apr 16 23:53:36.320897 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 23:53:36.322032 systemd[1]: Reloading finished in 346 ms.
Apr 16 23:53:36.333240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:53:36.334813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:53:36.338006 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 16 23:53:36.339322 kernel: Console: switching to colour dummy device 80x25
Apr 16 23:53:36.340802 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 16 23:53:36.344114 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 23:53:36.344150 kernel: [drm] features: -context_init
Apr 16 23:53:36.348804 kernel: [drm] number of scanouts: 1
Apr 16 23:53:36.350805 kernel: [drm] number of cap sets: 0
Apr 16 23:53:36.353800 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Apr 16 23:53:36.363363 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 16 23:53:36.363409 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:53:36.370821 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 23:53:36.373800 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 16 23:53:36.374009 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 23:53:36.378801 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 23:53:36.387690 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 16 23:53:36.416580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:36.420014 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:53:36.422519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:53:36.423572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:53:36.425045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:53:36.430920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:53:36.433036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:53:36.433211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:53:36.433317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:53:36.434417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:53:36.438889 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:53:36.444090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:53:36.448622 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:53:36.453029 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:53:36.453188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:36.456629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:53:36.464592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:53:36.469735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:53:36.470216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:53:36.479966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:36.480158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:53:36.483604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:53:36.486366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:53:36.490110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:53:36.491389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:53:36.491501 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:53:36.495478 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:53:36.495547 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:53:36.499564 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:53:36.499873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:53:36.505934 systemd[1]: Finished ensure-sysext.service.
Apr 16 23:53:36.512717 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 23:53:36.524609 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 23:53:36.543106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:53:36.543290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:53:36.546193 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 23:53:36.548813 kernel: EDAC MC: Ver: 3.0.0
Apr 16 23:53:36.554408 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 23:53:36.556731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:53:36.557695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:53:36.565611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:53:36.566251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:53:36.574638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:53:36.579087 augenrules[1551]: No rules
Apr 16 23:53:36.582463 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:53:36.583090 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:53:36.587663 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:53:36.597201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:53:36.598439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:53:36.598493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:53:36.600597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:53:36.601095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:53:36.605596 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:53:36.608860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:53:36.636248 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 23:53:36.638585 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 23:53:36.641194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 23:53:36.670633 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 23:53:36.680578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:53:36.743609 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 23:53:36.745410 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 23:53:36.750696 systemd-networkd[1508]: lo: Link UP Apr 16 23:53:36.750703 systemd-networkd[1508]: lo: Gained carrier Apr 16 23:53:36.753499 systemd-networkd[1508]: Enumeration completed Apr 16 23:53:36.753566 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 23:53:36.755901 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:53:36.755907 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:53:36.756390 systemd-networkd[1508]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:53:36.756394 systemd-networkd[1508]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:53:36.756887 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Apr 16 23:53:36.757029 systemd-networkd[1508]: eth0: Link UP Apr 16 23:53:36.757343 systemd-networkd[1508]: eth1: Link UP Apr 16 23:53:36.757477 systemd-networkd[1508]: eth0: Gained carrier Apr 16 23:53:36.757488 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:53:36.759957 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 23:53:36.760248 systemd-resolved[1510]: Positive Trust Anchors: Apr 16 23:53:36.760257 systemd-resolved[1510]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 23:53:36.760277 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 23:53:36.761011 systemd-networkd[1508]: eth1: Gained carrier Apr 16 23:53:36.761023 systemd-networkd[1508]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:53:36.765644 systemd-resolved[1510]: Using system hostname 'ci-4459-2-4-n-84256b4514'. Apr 16 23:53:36.766918 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 23:53:36.769437 systemd[1]: Reached target network.target - Network. Apr 16 23:53:36.769751 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:53:36.770144 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 16 23:53:36.770530 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 23:53:36.773814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 23:53:36.774163 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 23:53:36.774732 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 23:53:36.775153 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 23:53:36.775483 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 23:53:36.775959 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 23:53:36.776019 systemd[1]: Reached target paths.target - Path Units. Apr 16 23:53:36.777750 systemd[1]: Reached target timers.target - Timer Units. Apr 16 23:53:36.779422 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 23:53:36.781212 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 23:53:36.784597 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 23:53:36.785178 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 23:53:36.785580 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 23:53:36.786841 systemd-networkd[1508]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 16 23:53:36.787997 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection. Apr 16 23:53:36.789410 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 23:53:36.791772 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Apr 16 23:53:36.793029 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 23:53:36.794703 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 23:53:36.796117 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 23:53:36.797794 systemd[1]: Reached target basic.target - Basic System. Apr 16 23:53:36.798581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:53:36.798613 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:53:36.799562 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 23:53:36.803078 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 16 23:53:36.807962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 23:53:36.811183 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 23:53:36.814848 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 23:53:36.815833 systemd-networkd[1508]: eth0: DHCPv4 address 77.42.47.3/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 16 23:53:36.816018 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 23:53:36.817419 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 23:53:36.819145 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection. Apr 16 23:53:36.821884 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 23:53:36.824909 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 23:53:36.830308 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 16 23:53:36.832528 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 16 23:53:36.837498 jq[1589]: false Apr 16 23:53:36.840554 coreos-metadata[1584]: Apr 16 23:53:36.840 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 16 23:53:36.840849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 23:53:36.842962 coreos-metadata[1584]: Apr 16 23:53:36.842 INFO Fetch successful Apr 16 23:53:36.843477 coreos-metadata[1584]: Apr 16 23:53:36.842 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 16 23:53:36.844112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 23:53:36.844318 coreos-metadata[1584]: Apr 16 23:53:36.844 INFO Fetch successful Apr 16 23:53:36.850780 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 23:53:36.852720 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 23:53:36.853096 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 23:53:36.855946 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 23:53:36.858375 extend-filesystems[1590]: Found /dev/sda6 Apr 16 23:53:36.858541 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 23:53:36.870960 extend-filesystems[1590]: Found /dev/sda9 Apr 16 23:53:36.871861 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 23:53:36.873114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 23:53:36.873303 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 16 23:53:36.878498 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache Apr 16 23:53:36.877954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 23:53:36.876650 oslogin_cache_refresh[1591]: Refreshing passwd entry cache Apr 16 23:53:36.880055 extend-filesystems[1590]: Checking size of /dev/sda9 Apr 16 23:53:36.879501 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 23:53:36.882843 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting Apr 16 23:53:36.882843 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 23:53:36.882843 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache Apr 16 23:53:36.882537 oslogin_cache_refresh[1591]: Failure getting users, quitting Apr 16 23:53:36.882552 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 23:53:36.882584 oslogin_cache_refresh[1591]: Refreshing group entry cache Apr 16 23:53:36.886495 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting Apr 16 23:53:36.886495 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 23:53:36.884610 oslogin_cache_refresh[1591]: Failure getting groups, quitting Apr 16 23:53:36.884620 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 23:53:36.888163 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 23:53:36.888506 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Apr 16 23:53:36.890820 jq[1604]: true Apr 16 23:53:36.909756 extend-filesystems[1590]: Resized partition /dev/sda9 Apr 16 23:53:36.919829 extend-filesystems[1636]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 23:53:36.922489 tar[1607]: linux-amd64/LICENSE Apr 16 23:53:36.922489 tar[1607]: linux-amd64/helm Apr 16 23:53:36.925253 update_engine[1602]: I20260416 23:53:36.923693 1602 main.cc:92] Flatcar Update Engine starting Apr 16 23:53:36.931175 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Apr 16 23:53:36.934210 (ntainerd)[1628]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 23:53:36.943115 jq[1621]: true Apr 16 23:53:36.950413 dbus-daemon[1585]: [system] SELinux support is enabled Apr 16 23:53:36.950565 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 23:53:36.957487 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 23:53:36.958077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 23:53:36.961637 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 23:53:36.961652 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 23:53:36.966427 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 23:53:36.969887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 23:53:36.982861 systemd[1]: Started update-engine.service - Update Engine. 
Apr 16 23:53:36.983040 update_engine[1602]: I20260416 23:53:36.982913 1602 update_check_scheduler.cc:74] Next update check in 9m52s Apr 16 23:53:37.007868 systemd-logind[1601]: New seat seat0. Apr 16 23:53:37.011261 systemd-logind[1601]: Watching system buttons on /dev/input/event3 (Power Button) Apr 16 23:53:37.011281 systemd-logind[1601]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 23:53:37.015133 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 23:53:37.018536 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 23:53:37.022755 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 23:53:37.030247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 23:53:37.126809 bash[1667]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:53:37.119960 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 23:53:37.129284 systemd[1]: Starting sshkeys.service... Apr 16 23:53:37.168598 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 16 23:53:37.173870 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 16 23:53:37.207044 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 23:53:37.227392 containerd[1628]: time="2026-04-16T23:53:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 23:53:37.228416 containerd[1628]: time="2026-04-16T23:53:37.228313916Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 23:53:37.229112 coreos-metadata[1673]: Apr 16 23:53:37.229 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 16 23:53:37.230133 coreos-metadata[1673]: Apr 16 23:53:37.230 INFO Fetch successful Apr 16 23:53:37.235232 containerd[1628]: time="2026-04-16T23:53:37.235193783Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.08µs" Apr 16 23:53:37.235232 containerd[1628]: time="2026-04-16T23:53:37.235214443Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 23:53:37.235232 containerd[1628]: time="2026-04-16T23:53:37.235227433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 23:53:37.239241 containerd[1628]: time="2026-04-16T23:53:37.239222891Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 23:53:37.239273 containerd[1628]: time="2026-04-16T23:53:37.239243991Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 23:53:37.239273 containerd[1628]: time="2026-04-16T23:53:37.239262161Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239320 containerd[1628]: time="2026-04-16T23:53:37.239303661Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239320 containerd[1628]: time="2026-04-16T23:53:37.239312211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239523 containerd[1628]: time="2026-04-16T23:53:37.239507301Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239543 containerd[1628]: time="2026-04-16T23:53:37.239522061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239543 containerd[1628]: time="2026-04-16T23:53:37.239529171Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239543 containerd[1628]: time="2026-04-16T23:53:37.239534511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239607 containerd[1628]: time="2026-04-16T23:53:37.239592611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239765 containerd[1628]: time="2026-04-16T23:53:37.239751081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:53:37.239801 containerd[1628]: time="2026-04-16T23:53:37.239776611Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:53:37.240564 containerd[1628]: 
time="2026-04-16T23:53:37.240535791Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 23:53:37.240590 containerd[1628]: time="2026-04-16T23:53:37.240565661Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 23:53:37.240705 containerd[1628]: time="2026-04-16T23:53:37.240693231Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 23:53:37.241045 unknown[1673]: wrote ssh authorized keys file for user: core Apr 16 23:53:37.242912 containerd[1628]: time="2026-04-16T23:53:37.242867080Z" level=info msg="metadata content store policy set" policy=shared Apr 16 23:53:37.257710 containerd[1628]: time="2026-04-16T23:53:37.257685264Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 23:53:37.257746 containerd[1628]: time="2026-04-16T23:53:37.257719294Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 23:53:37.257746 containerd[1628]: time="2026-04-16T23:53:37.257728884Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 23:53:37.257746 containerd[1628]: time="2026-04-16T23:53:37.257738404Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 23:53:37.257846 containerd[1628]: time="2026-04-16T23:53:37.257757424Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 23:53:37.257846 containerd[1628]: time="2026-04-16T23:53:37.257764824Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 23:53:37.257846 containerd[1628]: time="2026-04-16T23:53:37.257773674Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 
23:53:37.258181 containerd[1628]: time="2026-04-16T23:53:37.258163374Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 23:53:37.258199 containerd[1628]: time="2026-04-16T23:53:37.258191314Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 23:53:37.258218 containerd[1628]: time="2026-04-16T23:53:37.258200174Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 23:53:37.258322 containerd[1628]: time="2026-04-16T23:53:37.258308564Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 23:53:37.258344 containerd[1628]: time="2026-04-16T23:53:37.258324734Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 23:53:37.259042 containerd[1628]: time="2026-04-16T23:53:37.259026853Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 23:53:37.259063 containerd[1628]: time="2026-04-16T23:53:37.259047813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 23:53:37.259063 containerd[1628]: time="2026-04-16T23:53:37.259058343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 23:53:37.259096 containerd[1628]: time="2026-04-16T23:53:37.259067393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259074993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259397943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: 
time="2026-04-16T23:53:37.259410563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259417953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259426003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259433513Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259540043Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259575813Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259584533Z" level=info msg="Start snapshots syncer" Apr 16 23:53:37.260797 containerd[1628]: time="2026-04-16T23:53:37.259854193Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 23:53:37.260949 containerd[1628]: time="2026-04-16T23:53:37.260558183Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 23:53:37.260949 containerd[1628]: time="2026-04-16T23:53:37.260592653Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 23:53:37.262988 containerd[1628]: time="2026-04-16T23:53:37.262968672Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 23:53:37.264887 containerd[1628]: time="2026-04-16T23:53:37.264752451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 23:53:37.264917 containerd[1628]: time="2026-04-16T23:53:37.264903591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 23:53:37.264939 containerd[1628]: time="2026-04-16T23:53:37.264916511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 23:53:37.265147 containerd[1628]: time="2026-04-16T23:53:37.264927091Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 23:53:37.265147 containerd[1628]: time="2026-04-16T23:53:37.265113731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 23:53:37.265147 containerd[1628]: time="2026-04-16T23:53:37.265134421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 23:53:37.265311 containerd[1628]: time="2026-04-16T23:53:37.265145481Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 23:53:37.265422 containerd[1628]: time="2026-04-16T23:53:37.265403241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 23:53:37.265442 containerd[1628]: time="2026-04-16T23:53:37.265426161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 23:53:37.265442 containerd[1628]: time="2026-04-16T23:53:37.265438471Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 23:53:37.265755 containerd[1628]: time="2026-04-16T23:53:37.265737520Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:53:37.265864 containerd[1628]: time="2026-04-16T23:53:37.265762240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:53:37.265879 containerd[1628]: time="2026-04-16T23:53:37.265863940Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:53:37.265894 containerd[1628]: time="2026-04-16T23:53:37.265875970Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:53:37.265894 containerd[1628]: time="2026-04-16T23:53:37.265884450Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265894680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265930340Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265950840Z" level=info msg="runtime interface created" Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265956850Z" level=info msg="created NRI interface" Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265979170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.265990990Z" level=info msg="Connect containerd service" Apr 16 23:53:37.270758 containerd[1628]: time="2026-04-16T23:53:37.266010240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:53:37.277583 
sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 23:53:37.278137 containerd[1628]: time="2026-04-16T23:53:37.278112445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:53:37.290814 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:53:37.291396 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 23:53:37.298800 systemd[1]: Finished sshkeys.service. Apr 16 23:53:37.311838 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Apr 16 23:53:37.317665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 23:53:37.321935 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 23:53:37.338114 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355604023Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355651253Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355672603Z" level=info msg="Start subscribing containerd event" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355693823Z" level=info msg="Start recovering state" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355762403Z" level=info msg="Start event monitor" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355770193Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.355777423Z" level=info msg="Start streaming server" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.357230032Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.357240772Z" level=info msg="runtime interface starting up..." Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.357246352Z" level=info msg="starting plugins..." Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.357261992Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:53:37.359212 containerd[1628]: time="2026-04-16T23:53:37.357394102Z" level=info msg="containerd successfully booted in 0.130453s" Apr 16 23:53:37.338314 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:53:37.341570 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 23:53:37.357475 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 23:53:37.360177 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 16 23:53:37.362232 extend-filesystems[1636]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 16 23:53:37.362232 extend-filesystems[1636]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 16 23:53:37.362232 extend-filesystems[1636]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 16 23:53:37.370019 extend-filesystems[1590]: Resized filesystem in /dev/sda9
Apr 16 23:53:37.365572 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 23:53:37.366520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 23:53:37.374018 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 23:53:37.377987 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 23:53:37.379697 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 23:53:37.480138 tar[1607]: linux-amd64/README.md
Apr 16 23:53:37.491505 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 23:53:37.891861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 23:53:37.895755 systemd[1]: Started sshd@0-77.42.47.3:22-4.175.71.9:32872.service - OpenSSH per-connection server daemon (4.175.71.9:32872).
Apr 16 23:53:38.130369 sshd[1720]: Accepted publickey for core from 4.175.71.9 port 32872 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:38.133687 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:38.145281 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 23:53:38.149032 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 23:53:38.166882 systemd-logind[1601]: New session 1 of user core.
Apr 16 23:53:38.180670 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 23:53:38.189079 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 23:53:38.209757 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 23:53:38.214723 systemd-logind[1601]: New session c1 of user core.
Apr 16 23:53:38.244020 systemd-networkd[1508]: eth0: Gained IPv6LL
Apr 16 23:53:38.246583 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection.
Apr 16 23:53:38.252299 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 23:53:38.255685 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 23:53:38.264020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:53:38.270123 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 23:53:38.308452 systemd-networkd[1508]: eth1: Gained IPv6LL
Apr 16 23:53:38.308773 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection.
Apr 16 23:53:38.310852 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 23:53:38.359200 systemd[1725]: Queued start job for default target default.target.
Apr 16 23:53:38.367150 systemd[1725]: Created slice app.slice - User Application Slice.
Apr 16 23:53:38.367174 systemd[1725]: Reached target paths.target - Paths.
Apr 16 23:53:38.367284 systemd[1725]: Reached target timers.target - Timers.
Apr 16 23:53:38.370858 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 23:53:38.378461 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 23:53:38.378510 systemd[1725]: Reached target sockets.target - Sockets.
Apr 16 23:53:38.378626 systemd[1725]: Reached target basic.target - Basic System.
Apr 16 23:53:38.378666 systemd[1725]: Reached target default.target - Main User Target.
Apr 16 23:53:38.378694 systemd[1725]: Startup finished in 152ms.
Apr 16 23:53:38.378769 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 23:53:38.386886 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 23:53:38.494868 systemd[1]: Started sshd@1-77.42.47.3:22-4.175.71.9:32882.service - OpenSSH per-connection server daemon (4.175.71.9:32882).
Apr 16 23:53:38.689869 sshd[1748]: Accepted publickey for core from 4.175.71.9 port 32882 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:38.692320 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:38.700177 systemd-logind[1601]: New session 2 of user core.
Apr 16 23:53:38.707412 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 23:53:38.784930 sshd[1751]: Connection closed by 4.175.71.9 port 32882
Apr 16 23:53:38.786135 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:38.796178 systemd[1]: sshd@1-77.42.47.3:22-4.175.71.9:32882.service: Deactivated successfully.
Apr 16 23:53:38.796851 systemd-logind[1601]: Session 2 logged out. Waiting for processes to exit.
Apr 16 23:53:38.799975 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 23:53:38.803441 systemd-logind[1601]: Removed session 2.
Apr 16 23:53:38.827424 systemd[1]: Started sshd@2-77.42.47.3:22-4.175.71.9:32898.service - OpenSSH per-connection server daemon (4.175.71.9:32898).
Apr 16 23:53:38.987236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:53:38.992196 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:53:38.993908 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 23:53:38.994779 systemd[1]: Startup finished in 2.813s (kernel) + 6.768s (initrd) + 4.612s (userspace) = 14.193s.
Apr 16 23:53:39.031912 sshd[1757]: Accepted publickey for core from 4.175.71.9 port 32898 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:39.034280 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:39.043185 systemd-logind[1601]: New session 3 of user core.
Apr 16 23:53:39.047904 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 23:53:39.133725 sshd[1770]: Connection closed by 4.175.71.9 port 32898
Apr 16 23:53:39.134376 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:39.142909 systemd-logind[1601]: Session 3 logged out. Waiting for processes to exit.
Apr 16 23:53:39.143622 systemd[1]: sshd@2-77.42.47.3:22-4.175.71.9:32898.service: Deactivated successfully.
Apr 16 23:53:39.148492 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 23:53:39.152253 systemd-logind[1601]: Removed session 3.
Apr 16 23:53:39.429637 kubelet[1764]: E0416 23:53:39.429532 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:53:39.433062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:53:39.433516 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:53:39.434432 systemd[1]: kubelet.service: Consumed 721ms CPU time, 256.7M memory peak.
Apr 16 23:53:49.181741 systemd[1]: Started sshd@3-77.42.47.3:22-4.175.71.9:51902.service - OpenSSH per-connection server daemon (4.175.71.9:51902).
Apr 16 23:53:49.397515 sshd[1782]: Accepted publickey for core from 4.175.71.9 port 51902 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:49.400013 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:49.408872 systemd-logind[1601]: New session 4 of user core.
Apr 16 23:53:49.416024 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 23:53:49.468824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:53:49.471752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:53:49.499955 sshd[1785]: Connection closed by 4.175.71.9 port 51902
Apr 16 23:53:49.502100 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:49.509942 systemd[1]: sshd@3-77.42.47.3:22-4.175.71.9:51902.service: Deactivated successfully.
Apr 16 23:53:49.510863 systemd-logind[1601]: Session 4 logged out. Waiting for processes to exit.
Apr 16 23:53:49.515649 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 23:53:49.522282 systemd-logind[1601]: Removed session 4.
Apr 16 23:53:49.548144 systemd[1]: Started sshd@4-77.42.47.3:22-4.175.71.9:51912.service - OpenSSH per-connection server daemon (4.175.71.9:51912).
Apr 16 23:53:49.638502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:53:49.645100 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:53:49.676637 kubelet[1801]: E0416 23:53:49.676536 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:53:49.683079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:53:49.683241 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:53:49.683740 systemd[1]: kubelet.service: Consumed 174ms CPU time, 110.5M memory peak.
Apr 16 23:53:49.751963 sshd[1794]: Accepted publickey for core from 4.175.71.9 port 51912 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:49.753974 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:49.763897 systemd-logind[1601]: New session 5 of user core.
Apr 16 23:53:49.771043 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 23:53:49.844404 sshd[1810]: Connection closed by 4.175.71.9 port 51912
Apr 16 23:53:49.845355 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:49.851324 systemd[1]: sshd@4-77.42.47.3:22-4.175.71.9:51912.service: Deactivated successfully.
Apr 16 23:53:49.854536 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 23:53:49.856741 systemd-logind[1601]: Session 5 logged out. Waiting for processes to exit.
Apr 16 23:53:49.859879 systemd-logind[1601]: Removed session 5.
Apr 16 23:53:49.894259 systemd[1]: Started sshd@5-77.42.47.3:22-4.175.71.9:51924.service - OpenSSH per-connection server daemon (4.175.71.9:51924).
Apr 16 23:53:50.099239 sshd[1816]: Accepted publickey for core from 4.175.71.9 port 51924 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:50.102155 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:50.109852 systemd-logind[1601]: New session 6 of user core.
Apr 16 23:53:50.119087 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 23:53:50.196649 sshd[1819]: Connection closed by 4.175.71.9 port 51924
Apr 16 23:53:50.198004 sshd-session[1816]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:50.202019 systemd-logind[1601]: Session 6 logged out. Waiting for processes to exit.
Apr 16 23:53:50.202748 systemd[1]: sshd@5-77.42.47.3:22-4.175.71.9:51924.service: Deactivated successfully.
Apr 16 23:53:50.205147 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 23:53:50.206962 systemd-logind[1601]: Removed session 6.
Apr 16 23:53:50.240256 systemd[1]: Started sshd@6-77.42.47.3:22-4.175.71.9:51936.service - OpenSSH per-connection server daemon (4.175.71.9:51936).
Apr 16 23:53:50.437946 sshd[1825]: Accepted publickey for core from 4.175.71.9 port 51936 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:50.440760 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:50.449884 systemd-logind[1601]: New session 7 of user core.
Apr 16 23:53:50.457069 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 23:53:50.520223 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 23:53:50.520891 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:53:50.538909 sudo[1829]: pam_unix(sudo:session): session closed for user root
Apr 16 23:53:50.570620 sshd[1828]: Connection closed by 4.175.71.9 port 51936
Apr 16 23:53:50.571852 sshd-session[1825]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:50.578030 systemd[1]: sshd@6-77.42.47.3:22-4.175.71.9:51936.service: Deactivated successfully.
Apr 16 23:53:50.581651 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 23:53:50.585197 systemd-logind[1601]: Session 7 logged out. Waiting for processes to exit.
Apr 16 23:53:50.587305 systemd-logind[1601]: Removed session 7.
Apr 16 23:53:50.622716 systemd[1]: Started sshd@7-77.42.47.3:22-4.175.71.9:51944.service - OpenSSH per-connection server daemon (4.175.71.9:51944).
Apr 16 23:53:50.839864 sshd[1835]: Accepted publickey for core from 4.175.71.9 port 51944 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:50.842759 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:50.851869 systemd-logind[1601]: New session 8 of user core.
Apr 16 23:53:50.859136 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 16 23:53:50.913714 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 23:53:50.914582 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:53:50.920183 sudo[1840]: pam_unix(sudo:session): session closed for user root
Apr 16 23:53:50.930971 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 16 23:53:50.931655 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:53:50.946776 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:53:50.998046 augenrules[1862]: No rules
Apr 16 23:53:50.999911 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:53:51.000279 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:53:51.001882 sudo[1839]: pam_unix(sudo:session): session closed for user root
Apr 16 23:53:51.033019 sshd[1838]: Connection closed by 4.175.71.9 port 51944
Apr 16 23:53:51.034995 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Apr 16 23:53:51.039901 systemd[1]: sshd@7-77.42.47.3:22-4.175.71.9:51944.service: Deactivated successfully.
Apr 16 23:53:51.042240 systemd[1]: session-8.scope: Deactivated successfully.
Apr 16 23:53:51.045181 systemd-logind[1601]: Session 8 logged out. Waiting for processes to exit.
Apr 16 23:53:51.046628 systemd-logind[1601]: Removed session 8.
Apr 16 23:53:51.082563 systemd[1]: Started sshd@8-77.42.47.3:22-4.175.71.9:51952.service - OpenSSH per-connection server daemon (4.175.71.9:51952).
Apr 16 23:53:51.298898 sshd[1871]: Accepted publickey for core from 4.175.71.9 port 51952 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:53:51.301469 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:53:51.311904 systemd-logind[1601]: New session 9 of user core.
Apr 16 23:53:51.318933 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 16 23:53:51.367014 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 23:53:51.367384 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:53:51.649351 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 23:53:51.662548 (dockerd)[1892]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 23:53:51.857818 dockerd[1892]: time="2026-04-16T23:53:51.856712491Z" level=info msg="Starting up"
Apr 16 23:53:51.858805 dockerd[1892]: time="2026-04-16T23:53:51.858737020Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 16 23:53:51.874062 dockerd[1892]: time="2026-04-16T23:53:51.873992604Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 16 23:53:51.905605 systemd[1]: var-lib-docker-metacopy\x2dcheck2285939061-merged.mount: Deactivated successfully.
Apr 16 23:53:51.931649 dockerd[1892]: time="2026-04-16T23:53:51.931603880Z" level=info msg="Loading containers: start."
Apr 16 23:53:51.940816 kernel: Initializing XFRM netlink socket
Apr 16 23:53:52.139866 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection.
Apr 16 23:53:52.761362 systemd-resolved[1510]: Clock change detected. Flushing caches.
Apr 16 23:53:52.761956 systemd-timesyncd[1537]: Contacted time server 104.167.24.26:123 (2.flatcar.pool.ntp.org).
Apr 16 23:53:52.762005 systemd-timesyncd[1537]: Initial clock synchronization to Thu 2026-04-16 23:53:52.760971 UTC.
Apr 16 23:53:52.768638 systemd-networkd[1508]: docker0: Link UP
Apr 16 23:53:52.772631 dockerd[1892]: time="2026-04-16T23:53:52.772600463Z" level=info msg="Loading containers: done."
Apr 16 23:53:52.784589 dockerd[1892]: time="2026-04-16T23:53:52.784553758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 23:53:52.784724 dockerd[1892]: time="2026-04-16T23:53:52.784617348Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 23:53:52.784724 dockerd[1892]: time="2026-04-16T23:53:52.784679978Z" level=info msg="Initializing buildkit"
Apr 16 23:53:52.807791 dockerd[1892]: time="2026-04-16T23:53:52.807626428Z" level=info msg="Completed buildkit initialization"
Apr 16 23:53:52.813182 dockerd[1892]: time="2026-04-16T23:53:52.813143526Z" level=info msg="Daemon has completed initialization"
Apr 16 23:53:52.813422 dockerd[1892]: time="2026-04-16T23:53:52.813307676Z" level=info msg="API listen on /run/docker.sock"
Apr 16 23:53:52.813388 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 23:53:53.246510 containerd[1628]: time="2026-04-16T23:53:53.246296345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 16 23:53:53.871585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474823088.mount: Deactivated successfully.
Apr 16 23:53:54.751532 containerd[1628]: time="2026-04-16T23:53:54.751485098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:54.752615 containerd[1628]: time="2026-04-16T23:53:54.752432288Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100614"
Apr 16 23:53:54.753367 containerd[1628]: time="2026-04-16T23:53:54.753351077Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:54.755362 containerd[1628]: time="2026-04-16T23:53:54.755344387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:54.755860 containerd[1628]: time="2026-04-16T23:53:54.755838876Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.509506211s"
Apr 16 23:53:54.755890 containerd[1628]: time="2026-04-16T23:53:54.755864506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 16 23:53:54.756338 containerd[1628]: time="2026-04-16T23:53:54.756305476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 16 23:53:56.032332 containerd[1628]: time="2026-04-16T23:53:56.032264735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:56.033552 containerd[1628]: time="2026-04-16T23:53:56.033431274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252760"
Apr 16 23:53:56.034476 containerd[1628]: time="2026-04-16T23:53:56.034448294Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:56.036719 containerd[1628]: time="2026-04-16T23:53:56.036689853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:56.037684 containerd[1628]: time="2026-04-16T23:53:56.037663072Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.281326386s"
Apr 16 23:53:56.037731 containerd[1628]: time="2026-04-16T23:53:56.037689132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 16 23:53:56.040108 containerd[1628]: time="2026-04-16T23:53:56.039970971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 16 23:53:57.025868 containerd[1628]: time="2026-04-16T23:53:57.025821871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:57.026828 containerd[1628]: time="2026-04-16T23:53:57.026651130Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810913"
Apr 16 23:53:57.027814 containerd[1628]: time="2026-04-16T23:53:57.027791110Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:57.030084 containerd[1628]: time="2026-04-16T23:53:57.030048449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:57.030957 containerd[1628]: time="2026-04-16T23:53:57.030937168Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 990.945717ms"
Apr 16 23:53:57.030992 containerd[1628]: time="2026-04-16T23:53:57.030961478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 16 23:53:57.031437 containerd[1628]: time="2026-04-16T23:53:57.031412408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 16 23:53:58.053179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860310072.mount: Deactivated successfully.
Apr 16 23:53:58.265830 containerd[1628]: time="2026-04-16T23:53:58.265771924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:58.266754 containerd[1628]: time="2026-04-16T23:53:58.266672974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972982"
Apr 16 23:53:58.267358 containerd[1628]: time="2026-04-16T23:53:58.267339443Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:58.268623 containerd[1628]: time="2026-04-16T23:53:58.268601453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:53:58.269164 containerd[1628]: time="2026-04-16T23:53:58.269026643Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.237593395s"
Apr 16 23:53:58.269164 containerd[1628]: time="2026-04-16T23:53:58.269046383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 16 23:53:58.269437 containerd[1628]: time="2026-04-16T23:53:58.269389152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 16 23:53:58.826801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922297918.mount: Deactivated successfully.
Apr 16 23:54:00.052636 containerd[1628]: time="2026-04-16T23:54:00.052573459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.053860 containerd[1628]: time="2026-04-16T23:54:00.053655729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101"
Apr 16 23:54:00.054710 containerd[1628]: time="2026-04-16T23:54:00.054687309Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.056864 containerd[1628]: time="2026-04-16T23:54:00.056838828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.057563 containerd[1628]: time="2026-04-16T23:54:00.057535787Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.788128685s"
Apr 16 23:54:00.057631 containerd[1628]: time="2026-04-16T23:54:00.057620667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 16 23:54:00.058114 containerd[1628]: time="2026-04-16T23:54:00.058086247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 23:54:00.275166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 23:54:00.278451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:54:00.459750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:54:00.470600 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:54:00.500424 kubelet[2236]: E0416 23:54:00.500373 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:54:00.504993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:54:00.505161 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:54:00.505698 systemd[1]: kubelet.service: Consumed 176ms CPU time, 110.4M memory peak.
Apr 16 23:54:00.542633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547085527.mount: Deactivated successfully.
Apr 16 23:54:00.550191 containerd[1628]: time="2026-04-16T23:54:00.549745532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.550839 containerd[1628]: time="2026-04-16T23:54:00.550825552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240"
Apr 16 23:54:00.552432 containerd[1628]: time="2026-04-16T23:54:00.552418901Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.558140 containerd[1628]: time="2026-04-16T23:54:00.558124209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:00.559269 containerd[1628]: time="2026-04-16T23:54:00.559226688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 501.105061ms"
Apr 16 23:54:00.559306 containerd[1628]: time="2026-04-16T23:54:00.559278588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 16 23:54:00.559828 containerd[1628]: time="2026-04-16T23:54:00.559802928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 16 23:54:01.088273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167396013.mount: Deactivated successfully.
Apr 16 23:54:02.097685 containerd[1628]: time="2026-04-16T23:54:02.097633587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:02.098556 containerd[1628]: time="2026-04-16T23:54:02.098422777Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874917"
Apr 16 23:54:02.099452 containerd[1628]: time="2026-04-16T23:54:02.099434087Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:02.101538 containerd[1628]: time="2026-04-16T23:54:02.101514166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:02.102339 containerd[1628]: time="2026-04-16T23:54:02.102145805Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.542314687s"
Apr 16 23:54:02.102339 containerd[1628]: time="2026-04-16T23:54:02.102165285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 16 23:54:04.122394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:54:04.122582 systemd[1]: kubelet.service: Consumed 176ms CPU time, 110.4M memory peak.
Apr 16 23:54:04.124505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:54:04.151262 systemd[1]: Reload requested from client PID 2337 ('systemctl') (unit session-9.scope)...
Apr 16 23:54:04.151418 systemd[1]: Reloading... Apr 16 23:54:04.269394 zram_generator::config[2381]: No configuration found. Apr 16 23:54:04.439078 systemd[1]: Reloading finished in 287 ms. Apr 16 23:54:04.477961 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 23:54:04.478047 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 23:54:04.478290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:54:04.478340 systemd[1]: kubelet.service: Consumed 105ms CPU time, 98.2M memory peak. Apr 16 23:54:04.481465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:54:04.622920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:54:04.630802 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:54:04.681136 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:54:04.681136 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 23:54:04.681493 kubelet[2433]: I0416 23:54:04.681173 2433 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:54:05.062138 kubelet[2433]: I0416 23:54:05.062085 2433 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 23:54:05.062138 kubelet[2433]: I0416 23:54:05.062105 2433 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:54:05.062138 kubelet[2433]: I0416 23:54:05.062128 2433 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 23:54:05.062138 kubelet[2433]: I0416 23:54:05.062133 2433 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:54:05.062498 kubelet[2433]: I0416 23:54:05.062286 2433 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:54:05.069987 kubelet[2433]: E0416 23:54:05.069943 2433 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://77.42.47.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 23:54:05.074786 kubelet[2433]: I0416 23:54:05.074305 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:54:05.078047 kubelet[2433]: I0416 23:54:05.078010 2433 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:54:05.081004 kubelet[2433]: I0416 23:54:05.080984 2433 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 23:54:05.081832 kubelet[2433]: I0416 23:54:05.081795 2433 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:54:05.081961 kubelet[2433]: I0416 23:54:05.081816 2433 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-84256b4514","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:54:05.081961 kubelet[2433]: I0416 23:54:05.081925 2433 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
23:54:05.081961 kubelet[2433]: I0416 23:54:05.081932 2433 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 23:54:05.082227 kubelet[2433]: I0416 23:54:05.082019 2433 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 23:54:05.084961 kubelet[2433]: I0416 23:54:05.084908 2433 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:54:05.085137 kubelet[2433]: I0416 23:54:05.085106 2433 kubelet.go:475] "Attempting to sync node with API server" Apr 16 23:54:05.085137 kubelet[2433]: I0416 23:54:05.085125 2433 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:54:05.085235 kubelet[2433]: I0416 23:54:05.085150 2433 kubelet.go:387] "Adding apiserver pod source" Apr 16 23:54:05.085235 kubelet[2433]: I0416 23:54:05.085170 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:54:05.087589 kubelet[2433]: E0416 23:54:05.087554 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.47.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-84256b4514&limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:54:05.087682 kubelet[2433]: E0416 23:54:05.087620 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.47.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 23:54:05.089396 kubelet[2433]: I0416 23:54:05.087853 2433 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:54:05.089396 kubelet[2433]: I0416 23:54:05.088189 2433 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:54:05.089396 kubelet[2433]: I0416 23:54:05.088207 2433 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 23:54:05.089396 kubelet[2433]: W0416 23:54:05.088251 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 23:54:05.092071 kubelet[2433]: I0416 23:54:05.091542 2433 server.go:1262] "Started kubelet" Apr 16 23:54:05.095682 kubelet[2433]: I0416 23:54:05.095636 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:54:05.097264 kubelet[2433]: E0416 23:54:05.095352 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://77.42.47.3:6443/api/v1/namespaces/default/events\": dial tcp 77.42.47.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-84256b4514.18a6fb83767e7156 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-84256b4514,UID:ci-4459-2-4-n-84256b4514,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-84256b4514,},FirstTimestamp:2026-04-16 23:54:05.09152495 +0000 UTC m=+0.453105622,LastTimestamp:2026-04-16 23:54:05.09152495 +0000 UTC m=+0.453105622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-84256b4514,}" Apr 16 23:54:05.103055 kubelet[2433]: I0416 23:54:05.103017 2433 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 23:54:05.103173 kubelet[2433]: E0416 23:54:05.103157 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:05.104626 kubelet[2433]: I0416 23:54:05.104303 2433 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:54:05.105686 kubelet[2433]: I0416 23:54:05.105392 2433 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:54:05.105686 kubelet[2433]: I0416 23:54:05.105431 2433 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:54:05.107233 kubelet[2433]: I0416 23:54:05.107192 2433 server.go:310] "Adding debug handlers to kubelet server" Apr 16 23:54:05.110072 kubelet[2433]: E0416 23:54:05.110019 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.47.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-84256b4514?timeout=10s\": dial tcp 77.42.47.3:6443: connect: connection refused" interval="200ms" Apr 16 23:54:05.111903 kubelet[2433]: I0416 23:54:05.110578 2433 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:54:05.111903 kubelet[2433]: I0416 23:54:05.110711 2433 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:54:05.119332 kubelet[2433]: E0416 23:54:05.118870 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.47.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 23:54:05.119332 kubelet[2433]: I0416 23:54:05.119122 2433 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:54:05.119332 kubelet[2433]: I0416 23:54:05.119169 2433 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:54:05.119477 
kubelet[2433]: I0416 23:54:05.119467 2433 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:54:05.119728 kubelet[2433]: I0416 23:54:05.119718 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:54:05.120725 kubelet[2433]: I0416 23:54:05.120715 2433 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:54:05.125035 kubelet[2433]: I0416 23:54:05.124919 2433 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 23:54:05.127926 kubelet[2433]: I0416 23:54:05.127892 2433 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 23:54:05.127975 kubelet[2433]: I0416 23:54:05.127929 2433 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 23:54:05.127975 kubelet[2433]: I0416 23:54:05.127973 2433 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 23:54:05.128085 kubelet[2433]: E0416 23:54:05.128058 2433 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:54:05.130274 kubelet[2433]: E0416 23:54:05.130236 2433 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:54:05.130629 kubelet[2433]: E0416 23:54:05.130593 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.47.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:54:05.145086 kubelet[2433]: I0416 23:54:05.145056 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:54:05.145086 kubelet[2433]: I0416 23:54:05.145074 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:54:05.145159 kubelet[2433]: I0416 23:54:05.145105 2433 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:54:05.147188 kubelet[2433]: I0416 23:54:05.147167 2433 policy_none.go:49] "None policy: Start" Apr 16 23:54:05.147188 kubelet[2433]: I0416 23:54:05.147184 2433 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:54:05.147245 kubelet[2433]: I0416 23:54:05.147193 2433 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:54:05.148576 kubelet[2433]: I0416 23:54:05.148555 2433 policy_none.go:47] "Start" Apr 16 23:54:05.155756 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 23:54:05.166690 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 23:54:05.169577 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 23:54:05.177204 kubelet[2433]: E0416 23:54:05.177079 2433 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:54:05.177680 kubelet[2433]: I0416 23:54:05.177663 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:54:05.177714 kubelet[2433]: I0416 23:54:05.177692 2433 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:54:05.178731 kubelet[2433]: I0416 23:54:05.178721 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:54:05.179519 kubelet[2433]: E0416 23:54:05.179499 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:54:05.179559 kubelet[2433]: E0416 23:54:05.179534 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:05.247902 systemd[1]: Created slice kubepods-burstable-poda336a574033aef8c8c25cea2ab405ffd.slice - libcontainer container kubepods-burstable-poda336a574033aef8c8c25cea2ab405ffd.slice. Apr 16 23:54:05.276259 kubelet[2433]: E0416 23:54:05.276210 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.282904 kubelet[2433]: I0416 23:54:05.281918 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.283135 systemd[1]: Created slice kubepods-burstable-podd75076f2849ae4ac692cad8d2610b9e9.slice - libcontainer container kubepods-burstable-podd75076f2849ae4ac692cad8d2610b9e9.slice. 
Apr 16 23:54:05.283728 kubelet[2433]: E0416 23:54:05.283491 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.47.3:6443/api/v1/nodes\": dial tcp 77.42.47.3:6443: connect: connection refused" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.287809 kubelet[2433]: E0416 23:54:05.287769 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.292841 systemd[1]: Created slice kubepods-burstable-pod123a18964d067348943fd850169b7183.slice - libcontainer container kubepods-burstable-pod123a18964d067348943fd850169b7183.slice. Apr 16 23:54:05.295632 kubelet[2433]: E0416 23:54:05.295595 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.311616 kubelet[2433]: E0416 23:54:05.311577 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.47.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-84256b4514?timeout=10s\": dial tcp 77.42.47.3:6443: connect: connection refused" interval="400ms" Apr 16 23:54:05.406422 kubelet[2433]: I0416 23:54:05.406236 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406422 kubelet[2433]: I0416 23:54:05.406290 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-ca-certs\") pod 
\"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406422 kubelet[2433]: I0416 23:54:05.406342 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406422 kubelet[2433]: I0416 23:54:05.406366 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406422 kubelet[2433]: I0416 23:54:05.406392 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406723 kubelet[2433]: I0416 23:54:05.406414 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a336a574033aef8c8c25cea2ab405ffd-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-84256b4514\" (UID: \"a336a574033aef8c8c25cea2ab405ffd\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406723 kubelet[2433]: I0416 23:54:05.406436 2433 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406723 kubelet[2433]: I0416 23:54:05.406458 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.406723 kubelet[2433]: I0416 23:54:05.406479 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.486897 kubelet[2433]: I0416 23:54:05.486852 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.487429 kubelet[2433]: E0416 23:54:05.487376 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.47.3:6443/api/v1/nodes\": dial tcp 77.42.47.3:6443: connect: connection refused" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.582363 containerd[1628]: time="2026-04-16T23:54:05.581621796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-84256b4514,Uid:a336a574033aef8c8c25cea2ab405ffd,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:05.591609 containerd[1628]: time="2026-04-16T23:54:05.591569951Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-84256b4514,Uid:d75076f2849ae4ac692cad8d2610b9e9,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:05.598291 containerd[1628]: time="2026-04-16T23:54:05.598252699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-84256b4514,Uid:123a18964d067348943fd850169b7183,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:05.713096 kubelet[2433]: E0416 23:54:05.712908 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.47.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-84256b4514?timeout=10s\": dial tcp 77.42.47.3:6443: connect: connection refused" interval="800ms" Apr 16 23:54:05.890673 kubelet[2433]: I0416 23:54:05.890600 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:05.891134 kubelet[2433]: E0416 23:54:05.891082 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.47.3:6443/api/v1/nodes\": dial tcp 77.42.47.3:6443: connect: connection refused" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:06.094068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086649806.mount: Deactivated successfully. 
Apr 16 23:54:06.105244 containerd[1628]: time="2026-04-16T23:54:06.105110337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:54:06.108170 containerd[1628]: time="2026-04-16T23:54:06.108112416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 16 23:54:06.112373 containerd[1628]: time="2026-04-16T23:54:06.112259354Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:54:06.113617 containerd[1628]: time="2026-04-16T23:54:06.113540844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:54:06.115874 containerd[1628]: time="2026-04-16T23:54:06.115757323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:54:06.116888 containerd[1628]: time="2026-04-16T23:54:06.116824013Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:54:06.118380 containerd[1628]: time="2026-04-16T23:54:06.117736992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:54:06.119195 containerd[1628]: time="2026-04-16T23:54:06.119160892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 
23:54:06.119554 containerd[1628]: time="2026-04-16T23:54:06.119517651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 535.631506ms" Apr 16 23:54:06.123033 containerd[1628]: time="2026-04-16T23:54:06.122977590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.405922ms" Apr 16 23:54:06.147338 containerd[1628]: time="2026-04-16T23:54:06.146869090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 553.750539ms" Apr 16 23:54:06.153676 containerd[1628]: time="2026-04-16T23:54:06.153638747Z" level=info msg="connecting to shim 484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c" address="unix:///run/containerd/s/481b65552310bc0d01ed0b47ec6961d1a3ae9a6aa6358aa13e334c6c8caa26d5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:06.169544 containerd[1628]: time="2026-04-16T23:54:06.169487701Z" level=info msg="connecting to shim ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e" address="unix:///run/containerd/s/2de69cc2516af0ffaad97cd847aa9c9b17eefcf437bb2ab023f8da448c74053d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:06.184068 containerd[1628]: time="2026-04-16T23:54:06.183416505Z" level=info msg="connecting to shim 
885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b" address="unix:///run/containerd/s/55b04a2fae91568945ff364e0028d8b7d0f59c86a049fc2392af111995120d9b" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:06.189618 systemd[1]: Started cri-containerd-484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c.scope - libcontainer container 484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c. Apr 16 23:54:06.198466 systemd[1]: Started cri-containerd-ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e.scope - libcontainer container ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e. Apr 16 23:54:06.220450 systemd[1]: Started cri-containerd-885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b.scope - libcontainer container 885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b. Apr 16 23:54:06.262061 containerd[1628]: time="2026-04-16T23:54:06.262031372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-84256b4514,Uid:a336a574033aef8c8c25cea2ab405ffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c\"" Apr 16 23:54:06.269917 containerd[1628]: time="2026-04-16T23:54:06.269878869Z" level=info msg="CreateContainer within sandbox \"484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:54:06.272834 containerd[1628]: time="2026-04-16T23:54:06.272774108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-84256b4514,Uid:123a18964d067348943fd850169b7183,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e\"" Apr 16 23:54:06.274874 kubelet[2433]: E0416 23:54:06.274826 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://77.42.47.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:54:06.278951 containerd[1628]: time="2026-04-16T23:54:06.278910635Z" level=info msg="CreateContainer within sandbox \"ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:54:06.283429 containerd[1628]: time="2026-04-16T23:54:06.283355003Z" level=info msg="Container f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:06.290499 containerd[1628]: time="2026-04-16T23:54:06.290462260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-84256b4514,Uid:d75076f2849ae4ac692cad8d2610b9e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b\"" Apr 16 23:54:06.294938 containerd[1628]: time="2026-04-16T23:54:06.294882138Z" level=info msg="Container ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:06.296181 containerd[1628]: time="2026-04-16T23:54:06.296150028Z" level=info msg="CreateContainer within sandbox \"885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:54:06.297227 containerd[1628]: time="2026-04-16T23:54:06.296803968Z" level=info msg="CreateContainer within sandbox \"484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a\"" Apr 16 23:54:06.297393 containerd[1628]: time="2026-04-16T23:54:06.297332977Z" level=info msg="StartContainer 
for \"f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a\"" Apr 16 23:54:06.298453 containerd[1628]: time="2026-04-16T23:54:06.298420647Z" level=info msg="connecting to shim f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a" address="unix:///run/containerd/s/481b65552310bc0d01ed0b47ec6961d1a3ae9a6aa6358aa13e334c6c8caa26d5" protocol=ttrpc version=3 Apr 16 23:54:06.302712 containerd[1628]: time="2026-04-16T23:54:06.302673225Z" level=info msg="CreateContainer within sandbox \"ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5\"" Apr 16 23:54:06.303210 containerd[1628]: time="2026-04-16T23:54:06.303149635Z" level=info msg="StartContainer for \"ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5\"" Apr 16 23:54:06.304240 containerd[1628]: time="2026-04-16T23:54:06.304202194Z" level=info msg="connecting to shim ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5" address="unix:///run/containerd/s/2de69cc2516af0ffaad97cd847aa9c9b17eefcf437bb2ab023f8da448c74053d" protocol=ttrpc version=3 Apr 16 23:54:06.310204 containerd[1628]: time="2026-04-16T23:54:06.310150012Z" level=info msg="Container d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:06.321598 containerd[1628]: time="2026-04-16T23:54:06.321541137Z" level=info msg="CreateContainer within sandbox \"885caac0430d55b783f873d89267b274e8e79b680356e4fdd2b0751edee84b9b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5\"" Apr 16 23:54:06.323143 containerd[1628]: time="2026-04-16T23:54:06.322460917Z" level=info msg="StartContainer for \"d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5\"" Apr 16 23:54:06.322486 systemd[1]: 
Started cri-containerd-f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a.scope - libcontainer container f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a. Apr 16 23:54:06.323646 containerd[1628]: time="2026-04-16T23:54:06.323472736Z" level=info msg="connecting to shim d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5" address="unix:///run/containerd/s/55b04a2fae91568945ff364e0028d8b7d0f59c86a049fc2392af111995120d9b" protocol=ttrpc version=3 Apr 16 23:54:06.334456 systemd[1]: Started cri-containerd-ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5.scope - libcontainer container ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5. Apr 16 23:54:06.343432 systemd[1]: Started cri-containerd-d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5.scope - libcontainer container d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5. Apr 16 23:54:06.396204 containerd[1628]: time="2026-04-16T23:54:06.395397856Z" level=info msg="StartContainer for \"f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a\" returns successfully" Apr 16 23:54:06.412088 containerd[1628]: time="2026-04-16T23:54:06.412043560Z" level=info msg="StartContainer for \"ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5\" returns successfully" Apr 16 23:54:06.434463 containerd[1628]: time="2026-04-16T23:54:06.434396500Z" level=info msg="StartContainer for \"d59a193a871221d68ad55d537093a14e82b723404e819b785a38d25d11a3e8e5\" returns successfully" Apr 16 23:54:06.437172 kubelet[2433]: E0416 23:54:06.437130 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.47.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-84256b4514&limit=500&resourceVersion=0\": dial tcp 77.42.47.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:54:06.694077 kubelet[2433]: I0416 
23:54:06.693752 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.162769 kubelet[2433]: E0416 23:54:07.162733 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.166971 kubelet[2433]: E0416 23:54:07.166767 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.167995 kubelet[2433]: E0416 23:54:07.167971 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.291899 kubelet[2433]: E0416 23:54:07.291852 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.449349 kubelet[2433]: I0416 23:54:07.449178 2433 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:07.449349 kubelet[2433]: E0416 23:54:07.449216 2433 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-84256b4514\": node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.459757 kubelet[2433]: E0416 23:54:07.459721 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.560444 kubelet[2433]: E0416 23:54:07.560361 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.660988 kubelet[2433]: E0416 23:54:07.660923 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.762129 kubelet[2433]: E0416 23:54:07.761926 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.862960 kubelet[2433]: E0416 23:54:07.862881 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:07.963130 kubelet[2433]: E0416 23:54:07.963066 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.064153 kubelet[2433]: E0416 23:54:08.064082 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.164302 kubelet[2433]: E0416 23:54:08.164240 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.171708 kubelet[2433]: E0416 23:54:08.171651 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:08.172658 kubelet[2433]: E0416 23:54:08.172593 2433 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-84256b4514\" not found" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:08.265367 kubelet[2433]: E0416 23:54:08.265276 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.366408 kubelet[2433]: E0416 23:54:08.366215 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.466683 kubelet[2433]: E0416 23:54:08.466622 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.566871 kubelet[2433]: E0416 23:54:08.566793 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.667707 kubelet[2433]: E0416 23:54:08.667444 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.768182 kubelet[2433]: E0416 23:54:08.767995 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.869020 kubelet[2433]: E0416 23:54:08.868977 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:08.970590 kubelet[2433]: E0416 23:54:08.970432 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:09.071433 kubelet[2433]: E0416 23:54:09.071350 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:09.171538 kubelet[2433]: E0416 23:54:09.171477 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-84256b4514\" not found" Apr 16 23:54:09.204468 kubelet[2433]: I0416 23:54:09.204392 2433 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:09.214971 kubelet[2433]: I0416 23:54:09.214629 2433 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:09.223308 kubelet[2433]: I0416 23:54:09.220687 2433 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:09.605347 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit 
session-9.scope)... Apr 16 23:54:09.605373 systemd[1]: Reloading... Apr 16 23:54:09.719389 zram_generator::config[2762]: No configuration found. Apr 16 23:54:09.893417 systemd[1]: Reloading finished in 287 ms. Apr 16 23:54:09.921388 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:54:09.937871 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:54:09.938111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:54:09.938161 systemd[1]: kubelet.service: Consumed 807ms CPU time, 124.5M memory peak. Apr 16 23:54:09.940134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:54:10.096502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:54:10.104614 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:54:10.136576 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:54:10.136576 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 23:54:10.136890 kubelet[2815]: I0416 23:54:10.136580 2815 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:54:10.141634 kubelet[2815]: I0416 23:54:10.141587 2815 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 23:54:10.141634 kubelet[2815]: I0416 23:54:10.141600 2815 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:54:10.141634 kubelet[2815]: I0416 23:54:10.141621 2815 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 23:54:10.141634 kubelet[2815]: I0416 23:54:10.141626 2815 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:54:10.141992 kubelet[2815]: I0416 23:54:10.141734 2815 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:54:10.142611 kubelet[2815]: I0416 23:54:10.142553 2815 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 23:54:10.144341 kubelet[2815]: I0416 23:54:10.143967 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:54:10.146795 kubelet[2815]: I0416 23:54:10.146774 2815 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:54:10.150252 kubelet[2815]: I0416 23:54:10.150121 2815 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 23:54:10.150352 kubelet[2815]: I0416 23:54:10.150322 2815 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:54:10.150442 kubelet[2815]: I0416 23:54:10.150345 2815 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-84256b4514","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:54:10.150442 kubelet[2815]: I0416 23:54:10.150436 2815 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
23:54:10.150442 kubelet[2815]: I0416 23:54:10.150442 2815 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 23:54:10.150545 kubelet[2815]: I0416 23:54:10.150459 2815 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 23:54:10.150604 kubelet[2815]: I0416 23:54:10.150589 2815 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:54:10.151093 kubelet[2815]: I0416 23:54:10.150716 2815 kubelet.go:475] "Attempting to sync node with API server" Apr 16 23:54:10.151093 kubelet[2815]: I0416 23:54:10.150729 2815 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:54:10.151093 kubelet[2815]: I0416 23:54:10.150744 2815 kubelet.go:387] "Adding apiserver pod source" Apr 16 23:54:10.151093 kubelet[2815]: I0416 23:54:10.150752 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:54:10.154982 kubelet[2815]: I0416 23:54:10.154943 2815 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:54:10.155432 kubelet[2815]: I0416 23:54:10.155411 2815 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:54:10.155497 kubelet[2815]: I0416 23:54:10.155487 2815 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 23:54:10.159893 kubelet[2815]: I0416 23:54:10.159746 2815 server.go:1262] "Started kubelet" Apr 16 23:54:10.161308 kubelet[2815]: I0416 23:54:10.161297 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:54:10.163401 kubelet[2815]: I0416 23:54:10.163364 2815 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:54:10.163982 kubelet[2815]: I0416 23:54:10.163967 2815 server.go:310] "Adding debug handlers to 
kubelet server" Apr 16 23:54:10.167720 kubelet[2815]: I0416 23:54:10.167629 2815 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:54:10.167720 kubelet[2815]: I0416 23:54:10.167676 2815 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:54:10.167830 kubelet[2815]: I0416 23:54:10.167802 2815 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:54:10.172494 kubelet[2815]: I0416 23:54:10.172188 2815 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 23:54:10.172494 kubelet[2815]: I0416 23:54:10.172294 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:54:10.174343 kubelet[2815]: I0416 23:54:10.174332 2815 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:54:10.174493 kubelet[2815]: I0416 23:54:10.174485 2815 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:54:10.176712 kubelet[2815]: I0416 23:54:10.176624 2815 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:54:10.176712 kubelet[2815]: I0416 23:54:10.176695 2815 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:54:10.178038 kubelet[2815]: I0416 23:54:10.177992 2815 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:54:10.179428 kubelet[2815]: E0416 23:54:10.179353 2815 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:54:10.193847 kubelet[2815]: I0416 23:54:10.193822 2815 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 16 23:54:10.195332 kubelet[2815]: I0416 23:54:10.195111 2815 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 23:54:10.195332 kubelet[2815]: I0416 23:54:10.195124 2815 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 23:54:10.195332 kubelet[2815]: I0416 23:54:10.195142 2815 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 23:54:10.195332 kubelet[2815]: E0416 23:54:10.195174 2815 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:54:10.227921 kubelet[2815]: I0416 23:54:10.227900 2815 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:54:10.228067 kubelet[2815]: I0416 23:54:10.228058 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:54:10.228106 kubelet[2815]: I0416 23:54:10.228100 2815 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:54:10.228233 kubelet[2815]: I0416 23:54:10.228224 2815 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 23:54:10.228276 kubelet[2815]: I0416 23:54:10.228263 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 23:54:10.228305 kubelet[2815]: I0416 23:54:10.228300 2815 policy_none.go:49] "None policy: Start" Apr 16 23:54:10.228925 kubelet[2815]: I0416 23:54:10.228363 2815 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:54:10.228925 kubelet[2815]: I0416 23:54:10.228374 2815 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:54:10.228925 kubelet[2815]: I0416 23:54:10.228441 2815 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 23:54:10.228925 kubelet[2815]: I0416 23:54:10.228447 2815 policy_none.go:47] "Start" Apr 16 23:54:10.232608 kubelet[2815]: E0416 23:54:10.232593 2815 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:54:10.233428 kubelet[2815]: I0416 23:54:10.233420 2815 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:54:10.233510 kubelet[2815]: I0416 23:54:10.233476 2815 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:54:10.233745 kubelet[2815]: I0416 23:54:10.233721 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:54:10.236108 kubelet[2815]: E0416 23:54:10.236094 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:54:10.296459 kubelet[2815]: I0416 23:54:10.296430 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.296707 kubelet[2815]: I0416 23:54:10.296469 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.296826 kubelet[2815]: I0416 23:54:10.296526 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.304885 kubelet[2815]: E0416 23:54:10.304839 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-84256b4514\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.305377 kubelet[2815]: E0416 23:54:10.305343 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-84256b4514\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.305377 kubelet[2815]: E0416 23:54:10.305361 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" already exists" 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.339781 kubelet[2815]: I0416 23:54:10.339716 2815 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.348403 kubelet[2815]: I0416 23:54:10.348350 2815 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.348523 kubelet[2815]: I0416 23:54:10.348443 2815 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476045 kubelet[2815]: I0416 23:54:10.475924 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476045 kubelet[2815]: I0416 23:54:10.475987 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476045 kubelet[2815]: I0416 23:54:10.476011 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476045 kubelet[2815]: I0416 23:54:10.476034 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/123a18964d067348943fd850169b7183-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476286 kubelet[2815]: I0416 23:54:10.476059 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476286 kubelet[2815]: I0416 23:54:10.476084 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476286 kubelet[2815]: I0416 23:54:10.476107 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a336a574033aef8c8c25cea2ab405ffd-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-84256b4514\" (UID: \"a336a574033aef8c8c25cea2ab405ffd\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:10.476286 kubelet[2815]: I0416 23:54:10.476128 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d75076f2849ae4ac692cad8d2610b9e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-84256b4514\" (UID: \"d75076f2849ae4ac692cad8d2610b9e9\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 
23:54:10.476286 kubelet[2815]: I0416 23:54:10.476149 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/123a18964d067348943fd850169b7183-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-84256b4514\" (UID: \"123a18964d067348943fd850169b7183\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" Apr 16 23:54:11.152848 kubelet[2815]: I0416 23:54:11.152777 2815 apiserver.go:52] "Watching apiserver" Apr 16 23:54:11.175503 kubelet[2815]: I0416 23:54:11.175442 2815 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:54:11.217989 kubelet[2815]: I0416 23:54:11.217676 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:11.218860 kubelet[2815]: I0416 23:54:11.218664 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:11.245000 kubelet[2815]: E0416 23:54:11.242765 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-84256b4514\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" Apr 16 23:54:11.248371 kubelet[2815]: E0416 23:54:11.243344 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-84256b4514\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" Apr 16 23:54:11.266564 kubelet[2815]: I0416 23:54:11.266500 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-84256b4514" podStartSLOduration=2.2664690370000002 podStartE2EDuration="2.266469037s" podCreationTimestamp="2026-04-16 23:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 
23:54:11.265866797 +0000 UTC m=+1.158310908" watchObservedRunningTime="2026-04-16 23:54:11.266469037 +0000 UTC m=+1.158913148" Apr 16 23:54:11.277331 kubelet[2815]: I0416 23:54:11.276889 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-84256b4514" podStartSLOduration=2.276867473 podStartE2EDuration="2.276867473s" podCreationTimestamp="2026-04-16 23:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:54:11.276582913 +0000 UTC m=+1.169027074" watchObservedRunningTime="2026-04-16 23:54:11.276867473 +0000 UTC m=+1.169311614" Apr 16 23:54:11.296248 kubelet[2815]: I0416 23:54:11.296190 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-84256b4514" podStartSLOduration=2.296177654 podStartE2EDuration="2.296177654s" podCreationTimestamp="2026-04-16 23:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:54:11.287268438 +0000 UTC m=+1.179712559" watchObservedRunningTime="2026-04-16 23:54:11.296177654 +0000 UTC m=+1.188621765" Apr 16 23:54:15.779085 kubelet[2815]: I0416 23:54:15.779047 2815 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 23:54:15.779979 containerd[1628]: time="2026-04-16T23:54:15.779897156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 23:54:15.780548 kubelet[2815]: I0416 23:54:15.780288 2815 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 23:54:16.745591 systemd[1]: Created slice kubepods-besteffort-podc4e001ba_9840_4658_89d8_ce7e12b35f0a.slice - libcontainer container kubepods-besteffort-podc4e001ba_9840_4658_89d8_ce7e12b35f0a.slice. 
Apr 16 23:54:16.813539 kubelet[2815]: I0416 23:54:16.813489 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4e001ba-9840-4658-89d8-ce7e12b35f0a-lib-modules\") pod \"kube-proxy-v26ck\" (UID: \"c4e001ba-9840-4658-89d8-ce7e12b35f0a\") " pod="kube-system/kube-proxy-v26ck" Apr 16 23:54:16.813898 kubelet[2815]: I0416 23:54:16.813555 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4e001ba-9840-4658-89d8-ce7e12b35f0a-kube-proxy\") pod \"kube-proxy-v26ck\" (UID: \"c4e001ba-9840-4658-89d8-ce7e12b35f0a\") " pod="kube-system/kube-proxy-v26ck" Apr 16 23:54:16.813898 kubelet[2815]: I0416 23:54:16.813585 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4e001ba-9840-4658-89d8-ce7e12b35f0a-xtables-lock\") pod \"kube-proxy-v26ck\" (UID: \"c4e001ba-9840-4658-89d8-ce7e12b35f0a\") " pod="kube-system/kube-proxy-v26ck" Apr 16 23:54:16.813898 kubelet[2815]: I0416 23:54:16.813598 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shg7b\" (UniqueName: \"kubernetes.io/projected/c4e001ba-9840-4658-89d8-ce7e12b35f0a-kube-api-access-shg7b\") pod \"kube-proxy-v26ck\" (UID: \"c4e001ba-9840-4658-89d8-ce7e12b35f0a\") " pod="kube-system/kube-proxy-v26ck" Apr 16 23:54:16.996232 systemd[1]: Created slice kubepods-besteffort-pod606c6a73_47f4_404b_bb3f_a0206049e643.slice - libcontainer container kubepods-besteffort-pod606c6a73_47f4_404b_bb3f_a0206049e643.slice. 
Apr 16 23:54:17.015364 kubelet[2815]: I0416 23:54:17.015221 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6672v\" (UniqueName: \"kubernetes.io/projected/606c6a73-47f4-404b-bb3f-a0206049e643-kube-api-access-6672v\") pod \"tigera-operator-5588576f44-v6tqt\" (UID: \"606c6a73-47f4-404b-bb3f-a0206049e643\") " pod="tigera-operator/tigera-operator-5588576f44-v6tqt" Apr 16 23:54:17.015364 kubelet[2815]: I0416 23:54:17.015352 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/606c6a73-47f4-404b-bb3f-a0206049e643-var-lib-calico\") pod \"tigera-operator-5588576f44-v6tqt\" (UID: \"606c6a73-47f4-404b-bb3f-a0206049e643\") " pod="tigera-operator/tigera-operator-5588576f44-v6tqt" Apr 16 23:54:17.057677 containerd[1628]: time="2026-04-16T23:54:17.057616334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v26ck,Uid:c4e001ba-9840-4658-89d8-ce7e12b35f0a,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:17.084181 containerd[1628]: time="2026-04-16T23:54:17.084112773Z" level=info msg="connecting to shim 426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854" address="unix:///run/containerd/s/238890f82c1891ed759251e01299711b6ef8fe09ce16860b5be0b6faf10d4d95" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:17.110441 systemd[1]: Started cri-containerd-426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854.scope - libcontainer container 426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854. 
Apr 16 23:54:17.138589 containerd[1628]: time="2026-04-16T23:54:17.138560560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v26ck,Uid:c4e001ba-9840-4658-89d8-ce7e12b35f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854\""
Apr 16 23:54:17.143362 containerd[1628]: time="2026-04-16T23:54:17.143339628Z" level=info msg="CreateContainer within sandbox \"426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 23:54:17.152733 containerd[1628]: time="2026-04-16T23:54:17.152712034Z" level=info msg="Container cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:54:17.157593 containerd[1628]: time="2026-04-16T23:54:17.157564102Z" level=info msg="CreateContainer within sandbox \"426c724c72ff948c367dd094d78a431f46adedd826107aa9d7214a5215343854\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b\""
Apr 16 23:54:17.158824 containerd[1628]: time="2026-04-16T23:54:17.158803592Z" level=info msg="StartContainer for \"cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b\""
Apr 16 23:54:17.159780 containerd[1628]: time="2026-04-16T23:54:17.159741401Z" level=info msg="connecting to shim cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b" address="unix:///run/containerd/s/238890f82c1891ed759251e01299711b6ef8fe09ce16860b5be0b6faf10d4d95" protocol=ttrpc version=3
Apr 16 23:54:17.176426 systemd[1]: Started cri-containerd-cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b.scope - libcontainer container cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b.
Apr 16 23:54:17.227826 containerd[1628]: time="2026-04-16T23:54:17.227306933Z" level=info msg="StartContainer for \"cf7ceadffd7d0ac8f53397d6c313c187796ec5afa31bfbce9da7a87ea3556a4b\" returns successfully"
Apr 16 23:54:17.242413 kubelet[2815]: I0416 23:54:17.242354 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v26ck" podStartSLOduration=1.242338327 podStartE2EDuration="1.242338327s" podCreationTimestamp="2026-04-16 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:54:17.241660517 +0000 UTC m=+7.134104628" watchObservedRunningTime="2026-04-16 23:54:17.242338327 +0000 UTC m=+7.134782448"
Apr 16 23:54:17.301644 containerd[1628]: time="2026-04-16T23:54:17.301584572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-v6tqt,Uid:606c6a73-47f4-404b-bb3f-a0206049e643,Namespace:tigera-operator,Attempt:0,}"
Apr 16 23:54:17.317906 containerd[1628]: time="2026-04-16T23:54:17.317824545Z" level=info msg="connecting to shim fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7" address="unix:///run/containerd/s/84aa3272d1e44a2a99fea56e17357720bba755f1d0f5afdf3b0614bc9b9f60d1" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:54:17.338449 systemd[1]: Started cri-containerd-fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7.scope - libcontainer container fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7.
Apr 16 23:54:17.379537 containerd[1628]: time="2026-04-16T23:54:17.379500840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-v6tqt,Uid:606c6a73-47f4-404b-bb3f-a0206049e643,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7\""
Apr 16 23:54:17.381420 containerd[1628]: time="2026-04-16T23:54:17.381232139Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 23:54:17.936036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1335045715.mount: Deactivated successfully.
Apr 16 23:54:19.113643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246742540.mount: Deactivated successfully.
Apr 16 23:54:19.817873 containerd[1628]: time="2026-04-16T23:54:19.817820574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:19.819198 containerd[1628]: time="2026-04-16T23:54:19.819091233Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 16 23:54:19.820001 containerd[1628]: time="2026-04-16T23:54:19.819969913Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:19.821693 containerd[1628]: time="2026-04-16T23:54:19.821666922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:54:19.822157 containerd[1628]: time="2026-04-16T23:54:19.822131622Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.440875973s"
Apr 16 23:54:19.822211 containerd[1628]: time="2026-04-16T23:54:19.822200962Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 16 23:54:19.826276 containerd[1628]: time="2026-04-16T23:54:19.826074840Z" level=info msg="CreateContainer within sandbox \"fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 23:54:19.833827 containerd[1628]: time="2026-04-16T23:54:19.833466977Z" level=info msg="Container d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:54:19.835109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135095680.mount: Deactivated successfully.
Apr 16 23:54:19.841028 containerd[1628]: time="2026-04-16T23:54:19.840957594Z" level=info msg="CreateContainer within sandbox \"fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\""
Apr 16 23:54:19.842813 containerd[1628]: time="2026-04-16T23:54:19.841400224Z" level=info msg="StartContainer for \"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\""
Apr 16 23:54:19.842813 containerd[1628]: time="2026-04-16T23:54:19.842023344Z" level=info msg="connecting to shim d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996" address="unix:///run/containerd/s/84aa3272d1e44a2a99fea56e17357720bba755f1d0f5afdf3b0614bc9b9f60d1" protocol=ttrpc version=3
Apr 16 23:54:19.863439 systemd[1]: Started cri-containerd-d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996.scope - libcontainer container d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996.
Apr 16 23:54:19.891653 containerd[1628]: time="2026-04-16T23:54:19.891585403Z" level=info msg="StartContainer for \"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\" returns successfully"
Apr 16 23:54:20.253895 kubelet[2815]: I0416 23:54:20.253711 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-v6tqt" podStartSLOduration=1.811689109 podStartE2EDuration="4.253695842s" podCreationTimestamp="2026-04-16 23:54:16 +0000 UTC" firstStartedPulling="2026-04-16 23:54:17.380940469 +0000 UTC m=+7.273384580" lastFinishedPulling="2026-04-16 23:54:19.822947202 +0000 UTC m=+9.715391313" observedRunningTime="2026-04-16 23:54:20.253455392 +0000 UTC m=+10.145899513" watchObservedRunningTime="2026-04-16 23:54:20.253695842 +0000 UTC m=+10.146139953"
Apr 16 23:54:22.392882 update_engine[1602]: I20260416 23:54:22.392350 1602 update_attempter.cc:509] Updating boot flags...
Apr 16 23:54:24.921214 sudo[1875]: pam_unix(sudo:session): session closed for user root
Apr 16 23:54:24.952339 sshd[1874]: Connection closed by 4.175.71.9 port 51952
Apr 16 23:54:24.952845 sshd-session[1871]: pam_unix(sshd:session): session closed for user core
Apr 16 23:54:24.959191 systemd[1]: sshd@8-77.42.47.3:22-4.175.71.9:51952.service: Deactivated successfully.
Apr 16 23:54:24.959681 systemd-logind[1601]: Session 9 logged out. Waiting for processes to exit.
Apr 16 23:54:24.963057 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 23:54:24.963774 systemd[1]: session-9.scope: Consumed 3.759s CPU time, 228.9M memory peak.
Apr 16 23:54:24.968659 systemd-logind[1601]: Removed session 9.
Apr 16 23:54:26.721161 systemd[1]: Created slice kubepods-besteffort-pod64bf1381_8b48_452d_b0b5_6d3f94204876.slice - libcontainer container kubepods-besteffort-pod64bf1381_8b48_452d_b0b5_6d3f94204876.slice.
Apr 16 23:54:26.771516 kubelet[2815]: I0416 23:54:26.771474 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9nn\" (UniqueName: \"kubernetes.io/projected/64bf1381-8b48-452d-b0b5-6d3f94204876-kube-api-access-dq9nn\") pod \"calico-typha-5488f574cb-6zvpb\" (UID: \"64bf1381-8b48-452d-b0b5-6d3f94204876\") " pod="calico-system/calico-typha-5488f574cb-6zvpb"
Apr 16 23:54:26.771991 kubelet[2815]: I0416 23:54:26.771915 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bf1381-8b48-452d-b0b5-6d3f94204876-tigera-ca-bundle\") pod \"calico-typha-5488f574cb-6zvpb\" (UID: \"64bf1381-8b48-452d-b0b5-6d3f94204876\") " pod="calico-system/calico-typha-5488f574cb-6zvpb"
Apr 16 23:54:26.771991 kubelet[2815]: I0416 23:54:26.771940 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/64bf1381-8b48-452d-b0b5-6d3f94204876-typha-certs\") pod \"calico-typha-5488f574cb-6zvpb\" (UID: \"64bf1381-8b48-452d-b0b5-6d3f94204876\") " pod="calico-system/calico-typha-5488f574cb-6zvpb"
Apr 16 23:54:26.793177 systemd[1]: Created slice kubepods-besteffort-pod98372185_bb89_481e_a138_587c49c37831.slice - libcontainer container kubepods-besteffort-pod98372185_bb89_481e_a138_587c49c37831.slice.
Apr 16 23:54:26.901192 kubelet[2815]: E0416 23:54:26.901149 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4"
Apr 16 23:54:26.973405 kubelet[2815]: I0416 23:54:26.972971 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-lib-modules\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.973792 kubelet[2815]: I0416 23:54:26.973673 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-xtables-lock\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974013 kubelet[2815]: I0416 23:54:26.973696 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-cni-log-dir\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974013 kubelet[2815]: I0416 23:54:26.973866 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-var-lib-calico\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974013 kubelet[2815]: I0416 23:54:26.973878 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsnh\" (UniqueName: \"kubernetes.io/projected/98372185-bb89-481e-a138-587c49c37831-kube-api-access-kcsnh\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974013 kubelet[2815]: I0416 23:54:26.973893 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-bpffs\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974286 kubelet[2815]: I0416 23:54:26.974229 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-var-run-calico\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974286 kubelet[2815]: I0416 23:54:26.974244 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-cni-bin-dir\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974286 kubelet[2815]: I0416 23:54:26.974257 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-policysync\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974286 kubelet[2815]: I0416 23:54:26.974268 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98372185-bb89-481e-a138-587c49c37831-node-certs\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974797 kubelet[2815]: I0416 23:54:26.974716 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-nodeproc\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974797 kubelet[2815]: I0416 23:54:26.974733 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-cni-net-dir\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974797 kubelet[2815]: I0416 23:54:26.974741 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-flexvol-driver-host\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.974797 kubelet[2815]: I0416 23:54:26.974752 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/98372185-bb89-481e-a138-587c49c37831-sys-fs\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:26.975003 kubelet[2815]: I0416 23:54:26.974883 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98372185-bb89-481e-a138-587c49c37831-tigera-ca-bundle\") pod \"calico-node-5crjp\" (UID: \"98372185-bb89-481e-a138-587c49c37831\") " pod="calico-system/calico-node-5crjp"
Apr 16 23:54:27.026917 containerd[1628]: time="2026-04-16T23:54:27.026863343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5488f574cb-6zvpb,Uid:64bf1381-8b48-452d-b0b5-6d3f94204876,Namespace:calico-system,Attempt:0,}"
Apr 16 23:54:27.045523 containerd[1628]: time="2026-04-16T23:54:27.045368566Z" level=info msg="connecting to shim 10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40" address="unix:///run/containerd/s/cbbe0d3ac33cd0f3754ed95b6174b35eb5129ebb67acfb54d4c78750104550a3" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:54:27.064425 systemd[1]: Started cri-containerd-10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40.scope - libcontainer container 10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40.
Apr 16 23:54:27.075505 kubelet[2815]: I0416 23:54:27.075473 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99b3436d-dcd4-48c2-849e-f8b1845df2b4-registration-dir\") pod \"csi-node-driver-tscd9\" (UID: \"99b3436d-dcd4-48c2-849e-f8b1845df2b4\") " pod="calico-system/csi-node-driver-tscd9"
Apr 16 23:54:27.075617 kubelet[2815]: I0416 23:54:27.075503 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99b3436d-dcd4-48c2-849e-f8b1845df2b4-varrun\") pod \"csi-node-driver-tscd9\" (UID: \"99b3436d-dcd4-48c2-849e-f8b1845df2b4\") " pod="calico-system/csi-node-driver-tscd9"
Apr 16 23:54:27.075617 kubelet[2815]: I0416 23:54:27.075540 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxf6\" (UniqueName: \"kubernetes.io/projected/99b3436d-dcd4-48c2-849e-f8b1845df2b4-kube-api-access-8gxf6\") pod \"csi-node-driver-tscd9\" (UID: \"99b3436d-dcd4-48c2-849e-f8b1845df2b4\") " pod="calico-system/csi-node-driver-tscd9"
Apr 16 23:54:27.075617 kubelet[2815]: I0416 23:54:27.075602 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99b3436d-dcd4-48c2-849e-f8b1845df2b4-kubelet-dir\") pod \"csi-node-driver-tscd9\" (UID: \"99b3436d-dcd4-48c2-849e-f8b1845df2b4\") " pod="calico-system/csi-node-driver-tscd9"
Apr 16 23:54:27.075617 kubelet[2815]: I0416 23:54:27.075612 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99b3436d-dcd4-48c2-849e-f8b1845df2b4-socket-dir\") pod \"csi-node-driver-tscd9\" (UID: \"99b3436d-dcd4-48c2-849e-f8b1845df2b4\") " pod="calico-system/csi-node-driver-tscd9"
Apr 16 23:54:27.081875 kubelet[2815]: E0416 23:54:27.081852 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.081875 kubelet[2815]: W0416 23:54:27.081869 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.081998 kubelet[2815]: E0416 23:54:27.081898 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.089178 kubelet[2815]: E0416 23:54:27.089129 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.089178 kubelet[2815]: W0416 23:54:27.089141 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.089178 kubelet[2815]: E0416 23:54:27.089154 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.099137 containerd[1628]: time="2026-04-16T23:54:27.099109685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5crjp,Uid:98372185-bb89-481e-a138-587c49c37831,Namespace:calico-system,Attempt:0,}"
Apr 16 23:54:27.112774 containerd[1628]: time="2026-04-16T23:54:27.112745360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5488f574cb-6zvpb,Uid:64bf1381-8b48-452d-b0b5-6d3f94204876,Namespace:calico-system,Attempt:0,} returns sandbox id \"10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40\""
Apr 16 23:54:27.114342 containerd[1628]: time="2026-04-16T23:54:27.114296684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 23:54:27.122303 containerd[1628]: time="2026-04-16T23:54:27.122084675Z" level=info msg="connecting to shim 55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f" address="unix:///run/containerd/s/b2516eacac79bff9c202a7c59ea65d905b6ed65eeda7e6e4e35f07081388235d" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:54:27.143441 systemd[1]: Started cri-containerd-55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f.scope - libcontainer container 55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f.
Apr 16 23:54:27.167112 containerd[1628]: time="2026-04-16T23:54:27.167082745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5crjp,Uid:98372185-bb89-481e-a138-587c49c37831,Namespace:calico-system,Attempt:0,} returns sandbox id \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\""
Apr 16 23:54:27.176276 kubelet[2815]: E0416 23:54:27.176243 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.176276 kubelet[2815]: W0416 23:54:27.176267 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.176402 kubelet[2815]: E0416 23:54:27.176282 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.176533 kubelet[2815]: E0416 23:54:27.176518 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.176533 kubelet[2815]: W0416 23:54:27.176529 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.176569 kubelet[2815]: E0416 23:54:27.176536 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.176766 kubelet[2815]: E0416 23:54:27.176751 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.176766 kubelet[2815]: W0416 23:54:27.176762 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.176803 kubelet[2815]: E0416 23:54:27.176771 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.176987 kubelet[2815]: E0416 23:54:27.176973 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.176987 kubelet[2815]: W0416 23:54:27.176982 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.177023 kubelet[2815]: E0416 23:54:27.176988 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.177219 kubelet[2815]: E0416 23:54:27.177205 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.177219 kubelet[2815]: W0416 23:54:27.177215 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.177258 kubelet[2815]: E0416 23:54:27.177230 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.177492 kubelet[2815]: E0416 23:54:27.177478 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.177492 kubelet[2815]: W0416 23:54:27.177487 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.177530 kubelet[2815]: E0416 23:54:27.177493 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.177697 kubelet[2815]: E0416 23:54:27.177684 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.177697 kubelet[2815]: W0416 23:54:27.177692 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.177733 kubelet[2815]: E0416 23:54:27.177699 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.177908 kubelet[2815]: E0416 23:54:27.177893 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.177938 kubelet[2815]: W0416 23:54:27.177928 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.177963 kubelet[2815]: E0416 23:54:27.177939 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.178249 kubelet[2815]: E0416 23:54:27.178234 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.178249 kubelet[2815]: W0416 23:54:27.178244 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.178282 kubelet[2815]: E0416 23:54:27.178251 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.178471 kubelet[2815]: E0416 23:54:27.178457 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.178471 kubelet[2815]: W0416 23:54:27.178466 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.178507 kubelet[2815]: E0416 23:54:27.178472 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.178694 kubelet[2815]: E0416 23:54:27.178681 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.178694 kubelet[2815]: W0416 23:54:27.178689 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.178727 kubelet[2815]: E0416 23:54:27.178695 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.178909 kubelet[2815]: E0416 23:54:27.178895 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.178909 kubelet[2815]: W0416 23:54:27.178903 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.178954 kubelet[2815]: E0416 23:54:27.178910 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.179115 kubelet[2815]: E0416 23:54:27.179101 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.179115 kubelet[2815]: W0416 23:54:27.179110 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.179150 kubelet[2815]: E0416 23:54:27.179116 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.179308 kubelet[2815]: E0416 23:54:27.179294 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.179308 kubelet[2815]: W0416 23:54:27.179303 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.179370 kubelet[2815]: E0416 23:54:27.179332 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.179575 kubelet[2815]: E0416 23:54:27.179560 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.179575 kubelet[2815]: W0416 23:54:27.179570 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.179609 kubelet[2815]: E0416 23:54:27.179578 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.179831 kubelet[2815]: E0416 23:54:27.179816 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.179831 kubelet[2815]: W0416 23:54:27.179826 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.179873 kubelet[2815]: E0416 23:54:27.179850 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.180156 kubelet[2815]: E0416 23:54:27.180141 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.180156 kubelet[2815]: W0416 23:54:27.180150 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.180197 kubelet[2815]: E0416 23:54:27.180157 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.180580 kubelet[2815]: E0416 23:54:27.180536 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.180580 kubelet[2815]: W0416 23:54:27.180545 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.180580 kubelet[2815]: E0416 23:54:27.180552 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.181263 kubelet[2815]: E0416 23:54:27.181238 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.181263 kubelet[2815]: W0416 23:54:27.181254 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.181263 kubelet[2815]: E0416 23:54:27.181262 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:54:27.181500 kubelet[2815]: E0416 23:54:27.181484 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:54:27.181500 kubelet[2815]: W0416 23:54:27.181497 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:54:27.181541 kubelet[2815]: E0416 23:54:27.181504 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 16 23:54:27.182115 kubelet[2815]: E0416 23:54:27.181707 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.182115 kubelet[2815]: W0416 23:54:27.181716 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.182115 kubelet[2815]: E0416 23:54:27.181723 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:27.182115 kubelet[2815]: E0416 23:54:27.181900 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.182115 kubelet[2815]: W0416 23:54:27.181905 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.182115 kubelet[2815]: E0416 23:54:27.181911 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:27.182375 kubelet[2815]: E0416 23:54:27.182358 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.182375 kubelet[2815]: W0416 23:54:27.182370 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.182419 kubelet[2815]: E0416 23:54:27.182378 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:27.182770 kubelet[2815]: E0416 23:54:27.182752 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.182770 kubelet[2815]: W0416 23:54:27.182763 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.182770 kubelet[2815]: E0416 23:54:27.182770 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:27.182992 kubelet[2815]: E0416 23:54:27.182980 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.182992 kubelet[2815]: W0416 23:54:27.182990 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.183026 kubelet[2815]: E0416 23:54:27.182996 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:27.187006 kubelet[2815]: E0416 23:54:27.186988 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:27.187006 kubelet[2815]: W0416 23:54:27.186998 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:27.187006 kubelet[2815]: E0416 23:54:27.187006 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:28.968470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635625796.mount: Deactivated successfully. 
Apr 16 23:54:29.196251 kubelet[2815]: E0416 23:54:29.196167 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:29.755139 containerd[1628]: time="2026-04-16T23:54:29.755091611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:29.756330 containerd[1628]: time="2026-04-16T23:54:29.756299297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 23:54:29.757653 containerd[1628]: time="2026-04-16T23:54:29.757540401Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:29.759384 containerd[1628]: time="2026-04-16T23:54:29.759354024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:29.759989 containerd[1628]: time="2026-04-16T23:54:29.759710197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.645286486s" Apr 16 23:54:29.759989 containerd[1628]: time="2026-04-16T23:54:29.759734177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 23:54:29.760507 containerd[1628]: time="2026-04-16T23:54:29.760496252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 23:54:29.767801 containerd[1628]: time="2026-04-16T23:54:29.767777733Z" level=info msg="CreateContainer within sandbox \"10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 23:54:29.777029 containerd[1628]: time="2026-04-16T23:54:29.776485056Z" level=info msg="Container 2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:29.778768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729551816.mount: Deactivated successfully. Apr 16 23:54:29.787651 containerd[1628]: time="2026-04-16T23:54:29.787627770Z" level=info msg="CreateContainer within sandbox \"10b5b64b814999e690a006c7f4287bfdf2a3b241241aef12a8426c80a882df40\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0\"" Apr 16 23:54:29.788195 containerd[1628]: time="2026-04-16T23:54:29.788170888Z" level=info msg="StartContainer for \"2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0\"" Apr 16 23:54:29.789537 containerd[1628]: time="2026-04-16T23:54:29.789517831Z" level=info msg="connecting to shim 2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0" address="unix:///run/containerd/s/cbbe0d3ac33cd0f3754ed95b6174b35eb5129ebb67acfb54d4c78750104550a3" protocol=ttrpc version=3 Apr 16 23:54:29.804412 systemd[1]: Started cri-containerd-2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0.scope - libcontainer container 2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0. 
Apr 16 23:54:29.843275 containerd[1628]: time="2026-04-16T23:54:29.843147110Z" level=info msg="StartContainer for \"2ab88b1fa822fffaa62d28b7707eb392a1eaf56ffcfe86560d405fdfe0f989c0\" returns successfully" Apr 16 23:54:30.295803 kubelet[2815]: E0416 23:54:30.295639 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:30.295803 kubelet[2815]: W0416 23:54:30.295782 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:30.295803 kubelet[2815]: E0416 23:54:30.295803 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:30.296771 kubelet[2815]: E0416 23:54:30.296708 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:30.296771 kubelet[2815]: W0416 23:54:30.296727 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:30.296771 kubelet[2815]: E0416 23:54:30.296740 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
[identical FlexVolume probe error triple (driver-call.go:262, driver-call.go:149, plugins.go:697) repeated with successive timestamps through Apr 16 23:54:30.315]
Apr 16 23:54:30.315810 kubelet[2815]: E0416 23:54:30.315463 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:30.316405 kubelet[2815]: E0416 23:54:30.316379 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:30.316405 kubelet[2815]: W0416 23:54:30.316391 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:30.316405 kubelet[2815]: E0416 23:54:30.316398 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.196650 kubelet[2815]: E0416 23:54:31.196281 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:31.273191 kubelet[2815]: I0416 23:54:31.273150 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:54:31.308587 kubelet[2815]: E0416 23:54:31.308503 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.308587 kubelet[2815]: W0416 23:54:31.308534 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.308587 kubelet[2815]: E0416 23:54:31.308560 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.308918 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.309987 kubelet[2815]: W0416 23:54:31.308931 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.308945 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.309309 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.309987 kubelet[2815]: W0416 23:54:31.309367 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.309383 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.309769 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.309987 kubelet[2815]: W0416 23:54:31.309784 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.309987 kubelet[2815]: E0416 23:54:31.309799 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.310657 kubelet[2815]: E0416 23:54:31.310161 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.310657 kubelet[2815]: W0416 23:54:31.310179 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.310657 kubelet[2815]: E0416 23:54:31.310199 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.310657 kubelet[2815]: E0416 23:54:31.310608 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.310657 kubelet[2815]: W0416 23:54:31.310626 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.310657 kubelet[2815]: E0416 23:54:31.310647 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.311114 kubelet[2815]: E0416 23:54:31.311041 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.311114 kubelet[2815]: W0416 23:54:31.311055 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.311114 kubelet[2815]: E0416 23:54:31.311070 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.311595 kubelet[2815]: E0416 23:54:31.311534 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.311595 kubelet[2815]: W0416 23:54:31.311563 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.311595 kubelet[2815]: E0416 23:54:31.311586 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.312103 kubelet[2815]: E0416 23:54:31.312050 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.312103 kubelet[2815]: W0416 23:54:31.312071 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.312103 kubelet[2815]: E0416 23:54:31.312088 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.312501 kubelet[2815]: E0416 23:54:31.312456 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.312501 kubelet[2815]: W0416 23:54:31.312476 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.312501 kubelet[2815]: E0416 23:54:31.312492 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.312931 kubelet[2815]: E0416 23:54:31.312816 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.312931 kubelet[2815]: W0416 23:54:31.312830 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.312931 kubelet[2815]: E0416 23:54:31.312844 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.313346 kubelet[2815]: E0416 23:54:31.313271 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.313346 kubelet[2815]: W0416 23:54:31.313285 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.313346 kubelet[2815]: E0416 23:54:31.313301 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.313792 kubelet[2815]: E0416 23:54:31.313703 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.313792 kubelet[2815]: W0416 23:54:31.313717 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.313792 kubelet[2815]: E0416 23:54:31.313732 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.314179 kubelet[2815]: E0416 23:54:31.314123 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.314179 kubelet[2815]: W0416 23:54:31.314158 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.314179 kubelet[2815]: E0416 23:54:31.314177 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.314595 kubelet[2815]: E0416 23:54:31.314572 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.314595 kubelet[2815]: W0416 23:54:31.314592 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.314729 kubelet[2815]: E0416 23:54:31.314608 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.315216 kubelet[2815]: E0416 23:54:31.315182 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.315216 kubelet[2815]: W0416 23:54:31.315204 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.315355 kubelet[2815]: E0416 23:54:31.315219 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.315700 kubelet[2815]: E0416 23:54:31.315658 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.315700 kubelet[2815]: W0416 23:54:31.315681 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.315805 kubelet[2815]: E0416 23:54:31.315701 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.316222 kubelet[2815]: E0416 23:54:31.316190 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.316222 kubelet[2815]: W0416 23:54:31.316213 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.316356 kubelet[2815]: E0416 23:54:31.316228 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.316773 kubelet[2815]: E0416 23:54:31.316745 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.316773 kubelet[2815]: W0416 23:54:31.316765 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.316915 kubelet[2815]: E0416 23:54:31.316780 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.317274 kubelet[2815]: E0416 23:54:31.317222 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.317274 kubelet[2815]: W0416 23:54:31.317242 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.317274 kubelet[2815]: E0416 23:54:31.317256 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.317751 kubelet[2815]: E0416 23:54:31.317719 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.317751 kubelet[2815]: W0416 23:54:31.317739 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.317842 kubelet[2815]: E0416 23:54:31.317755 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.318271 kubelet[2815]: E0416 23:54:31.318237 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.318271 kubelet[2815]: W0416 23:54:31.318258 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.318412 kubelet[2815]: E0416 23:54:31.318294 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.318855 kubelet[2815]: E0416 23:54:31.318821 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.318855 kubelet[2815]: W0416 23:54:31.318840 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.318855 kubelet[2815]: E0416 23:54:31.318855 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.319396 kubelet[2815]: E0416 23:54:31.319306 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.319396 kubelet[2815]: W0416 23:54:31.319381 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.319396 kubelet[2815]: E0416 23:54:31.319396 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.320083 kubelet[2815]: E0416 23:54:31.320058 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.320083 kubelet[2815]: W0416 23:54:31.320080 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.320211 kubelet[2815]: E0416 23:54:31.320096 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.320516 kubelet[2815]: E0416 23:54:31.320479 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.320516 kubelet[2815]: W0416 23:54:31.320502 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.320652 kubelet[2815]: E0416 23:54:31.320516 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.320951 kubelet[2815]: E0416 23:54:31.320910 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.320951 kubelet[2815]: W0416 23:54:31.320936 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.320951 kubelet[2815]: E0416 23:54:31.320955 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.321430 kubelet[2815]: E0416 23:54:31.321408 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.321430 kubelet[2815]: W0416 23:54:31.321428 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.321621 kubelet[2815]: E0416 23:54:31.321444 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.322362 kubelet[2815]: E0416 23:54:31.321900 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.322362 kubelet[2815]: W0416 23:54:31.321922 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.322362 kubelet[2815]: E0416 23:54:31.321940 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.322969 kubelet[2815]: E0416 23:54:31.322928 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.322969 kubelet[2815]: W0416 23:54:31.322957 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.323080 kubelet[2815]: E0416 23:54:31.322981 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.323478 kubelet[2815]: E0416 23:54:31.323444 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.323478 kubelet[2815]: W0416 23:54:31.323468 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.323574 kubelet[2815]: E0416 23:54:31.323486 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.323981 kubelet[2815]: E0416 23:54:31.323946 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.323981 kubelet[2815]: W0416 23:54:31.323970 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.324078 kubelet[2815]: E0416 23:54:31.323988 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:54:31.325437 kubelet[2815]: E0416 23:54:31.325401 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:54:31.325437 kubelet[2815]: W0416 23:54:31.325426 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:54:31.325549 kubelet[2815]: E0416 23:54:31.325445 2815 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:54:31.579180 containerd[1628]: time="2026-04-16T23:54:31.579054987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:31.580058 containerd[1628]: time="2026-04-16T23:54:31.579977701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 16 23:54:31.580846 containerd[1628]: time="2026-04-16T23:54:31.580828665Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:31.582687 containerd[1628]: time="2026-04-16T23:54:31.582671623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:31.582964 containerd[1628]: time="2026-04-16T23:54:31.582944167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.822386947s" Apr 16 23:54:31.582995 containerd[1628]: time="2026-04-16T23:54:31.582965897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 23:54:31.586343 containerd[1628]: time="2026-04-16T23:54:31.586303267Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:54:31.595202 containerd[1628]: time="2026-04-16T23:54:31.595138999Z" level=info msg="Container 37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:31.601246 containerd[1628]: time="2026-04-16T23:54:31.601229000Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777\"" Apr 16 23:54:31.601641 containerd[1628]: time="2026-04-16T23:54:31.601581774Z" level=info msg="StartContainer for \"37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777\"" Apr 16 23:54:31.602662 containerd[1628]: time="2026-04-16T23:54:31.602636084Z" level=info msg="connecting to shim 37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777" address="unix:///run/containerd/s/b2516eacac79bff9c202a7c59ea65d905b6ed65eeda7e6e4e35f07081388235d" protocol=ttrpc version=3 Apr 16 23:54:31.619420 systemd[1]: Started cri-containerd-37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777.scope - libcontainer container 37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777. Apr 16 23:54:31.670209 containerd[1628]: time="2026-04-16T23:54:31.670177563Z" level=info msg="StartContainer for \"37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777\" returns successfully" Apr 16 23:54:31.678985 systemd[1]: cri-containerd-37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777.scope: Deactivated successfully. 
Apr 16 23:54:31.682098 containerd[1628]: time="2026-04-16T23:54:31.682011741Z" level=info msg="received container exit event container_id:\"37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777\" id:\"37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777\" pid:3500 exited_at:{seconds:1776383671 nanos:681734786}" Apr 16 23:54:31.698168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37b32e50218b1a829fd52f687eca507e56581cd7dcea1a6b0bd0671f942dd777-rootfs.mount: Deactivated successfully. Apr 16 23:54:32.281397 containerd[1628]: time="2026-04-16T23:54:32.281237283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 23:54:32.303560 kubelet[2815]: I0416 23:54:32.303474 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5488f574cb-6zvpb" podStartSLOduration=3.6569307699999998 podStartE2EDuration="6.30342023s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:27.113955733 +0000 UTC m=+17.006399844" lastFinishedPulling="2026-04-16 23:54:29.760445193 +0000 UTC m=+19.652889304" observedRunningTime="2026-04-16 23:54:30.29157844 +0000 UTC m=+20.184022581" watchObservedRunningTime="2026-04-16 23:54:32.30342023 +0000 UTC m=+22.195864381" Apr 16 23:54:33.195808 kubelet[2815]: E0416 23:54:33.195761 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:35.195840 kubelet[2815]: E0416 23:54:35.195732 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:37.195789 kubelet[2815]: E0416 23:54:37.195711 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:39.195894 kubelet[2815]: E0416 23:54:39.195739 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:41.195998 kubelet[2815]: E0416 23:54:41.195941 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:41.636109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112718496.mount: Deactivated successfully. 
Apr 16 23:54:41.662812 containerd[1628]: time="2026-04-16T23:54:41.662756970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:41.663668 containerd[1628]: time="2026-04-16T23:54:41.663631842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 23:54:41.664718 containerd[1628]: time="2026-04-16T23:54:41.664689382Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:41.666510 containerd[1628]: time="2026-04-16T23:54:41.666369266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:41.669262 containerd[1628]: time="2026-04-16T23:54:41.669232819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.386813975s" Apr 16 23:54:41.669305 containerd[1628]: time="2026-04-16T23:54:41.669265768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 23:54:41.674128 containerd[1628]: time="2026-04-16T23:54:41.674104032Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 23:54:41.682879 containerd[1628]: time="2026-04-16T23:54:41.682844927Z" level=info msg="Container 
801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:41.690033 containerd[1628]: time="2026-04-16T23:54:41.690003069Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1\"" Apr 16 23:54:41.691347 containerd[1628]: time="2026-04-16T23:54:41.690707252Z" level=info msg="StartContainer for \"801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1\"" Apr 16 23:54:41.691905 containerd[1628]: time="2026-04-16T23:54:41.691757062Z" level=info msg="connecting to shim 801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1" address="unix:///run/containerd/s/b2516eacac79bff9c202a7c59ea65d905b6ed65eeda7e6e4e35f07081388235d" protocol=ttrpc version=3 Apr 16 23:54:41.709416 systemd[1]: Started cri-containerd-801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1.scope - libcontainer container 801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1. Apr 16 23:54:41.763582 containerd[1628]: time="2026-04-16T23:54:41.763502983Z" level=info msg="StartContainer for \"801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1\" returns successfully" Apr 16 23:54:41.797554 systemd[1]: cri-containerd-801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1.scope: Deactivated successfully. 
Apr 16 23:54:41.800048 containerd[1628]: time="2026-04-16T23:54:41.799996433Z" level=info msg="received container exit event container_id:\"801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1\" id:\"801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1\" pid:3556 exited_at:{seconds:1776383681 nanos:799213651}" Apr 16 23:54:41.819270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-801b1830d5639606f84ad654dc5dde112b08fc44a4b0ecc2b16ad2a4c1fdcad1-rootfs.mount: Deactivated successfully. Apr 16 23:54:42.312945 containerd[1628]: time="2026-04-16T23:54:42.312873098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 23:54:43.195617 kubelet[2815]: E0416 23:54:43.195527 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:45.196292 kubelet[2815]: E0416 23:54:45.196202 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:47.196231 kubelet[2815]: E0416 23:54:47.196135 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tscd9" podUID="99b3436d-dcd4-48c2-849e-f8b1845df2b4" Apr 16 23:54:47.356427 containerd[1628]: time="2026-04-16T23:54:47.356371218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:47.357688 containerd[1628]: time="2026-04-16T23:54:47.357554231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 23:54:47.358599 containerd[1628]: time="2026-04-16T23:54:47.358576563Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:47.360419 containerd[1628]: time="2026-04-16T23:54:47.360388001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:47.360915 containerd[1628]: time="2026-04-16T23:54:47.360891258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.047617845s" Apr 16 23:54:47.360967 containerd[1628]: time="2026-04-16T23:54:47.360957968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 23:54:47.367448 containerd[1628]: time="2026-04-16T23:54:47.367295655Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 23:54:47.380015 containerd[1628]: time="2026-04-16T23:54:47.379925511Z" level=info msg="Container 687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:47.381896 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount653930720.mount: Deactivated successfully. Apr 16 23:54:47.389029 containerd[1628]: time="2026-04-16T23:54:47.388989691Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c\"" Apr 16 23:54:47.389429 containerd[1628]: time="2026-04-16T23:54:47.389387428Z" level=info msg="StartContainer for \"687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c\"" Apr 16 23:54:47.390263 containerd[1628]: time="2026-04-16T23:54:47.390239493Z" level=info msg="connecting to shim 687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c" address="unix:///run/containerd/s/b2516eacac79bff9c202a7c59ea65d905b6ed65eeda7e6e4e35f07081388235d" protocol=ttrpc version=3 Apr 16 23:54:47.410445 systemd[1]: Started cri-containerd-687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c.scope - libcontainer container 687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c. Apr 16 23:54:47.465387 containerd[1628]: time="2026-04-16T23:54:47.465239264Z" level=info msg="StartContainer for \"687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c\" returns successfully" Apr 16 23:54:47.873506 containerd[1628]: time="2026-04-16T23:54:47.873426979Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:54:47.875301 systemd[1]: cri-containerd-687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c.scope: Deactivated successfully. 
Apr 16 23:54:47.875723 systemd[1]: cri-containerd-687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c.scope: Consumed 381ms CPU time, 189.4M memory peak, 1.3M read from disk, 177M written to disk. Apr 16 23:54:47.876855 containerd[1628]: time="2026-04-16T23:54:47.876805896Z" level=info msg="received container exit event container_id:\"687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c\" id:\"687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c\" pid:3615 exited_at:{seconds:1776383687 nanos:876652827}" Apr 16 23:54:47.905083 kubelet[2815]: I0416 23:54:47.905019 2815 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 23:54:47.910687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687a3c7a6748e64e63824503653fa44e0ff10d43020e705c37c4bf86978b818c-rootfs.mount: Deactivated successfully. Apr 16 23:54:47.951247 systemd[1]: Created slice kubepods-besteffort-pod758506dc_7396_4241_8c8f_cb8d84b626aa.slice - libcontainer container kubepods-besteffort-pod758506dc_7396_4241_8c8f_cb8d84b626aa.slice. Apr 16 23:54:47.961208 systemd[1]: Created slice kubepods-besteffort-pode1045738_997e_49eb_b29a_a871469ada71.slice - libcontainer container kubepods-besteffort-pode1045738_997e_49eb_b29a_a871469ada71.slice. Apr 16 23:54:47.979567 systemd[1]: Created slice kubepods-besteffort-pod57e87f13_ad1b_425d_b4a8_26c1596655e3.slice - libcontainer container kubepods-besteffort-pod57e87f13_ad1b_425d_b4a8_26c1596655e3.slice. Apr 16 23:54:47.988657 systemd[1]: Created slice kubepods-burstable-podeedea204_8286_4b2e_b09c_075059e45144.slice - libcontainer container kubepods-burstable-podeedea204_8286_4b2e_b09c_075059e45144.slice. Apr 16 23:54:47.998123 systemd[1]: Created slice kubepods-besteffort-podf0b43771_567b_4f37_b976_f139d1221148.slice - libcontainer container kubepods-besteffort-podf0b43771_567b_4f37_b976_f139d1221148.slice. 
Apr 16 23:54:48.010855 systemd[1]: Created slice kubepods-besteffort-pode4f842a3_14cd_4a8b_9d17_87a0aab8d812.slice - libcontainer container kubepods-besteffort-pode4f842a3_14cd_4a8b_9d17_87a0aab8d812.slice. Apr 16 23:54:48.021688 systemd[1]: Created slice kubepods-burstable-pod9123780b_75f4_4ed3_8d85_17e9f594baa9.slice - libcontainer container kubepods-burstable-pod9123780b_75f4_4ed3_8d85_17e9f594baa9.slice. Apr 16 23:54:48.137452 kubelet[2815]: I0416 23:54:48.136787 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1045738-997e-49eb-b29a-a871469ada71-whisker-backend-key-pair\") pod \"whisker-7ffbff66d5-nff9v\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " pod="calico-system/whisker-7ffbff66d5-nff9v" Apr 16 23:54:48.137452 kubelet[2815]: I0416 23:54:48.136854 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8dt\" (UniqueName: \"kubernetes.io/projected/57e87f13-ad1b-425d-b4a8-26c1596655e3-kube-api-access-fw8dt\") pod \"calico-apiserver-7c678cb499-dnnwl\" (UID: \"57e87f13-ad1b-425d-b4a8-26c1596655e3\") " pod="calico-system/calico-apiserver-7c678cb499-dnnwl" Apr 16 23:54:48.137452 kubelet[2815]: I0416 23:54:48.136884 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0b43771-567b-4f37-b976-f139d1221148-calico-apiserver-certs\") pod \"calico-apiserver-7c678cb499-bdvmw\" (UID: \"f0b43771-567b-4f37-b976-f139d1221148\") " pod="calico-system/calico-apiserver-7c678cb499-bdvmw" Apr 16 23:54:48.137452 kubelet[2815]: I0416 23:54:48.136913 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtd6\" (UniqueName: \"kubernetes.io/projected/758506dc-7396-4241-8c8f-cb8d84b626aa-kube-api-access-9gtd6\") pod 
\"calico-kube-controllers-549c89df8f-4nr8q\" (UID: \"758506dc-7396-4241-8c8f-cb8d84b626aa\") " pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" Apr 16 23:54:48.137452 kubelet[2815]: I0416 23:54:48.136938 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7xt\" (UniqueName: \"kubernetes.io/projected/e1045738-997e-49eb-b29a-a871469ada71-kube-api-access-sn7xt\") pod \"whisker-7ffbff66d5-nff9v\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " pod="calico-system/whisker-7ffbff66d5-nff9v" Apr 16 23:54:48.137874 kubelet[2815]: I0416 23:54:48.136962 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2wh\" (UniqueName: \"kubernetes.io/projected/e4f842a3-14cd-4a8b-9d17-87a0aab8d812-kube-api-access-mn2wh\") pod \"goldmane-cccfbd5cf-6ddxr\" (UID: \"e4f842a3-14cd-4a8b-9d17-87a0aab8d812\") " pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.137874 kubelet[2815]: I0416 23:54:48.136985 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btg96\" (UniqueName: \"kubernetes.io/projected/9123780b-75f4-4ed3-8d85-17e9f594baa9-kube-api-access-btg96\") pod \"coredns-66bc5c9577-qpprd\" (UID: \"9123780b-75f4-4ed3-8d85-17e9f594baa9\") " pod="kube-system/coredns-66bc5c9577-qpprd" Apr 16 23:54:48.137874 kubelet[2815]: I0416 23:54:48.137006 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bbp\" (UniqueName: \"kubernetes.io/projected/f0b43771-567b-4f37-b976-f139d1221148-kube-api-access-t6bbp\") pod \"calico-apiserver-7c678cb499-bdvmw\" (UID: \"f0b43771-567b-4f37-b976-f139d1221148\") " pod="calico-system/calico-apiserver-7c678cb499-bdvmw" Apr 16 23:54:48.137874 kubelet[2815]: I0416 23:54:48.137030 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/9123780b-75f4-4ed3-8d85-17e9f594baa9-config-volume\") pod \"coredns-66bc5c9577-qpprd\" (UID: \"9123780b-75f4-4ed3-8d85-17e9f594baa9\") " pod="kube-system/coredns-66bc5c9577-qpprd" Apr 16 23:54:48.137874 kubelet[2815]: I0416 23:54:48.137055 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/758506dc-7396-4241-8c8f-cb8d84b626aa-tigera-ca-bundle\") pod \"calico-kube-controllers-549c89df8f-4nr8q\" (UID: \"758506dc-7396-4241-8c8f-cb8d84b626aa\") " pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" Apr 16 23:54:48.138078 kubelet[2815]: I0416 23:54:48.137077 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedea204-8286-4b2e-b09c-075059e45144-config-volume\") pod \"coredns-66bc5c9577-76hxx\" (UID: \"eedea204-8286-4b2e-b09c-075059e45144\") " pod="kube-system/coredns-66bc5c9577-76hxx" Apr 16 23:54:48.138078 kubelet[2815]: I0416 23:54:48.137103 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f842a3-14cd-4a8b-9d17-87a0aab8d812-config\") pod \"goldmane-cccfbd5cf-6ddxr\" (UID: \"e4f842a3-14cd-4a8b-9d17-87a0aab8d812\") " pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.138078 kubelet[2815]: I0416 23:54:48.137125 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4f842a3-14cd-4a8b-9d17-87a0aab8d812-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-6ddxr\" (UID: \"e4f842a3-14cd-4a8b-9d17-87a0aab8d812\") " pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.138078 kubelet[2815]: I0416 23:54:48.137152 2815 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e4f842a3-14cd-4a8b-9d17-87a0aab8d812-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-6ddxr\" (UID: \"e4f842a3-14cd-4a8b-9d17-87a0aab8d812\") " pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.138078 kubelet[2815]: I0416 23:54:48.137172 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-nginx-config\") pod \"whisker-7ffbff66d5-nff9v\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " pod="calico-system/whisker-7ffbff66d5-nff9v" Apr 16 23:54:48.138299 kubelet[2815]: I0416 23:54:48.137194 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-whisker-ca-bundle\") pod \"whisker-7ffbff66d5-nff9v\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " pod="calico-system/whisker-7ffbff66d5-nff9v" Apr 16 23:54:48.138299 kubelet[2815]: I0416 23:54:48.137215 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57e87f13-ad1b-425d-b4a8-26c1596655e3-calico-apiserver-certs\") pod \"calico-apiserver-7c678cb499-dnnwl\" (UID: \"57e87f13-ad1b-425d-b4a8-26c1596655e3\") " pod="calico-system/calico-apiserver-7c678cb499-dnnwl" Apr 16 23:54:48.138299 kubelet[2815]: I0416 23:54:48.137237 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2j7\" (UniqueName: \"kubernetes.io/projected/eedea204-8286-4b2e-b09c-075059e45144-kube-api-access-8l2j7\") pod \"coredns-66bc5c9577-76hxx\" (UID: \"eedea204-8286-4b2e-b09c-075059e45144\") " pod="kube-system/coredns-66bc5c9577-76hxx" Apr 16 23:54:48.296442 
containerd[1628]: time="2026-04-16T23:54:48.296410589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-76hxx,Uid:eedea204-8286-4b2e-b09c-075059e45144,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:48.306457 containerd[1628]: time="2026-04-16T23:54:48.306291748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-bdvmw,Uid:f0b43771-567b-4f37-b976-f139d1221148,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:48.318682 containerd[1628]: time="2026-04-16T23:54:48.318496381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6ddxr,Uid:e4f842a3-14cd-4a8b-9d17-87a0aab8d812,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:48.326655 containerd[1628]: time="2026-04-16T23:54:48.326616531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpprd,Uid:9123780b-75f4-4ed3-8d85-17e9f594baa9,Namespace:kube-system,Attempt:0,}" Apr 16 23:54:48.352184 containerd[1628]: time="2026-04-16T23:54:48.352149310Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 23:54:48.364981 containerd[1628]: time="2026-04-16T23:54:48.364711872Z" level=info msg="Container 3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:48.390344 containerd[1628]: time="2026-04-16T23:54:48.389900024Z" level=info msg="CreateContainer within sandbox \"55775433a9eb8182510667ddad71165e9a83cf6124d77dd20212a06c251cc10f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9\"" Apr 16 23:54:48.393841 containerd[1628]: time="2026-04-16T23:54:48.393803360Z" level=info msg="StartContainer for \"3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9\"" Apr 16 23:54:48.396269 containerd[1628]: 
time="2026-04-16T23:54:48.396236525Z" level=info msg="connecting to shim 3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9" address="unix:///run/containerd/s/b2516eacac79bff9c202a7c59ea65d905b6ed65eeda7e6e4e35f07081388235d" protocol=ttrpc version=3 Apr 16 23:54:48.429982 containerd[1628]: time="2026-04-16T23:54:48.429945804Z" level=error msg="Failed to destroy network for sandbox \"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.433200 systemd[1]: run-netns-cni\x2d17fa395f\x2d83bf\x2d39d9\x2d4522\x2df69713e41cf7.mount: Deactivated successfully. Apr 16 23:54:48.433556 containerd[1628]: time="2026-04-16T23:54:48.433406241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpprd,Uid:9123780b-75f4-4ed3-8d85-17e9f594baa9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.434352 kubelet[2815]: E0416 23:54:48.433597 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.434352 kubelet[2815]: E0416 23:54:48.433649 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qpprd" Apr 16 23:54:48.434352 kubelet[2815]: E0416 23:54:48.433665 2815 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qpprd" Apr 16 23:54:48.434618 kubelet[2815]: E0416 23:54:48.433704 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qpprd_kube-system(9123780b-75f4-4ed3-8d85-17e9f594baa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qpprd_kube-system(9123780b-75f4-4ed3-8d85-17e9f594baa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bb2eefb3ba27f562e139ae4828b51e6af574fb1ff619059477fb501c434140d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qpprd" podUID="9123780b-75f4-4ed3-8d85-17e9f594baa9" Apr 16 23:54:48.440531 systemd[1]: Started cri-containerd-3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9.scope - libcontainer container 3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9. 
Apr 16 23:54:48.453687 containerd[1628]: time="2026-04-16T23:54:48.453560456Z" level=error msg="Failed to destroy network for sandbox \"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.456233 systemd[1]: run-netns-cni\x2d44ef0af6\x2d4c96\x2d9423\x2d9027\x2d41398afca27f.mount: Deactivated successfully. Apr 16 23:54:48.459556 containerd[1628]: time="2026-04-16T23:54:48.457490751Z" level=error msg="Failed to destroy network for sandbox \"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.459556 containerd[1628]: time="2026-04-16T23:54:48.458386965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-bdvmw,Uid:f0b43771-567b-4f37-b976-f139d1221148,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.461339 kubelet[2815]: E0416 23:54:48.459758 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.461339 kubelet[2815]: E0416 
23:54:48.459805 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c678cb499-bdvmw" Apr 16 23:54:48.461339 kubelet[2815]: E0416 23:54:48.459819 2815 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c678cb499-bdvmw" Apr 16 23:54:48.460919 systemd[1]: run-netns-cni\x2d3f24e5d6\x2de680\x2d8169\x2dab69\x2d8a48f1f47b70.mount: Deactivated successfully. 
Apr 16 23:54:48.461513 kubelet[2815]: E0416 23:54:48.459862 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c678cb499-bdvmw_calico-system(f0b43771-567b-4f37-b976-f139d1221148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c678cb499-bdvmw_calico-system(f0b43771-567b-4f37-b976-f139d1221148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16ad8526d82d6ba43af00aff3500ce6120665146cf3b5df3ceb96fde512a6cf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c678cb499-bdvmw" podUID="f0b43771-567b-4f37-b976-f139d1221148" Apr 16 23:54:48.462526 containerd[1628]: time="2026-04-16T23:54:48.462463370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-76hxx,Uid:eedea204-8286-4b2e-b09c-075059e45144,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.462608 containerd[1628]: time="2026-04-16T23:54:48.462595479Z" level=error msg="Failed to destroy network for sandbox \"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.462844 kubelet[2815]: E0416 23:54:48.462814 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.463245 kubelet[2815]: E0416 23:54:48.463212 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-76hxx" Apr 16 23:54:48.463245 kubelet[2815]: E0416 23:54:48.463231 2815 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-76hxx" Apr 16 23:54:48.463384 kubelet[2815]: E0416 23:54:48.463257 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-76hxx_kube-system(eedea204-8286-4b2e-b09c-075059e45144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-76hxx_kube-system(eedea204-8286-4b2e-b09c-075059e45144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b22bb5ebf92376ba56e66bea46a2a630fb15256961a12965daed1693b4de972d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-76hxx" 
podUID="eedea204-8286-4b2e-b09c-075059e45144" Apr 16 23:54:48.464049 containerd[1628]: time="2026-04-16T23:54:48.463992240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6ddxr,Uid:e4f842a3-14cd-4a8b-9d17-87a0aab8d812,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.464205 kubelet[2815]: E0416 23:54:48.464169 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.464280 kubelet[2815]: E0416 23:54:48.464266 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.464424 kubelet[2815]: E0416 23:54:48.464368 2815 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6ddxr" Apr 16 23:54:48.464515 kubelet[2815]: E0416 23:54:48.464502 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-6ddxr_calico-system(e4f842a3-14cd-4a8b-9d17-87a0aab8d812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-6ddxr_calico-system(e4f842a3-14cd-4a8b-9d17-87a0aab8d812)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0915d35ee851df507c92b65db532fcfc60c9f797adbd136907b034755f9fd308\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-6ddxr" podUID="e4f842a3-14cd-4a8b-9d17-87a0aab8d812" Apr 16 23:54:48.516955 containerd[1628]: time="2026-04-16T23:54:48.516918698Z" level=info msg="StartContainer for \"3f55f67a081442dc010cde4a82d52c253eddafa9678f088c77b19c287f04d6f9\" returns successfully" Apr 16 23:54:48.557632 containerd[1628]: time="2026-04-16T23:54:48.557546744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549c89df8f-4nr8q,Uid:758506dc-7396-4241-8c8f-cb8d84b626aa,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:48.566797 containerd[1628]: time="2026-04-16T23:54:48.566709817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ffbff66d5-nff9v,Uid:e1045738-997e-49eb-b29a-a871469ada71,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:48.587685 containerd[1628]: time="2026-04-16T23:54:48.587527916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-dnnwl,Uid:57e87f13-ad1b-425d-b4a8-26c1596655e3,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.683 [INFO][3844] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.688 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" iface="eth0" netns="/var/run/netns/cni-63a49802-362d-c332-756c-9a1c43685b18" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.688 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" iface="eth0" netns="/var/run/netns/cni-63a49802-362d-c332-756c-9a1c43685b18" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.688 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" iface="eth0" netns="/var/run/netns/cni-63a49802-362d-c332-756c-9a1c43685b18" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.688 [INFO][3844] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.688 [INFO][3844] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.720 [INFO][3874] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" HandleID="k8s-pod-network.39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.720 [INFO][3874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:54:48.774749 containerd[1628]: 2026-04-16 23:54:48.762 [INFO][3874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:54:48.775142 containerd[1628]: 2026-04-16 23:54:48.767 [WARNING][3874] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" HandleID="k8s-pod-network.39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:48.775142 containerd[1628]: 2026-04-16 23:54:48.767 [INFO][3874] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" HandleID="k8s-pod-network.39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:48.775142 containerd[1628]: 2026-04-16 23:54:48.768 [INFO][3874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:54:48.775142 containerd[1628]: 2026-04-16 23:54:48.770 [INFO][3844] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378" Apr 16 23:54:48.776865 systemd-networkd[1508]: calid26326f6b08: Link UP Apr 16 23:54:48.778663 containerd[1628]: time="2026-04-16T23:54:48.777927734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549c89df8f-4nr8q,Uid:758506dc-7396-4241-8c8f-cb8d84b626aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.778553 systemd-networkd[1508]: calid26326f6b08: Gained carrier Apr 16 23:54:48.778843 kubelet[2815]: E0416 23:54:48.778818 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.778877 kubelet[2815]: E0416 23:54:48.778861 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" Apr 16 23:54:48.778899 kubelet[2815]: E0416 23:54:48.778877 2815 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" Apr 16 23:54:48.778934 kubelet[2815]: E0416 23:54:48.778919 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-549c89df8f-4nr8q_calico-system(758506dc-7396-4241-8c8f-cb8d84b626aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-549c89df8f-4nr8q_calico-system(758506dc-7396-4241-8c8f-cb8d84b626aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39cc0ecad0c3963bab10d43fa063159d0f87ddd6242866133bba429ad8be9378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" podUID="758506dc-7396-4241-8c8f-cb8d84b626aa" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.682 [INFO][3852] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.682 [INFO][3852] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" iface="eth0" netns="/var/run/netns/cni-af21adf6-76f7-b7d0-4ae8-bf1eb1f8950c" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.683 [INFO][3852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" iface="eth0" netns="/var/run/netns/cni-af21adf6-76f7-b7d0-4ae8-bf1eb1f8950c" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.683 [INFO][3852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" iface="eth0" netns="/var/run/netns/cni-af21adf6-76f7-b7d0-4ae8-bf1eb1f8950c" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.683 [INFO][3852] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.683 [INFO][3852] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.727 [INFO][3868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" HandleID="k8s-pod-network.91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--7ffbff66d5--nff9v-eth0" Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.727 [INFO][3868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:48.790933 containerd[1628]: 2026-04-16 23:54:48.768 [INFO][3868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:54:48.791195 containerd[1628]: 2026-04-16 23:54:48.777 [WARNING][3868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" HandleID="k8s-pod-network.91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--7ffbff66d5--nff9v-eth0" Apr 16 23:54:48.791195 containerd[1628]: 2026-04-16 23:54:48.777 [INFO][3868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" HandleID="k8s-pod-network.91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--7ffbff66d5--nff9v-eth0" Apr 16 23:54:48.791195 containerd[1628]: 2026-04-16 23:54:48.780 [INFO][3868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:54:48.791195 containerd[1628]: 2026-04-16 23:54:48.784 [INFO][3852] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004" Apr 16 23:54:48.792752 containerd[1628]: time="2026-04-16T23:54:48.792641312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ffbff66d5-nff9v,Uid:e1045738-997e-49eb-b29a-a871469ada71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:54:48.793499 kubelet[2815]: E0416 23:54:48.793449 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 16 23:54:48.793551 kubelet[2815]: E0416 23:54:48.793508 2815 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91df9b4db4c6e8d44a61f33cf8ca6f590dc17d8012151b0b30c76cd9f59e9004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7ffbff66d5-nff9v" Apr 16 23:54:48.793930 containerd[1628]: 2026-04-16 23:54:48.654 [ERROR][3823] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:54:48.793930 containerd[1628]: 2026-04-16 23:54:48.674 [INFO][3823] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0 calico-apiserver-7c678cb499- calico-system 57e87f13-ad1b-425d-b4a8-26c1596655e3 826 0 2026-04-16 23:54:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c678cb499 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 calico-apiserver-7c678cb499-dnnwl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid26326f6b08 [] [] }} ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-" Apr 16 23:54:48.793930 containerd[1628]: 2026-04-16 23:54:48.674 [INFO][3823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.793930 containerd[1628]: 2026-04-16 23:54:48.709 [INFO][3866] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" HandleID="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.718 [INFO][3866] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" HandleID="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"calico-apiserver-7c678cb499-dnnwl", "timestamp":"2026-04-16 23:54:48.709123205 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00043af20)} Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.718 [INFO][3866] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.718 [INFO][3866] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.718 [INFO][3866] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.723 [INFO][3866] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.727 [INFO][3866] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.734 [INFO][3866] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.736 [INFO][3866] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794048 containerd[1628]: 2026-04-16 23:54:48.738 [INFO][3866] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.738 [INFO][3866] ipam/ipam.go 588: Found unclaimed block in 2.115797ms host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.738 [INFO][3866] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.742 [INFO][3866] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.742 [INFO][3866] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 
23:54:48.743 [INFO][3866] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.745 [INFO][3866] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.747 [INFO][3866] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.747 [INFO][3866] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.751 [INFO][3866] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.751 [INFO][3866] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.755 [INFO][3866] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794192 containerd[1628]: 2026-04-16 23:54:48.755 [INFO][3866] ipam/ipam.go 623: Block '192.168.105.128/26' has 64 free ips which is more than 1 ips required. 
host="ci-4459-2-4-n-84256b4514" subnet=192.168.105.128/26 Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.755 [INFO][3866] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.756 [INFO][3866] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1 Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.758 [INFO][3866] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.762 [INFO][3866] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.105.128/26] block=192.168.105.128/26 handle="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.762 [INFO][3866] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.128/26] handle="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.762 [INFO][3866] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:54:48.794396 containerd[1628]: 2026-04-16 23:54:48.762 [INFO][3866] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.128/26] IPv6=[] ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" HandleID="k8s-pod-network.29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.794514 containerd[1628]: 2026-04-16 23:54:48.765 [INFO][3823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0", GenerateName:"calico-apiserver-7c678cb499-", Namespace:"calico-system", SelfLink:"", UID:"57e87f13-ad1b-425d-b4a8-26c1596655e3", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c678cb499", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"calico-apiserver-7c678cb499-dnnwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.105.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid26326f6b08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:48.794555 containerd[1628]: 2026-04-16 23:54:48.765 [INFO][3823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.128/32] ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.794555 containerd[1628]: 2026-04-16 23:54:48.765 [INFO][3823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid26326f6b08 ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.794555 containerd[1628]: 2026-04-16 23:54:48.779 [INFO][3823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.794604 containerd[1628]: 2026-04-16 23:54:48.780 [INFO][3823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0", GenerateName:"calico-apiserver-7c678cb499-", Namespace:"calico-system", SelfLink:"", UID:"57e87f13-ad1b-425d-b4a8-26c1596655e3", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c678cb499", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1", Pod:"calico-apiserver-7c678cb499-dnnwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid26326f6b08", MAC:"ee:6f:4d:c0:4f:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:48.794642 containerd[1628]: 2026-04-16 23:54:48.788 [INFO][3823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-dnnwl" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--dnnwl-eth0" Apr 16 23:54:48.826172 containerd[1628]: time="2026-04-16T23:54:48.825599626Z" level=info 
msg="connecting to shim 29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1" address="unix:///run/containerd/s/2972e8c82b48efdfc6e07630018bc7c46f35f3978fe851357b7d323ad60bba2d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:48.849435 systemd[1]: Started cri-containerd-29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1.scope - libcontainer container 29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1. Apr 16 23:54:48.895665 containerd[1628]: time="2026-04-16T23:54:48.895612927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-dnnwl,Uid:57e87f13-ad1b-425d-b4a8-26c1596655e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1\"" Apr 16 23:54:48.897108 containerd[1628]: time="2026-04-16T23:54:48.897087438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:54:49.207084 systemd[1]: Created slice kubepods-besteffort-pod99b3436d_dcd4_48c2_849e_f8b1845df2b4.slice - libcontainer container kubepods-besteffort-pod99b3436d_dcd4_48c2_849e_f8b1845df2b4.slice. 
Apr 16 23:54:49.217104 containerd[1628]: time="2026-04-16T23:54:49.216782695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tscd9,Uid:99b3436d-dcd4-48c2-849e-f8b1845df2b4,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:49.345711 systemd-networkd[1508]: cali8d257dbe716: Link UP Apr 16 23:54:49.346771 systemd-networkd[1508]: cali8d257dbe716: Gained carrier Apr 16 23:54:49.355638 containerd[1628]: time="2026-04-16T23:54:49.355426177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549c89df8f-4nr8q,Uid:758506dc-7396-4241-8c8f-cb8d84b626aa,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:49.365768 containerd[1628]: 2026-04-16 23:54:49.256 [ERROR][3959] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:54:49.365768 containerd[1628]: 2026-04-16 23:54:49.273 [INFO][3959] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0 csi-node-driver- calico-system 99b3436d-dcd4-48c2-849e-f8b1845df2b4 682 0 2026-04-16 23:54:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 csi-node-driver-tscd9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d257dbe716 [] [] }} ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-" Apr 16 23:54:49.365768 containerd[1628]: 2026-04-16 23:54:49.273 [INFO][3959] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.365768 containerd[1628]: 2026-04-16 23:54:49.305 [INFO][3972] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" HandleID="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Workload="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.311 [INFO][3972] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" HandleID="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Workload="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"csi-node-driver-tscd9", "timestamp":"2026-04-16 23:54:49.305069864 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000686000)} Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.311 [INFO][3972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.311 [INFO][3972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.311 [INFO][3972] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.313 [INFO][3972] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.316 [INFO][3972] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.320 [INFO][3972] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.321 [INFO][3972] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.366963 containerd[1628]: 2026-04-16 23:54:49.322 [INFO][3972] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.322 [INFO][3972] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.324 [INFO][3972] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.327 [INFO][3972] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.334 [INFO][3972] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.334 [INFO][3972] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.334 [INFO][3972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:54:49.367402 containerd[1628]: 2026-04-16 23:54:49.334 [INFO][3972] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" HandleID="k8s-pod-network.a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Workload="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.367505 containerd[1628]: 2026-04-16 23:54:49.338 [INFO][3959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b3436d-dcd4-48c2-849e-f8b1845df2b4", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"csi-node-driver-tscd9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d257dbe716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:49.367833 containerd[1628]: 2026-04-16 23:54:49.338 [INFO][3959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.130/32] ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.367833 containerd[1628]: 2026-04-16 23:54:49.338 [INFO][3959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d257dbe716 ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.367833 containerd[1628]: 2026-04-16 23:54:49.347 [INFO][3959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.367889 
containerd[1628]: 2026-04-16 23:54:49.348 [INFO][3959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b3436d-dcd4-48c2-849e-f8b1845df2b4", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f", Pod:"csi-node-driver-tscd9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d257dbe716", MAC:"ae:d5:bb:23:92:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:49.368041 containerd[1628]: 
2026-04-16 23:54:49.362 [INFO][3959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" Namespace="calico-system" Pod="csi-node-driver-tscd9" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-csi--node--driver--tscd9-eth0" Apr 16 23:54:49.378454 kubelet[2815]: I0416 23:54:49.378414 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5crjp" podStartSLOduration=3.185143957 podStartE2EDuration="23.378399171s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:27.168169391 +0000 UTC m=+17.060613502" lastFinishedPulling="2026-04-16 23:54:47.361424605 +0000 UTC m=+37.253868716" observedRunningTime="2026-04-16 23:54:49.363364061 +0000 UTC m=+39.255808172" watchObservedRunningTime="2026-04-16 23:54:49.378399171 +0000 UTC m=+39.270843292" Apr 16 23:54:49.383588 systemd[1]: run-netns-cni\x2d20f2a112\x2d3c1a\x2d0e07\x2dc31e\x2d11d1803ddb9c.mount: Deactivated successfully. Apr 16 23:54:49.400336 containerd[1628]: time="2026-04-16T23:54:49.399419718Z" level=info msg="connecting to shim a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f" address="unix:///run/containerd/s/b0bdc8fedd94275b97c4003682012132b710c762837f9aebc4c35dfad0cbbc48" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:49.424561 systemd[1]: Started cri-containerd-a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f.scope - libcontainer container a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f. 
Apr 16 23:54:49.454366 containerd[1628]: time="2026-04-16T23:54:49.454247454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tscd9,Uid:99b3436d-dcd4-48c2-849e-f8b1845df2b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f\"" Apr 16 23:54:49.470866 systemd-networkd[1508]: calif18e4c5f280: Link UP Apr 16 23:54:49.471089 systemd-networkd[1508]: calif18e4c5f280: Gained carrier Apr 16 23:54:49.482695 containerd[1628]: 2026-04-16 23:54:49.398 [ERROR][3982] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:54:49.482695 containerd[1628]: 2026-04-16 23:54:49.409 [INFO][3982] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0 calico-kube-controllers-549c89df8f- calico-system 758506dc-7396-4241-8c8f-cb8d84b626aa 853 0 2026-04-16 23:54:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:549c89df8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 calico-kube-controllers-549c89df8f-4nr8q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif18e4c5f280 [] [] }} ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-" Apr 16 23:54:49.482695 containerd[1628]: 2026-04-16 23:54:49.409 [INFO][3982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.482695 containerd[1628]: 2026-04-16 23:54:49.436 [INFO][4023] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" HandleID="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.444 [INFO][4023] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" HandleID="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277400), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"calico-kube-controllers-549c89df8f-4nr8q", "timestamp":"2026-04-16 23:54:49.436036001 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e51e0)} Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.445 [INFO][4023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.445 [INFO][4023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.445 [INFO][4023] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.447 [INFO][4023] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.451 [INFO][4023] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.455 [INFO][4023] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.456 [INFO][4023] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.482876 containerd[1628]: 2026-04-16 23:54:49.459 [INFO][4023] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.459 [INFO][4023] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.460 [INFO][4023] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227 Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.463 [INFO][4023] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.466 [INFO][4023] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 handle="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.466 [INFO][4023] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.466 [INFO][4023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:54:49.483018 containerd[1628]: 2026-04-16 23:54:49.466 [INFO][4023] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" HandleID="k8s-pod-network.f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.483119 containerd[1628]: 2026-04-16 23:54:49.468 [INFO][3982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0", GenerateName:"calico-kube-controllers-549c89df8f-", Namespace:"calico-system", SelfLink:"", UID:"758506dc-7396-4241-8c8f-cb8d84b626aa", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549c89df8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"calico-kube-controllers-549c89df8f-4nr8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif18e4c5f280", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:49.483160 containerd[1628]: 2026-04-16 23:54:49.468 [INFO][3982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.131/32] ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.483160 containerd[1628]: 2026-04-16 23:54:49.468 [INFO][3982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif18e4c5f280 ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.483160 containerd[1628]: 2026-04-16 23:54:49.471 [INFO][3982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.483239 containerd[1628]: 2026-04-16 23:54:49.472 [INFO][3982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0", GenerateName:"calico-kube-controllers-549c89df8f-", Namespace:"calico-system", SelfLink:"", UID:"758506dc-7396-4241-8c8f-cb8d84b626aa", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549c89df8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227", Pod:"calico-kube-controllers-549c89df8f-4nr8q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif18e4c5f280", MAC:"16:ec:68:de:97:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:49.483284 containerd[1628]: 2026-04-16 23:54:49.480 [INFO][3982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" Namespace="calico-system" Pod="calico-kube-controllers-549c89df8f-4nr8q" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--kube--controllers--549c89df8f--4nr8q-eth0" Apr 16 23:54:49.502699 containerd[1628]: time="2026-04-16T23:54:49.502671119Z" level=info msg="connecting to shim f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227" address="unix:///run/containerd/s/fe4686ee6c45794f8d8e76bb1442f1cb13c84eaeeda227d10d8118aa25e2c6d1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:49.521414 systemd[1]: Started cri-containerd-f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227.scope - libcontainer container f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227. 
Apr 16 23:54:49.549250 kubelet[2815]: I0416 23:54:49.549221 2815 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7xt\" (UniqueName: \"kubernetes.io/projected/e1045738-997e-49eb-b29a-a871469ada71-kube-api-access-sn7xt\") pod \"e1045738-997e-49eb-b29a-a871469ada71\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " Apr 16 23:54:49.549250 kubelet[2815]: I0416 23:54:49.549248 2815 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-nginx-config\") pod \"e1045738-997e-49eb-b29a-a871469ada71\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " Apr 16 23:54:49.549579 kubelet[2815]: I0416 23:54:49.549261 2815 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-whisker-ca-bundle\") pod \"e1045738-997e-49eb-b29a-a871469ada71\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " Apr 16 23:54:49.549579 kubelet[2815]: I0416 23:54:49.549279 2815 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1045738-997e-49eb-b29a-a871469ada71-whisker-backend-key-pair\") pod \"e1045738-997e-49eb-b29a-a871469ada71\" (UID: \"e1045738-997e-49eb-b29a-a871469ada71\") " Apr 16 23:54:49.551788 kubelet[2815]: I0416 23:54:49.551765 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e1045738-997e-49eb-b29a-a871469ada71" (UID: "e1045738-997e-49eb-b29a-a871469ada71"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:54:49.553511 kubelet[2815]: I0416 23:54:49.553195 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1045738-997e-49eb-b29a-a871469ada71" (UID: "e1045738-997e-49eb-b29a-a871469ada71"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:54:49.555860 kubelet[2815]: I0416 23:54:49.555746 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1045738-997e-49eb-b29a-a871469ada71-kube-api-access-sn7xt" (OuterVolumeSpecName: "kube-api-access-sn7xt") pod "e1045738-997e-49eb-b29a-a871469ada71" (UID: "e1045738-997e-49eb-b29a-a871469ada71"). InnerVolumeSpecName "kube-api-access-sn7xt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:54:49.556373 kubelet[2815]: I0416 23:54:49.556352 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1045738-997e-49eb-b29a-a871469ada71-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1045738-997e-49eb-b29a-a871469ada71" (UID: "e1045738-997e-49eb-b29a-a871469ada71"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:54:49.561963 containerd[1628]: time="2026-04-16T23:54:49.561898829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549c89df8f-4nr8q,Uid:758506dc-7396-4241-8c8f-cb8d84b626aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227\"" Apr 16 23:54:49.650710 kubelet[2815]: I0416 23:54:49.650655 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-whisker-ca-bundle\") on node \"ci-4459-2-4-n-84256b4514\" DevicePath \"\"" Apr 16 23:54:49.650710 kubelet[2815]: I0416 23:54:49.650698 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1045738-997e-49eb-b29a-a871469ada71-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-84256b4514\" DevicePath \"\"" Apr 16 23:54:49.650710 kubelet[2815]: I0416 23:54:49.650714 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sn7xt\" (UniqueName: \"kubernetes.io/projected/e1045738-997e-49eb-b29a-a871469ada71-kube-api-access-sn7xt\") on node \"ci-4459-2-4-n-84256b4514\" DevicePath \"\"" Apr 16 23:54:49.650710 kubelet[2815]: I0416 23:54:49.650729 2815 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1045738-997e-49eb-b29a-a871469ada71-nginx-config\") on node \"ci-4459-2-4-n-84256b4514\" DevicePath \"\"" Apr 16 23:54:49.937535 systemd-networkd[1508]: calid26326f6b08: Gained IPv6LL Apr 16 23:54:50.207787 systemd[1]: Removed slice kubepods-besteffort-pode1045738_997e_49eb_b29a_a871469ada71.slice - libcontainer container kubepods-besteffort-pode1045738_997e_49eb_b29a_a871469ada71.slice. 
Apr 16 23:54:50.209093 kubelet[2815]: I0416 23:54:50.209070 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:54:50.359439 kubelet[2815]: I0416 23:54:50.359294 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:54:50.383557 systemd[1]: var-lib-kubelet-pods-e1045738\x2d997e\x2d49eb\x2db29a\x2da871469ada71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn7xt.mount: Deactivated successfully. Apr 16 23:54:50.383650 systemd[1]: var-lib-kubelet-pods-e1045738\x2d997e\x2d49eb\x2db29a\x2da871469ada71-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:54:50.438368 systemd[1]: Created slice kubepods-besteffort-podd563b35d_8fb5_40f5_af05_5241d312e622.slice - libcontainer container kubepods-besteffort-podd563b35d_8fb5_40f5_af05_5241d312e622.slice. Apr 16 23:54:50.554820 kubelet[2815]: I0416 23:54:50.554725 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d563b35d-8fb5-40f5-af05-5241d312e622-nginx-config\") pod \"whisker-d86df56f4-lfgw7\" (UID: \"d563b35d-8fb5-40f5-af05-5241d312e622\") " pod="calico-system/whisker-d86df56f4-lfgw7" Apr 16 23:54:50.554820 kubelet[2815]: I0416 23:54:50.554779 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d563b35d-8fb5-40f5-af05-5241d312e622-whisker-ca-bundle\") pod \"whisker-d86df56f4-lfgw7\" (UID: \"d563b35d-8fb5-40f5-af05-5241d312e622\") " pod="calico-system/whisker-d86df56f4-lfgw7" Apr 16 23:54:50.555846 kubelet[2815]: I0416 23:54:50.554876 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d563b35d-8fb5-40f5-af05-5241d312e622-whisker-backend-key-pair\") pod 
\"whisker-d86df56f4-lfgw7\" (UID: \"d563b35d-8fb5-40f5-af05-5241d312e622\") " pod="calico-system/whisker-d86df56f4-lfgw7" Apr 16 23:54:50.555846 kubelet[2815]: I0416 23:54:50.555098 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblzd\" (UniqueName: \"kubernetes.io/projected/d563b35d-8fb5-40f5-af05-5241d312e622-kube-api-access-bblzd\") pod \"whisker-d86df56f4-lfgw7\" (UID: \"d563b35d-8fb5-40f5-af05-5241d312e622\") " pod="calico-system/whisker-d86df56f4-lfgw7" Apr 16 23:54:50.706484 systemd-networkd[1508]: calif18e4c5f280: Gained IPv6LL Apr 16 23:54:50.747857 containerd[1628]: time="2026-04-16T23:54:50.747637032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d86df56f4-lfgw7,Uid:d563b35d-8fb5-40f5-af05-5241d312e622,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:50.769477 systemd-networkd[1508]: cali8d257dbe716: Gained IPv6LL Apr 16 23:54:50.875485 systemd-networkd[1508]: cali96709c5a77f: Link UP Apr 16 23:54:50.876433 systemd-networkd[1508]: cali96709c5a77f: Gained carrier Apr 16 23:54:50.887337 containerd[1628]: 2026-04-16 23:54:50.790 [ERROR][4250] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:54:50.887337 containerd[1628]: 2026-04-16 23:54:50.809 [INFO][4250] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0 whisker-d86df56f4- calico-system d563b35d-8fb5-40f5-af05-5241d312e622 902 0 2026-04-16 23:54:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d86df56f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 whisker-d86df56f4-lfgw7 eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] cali96709c5a77f [] [] }} ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-" Apr 16 23:54:50.887337 containerd[1628]: 2026-04-16 23:54:50.809 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887337 containerd[1628]: 2026-04-16 23:54:50.838 [INFO][4263] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" HandleID="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.845 [INFO][4263] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" HandleID="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"whisker-d86df56f4-lfgw7", "timestamp":"2026-04-16 23:54:50.838254918 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001686e0)} Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 
23:54:50.845 [INFO][4263] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.845 [INFO][4263] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.845 [INFO][4263] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.847 [INFO][4263] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.851 [INFO][4263] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.855 [INFO][4263] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.857 [INFO][4263] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887510 containerd[1628]: 2026-04-16 23:54:50.859 [INFO][4263] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.859 [INFO][4263] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.860 [INFO][4263] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8 Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.863 [INFO][4263] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.868 [INFO][4263] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 handle="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.868 [INFO][4263] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.868 [INFO][4263] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:54:50.887672 containerd[1628]: 2026-04-16 23:54:50.868 [INFO][4263] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" HandleID="k8s-pod-network.edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Workload="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887837 containerd[1628]: 2026-04-16 23:54:50.870 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0", GenerateName:"whisker-d86df56f4-", Namespace:"calico-system", SelfLink:"", UID:"d563b35d-8fb5-40f5-af05-5241d312e622", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, 
time.April, 16, 23, 54, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d86df56f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"whisker-d86df56f4-lfgw7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96709c5a77f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:50.887837 containerd[1628]: 2026-04-16 23:54:50.870 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.132/32] ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887916 containerd[1628]: 2026-04-16 23:54:50.870 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96709c5a77f ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887916 containerd[1628]: 2026-04-16 23:54:50.872 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" 
Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.887963 containerd[1628]: 2026-04-16 23:54:50.872 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0", GenerateName:"whisker-d86df56f4-", Namespace:"calico-system", SelfLink:"", UID:"d563b35d-8fb5-40f5-af05-5241d312e622", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d86df56f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8", Pod:"whisker-d86df56f4-lfgw7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96709c5a77f", MAC:"ce:43:80:16:da:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:50.888019 containerd[1628]: 2026-04-16 23:54:50.884 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" Namespace="calico-system" Pod="whisker-d86df56f4-lfgw7" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-whisker--d86df56f4--lfgw7-eth0" Apr 16 23:54:50.908859 containerd[1628]: time="2026-04-16T23:54:50.908825487Z" level=info msg="connecting to shim edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8" address="unix:///run/containerd/s/ce5ff75475e5babce268cd222eab4c722369c48fbd1280bb005655cbd661e170" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:50.932426 systemd[1]: Started cri-containerd-edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8.scope - libcontainer container edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8. Apr 16 23:54:50.987913 containerd[1628]: time="2026-04-16T23:54:50.987872868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d86df56f4-lfgw7,Uid:d563b35d-8fb5-40f5-af05-5241d312e622,Namespace:calico-system,Attempt:0,} returns sandbox id \"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8\"" Apr 16 23:54:51.733431 systemd-networkd[1508]: vxlan.calico: Link UP Apr 16 23:54:51.733448 systemd-networkd[1508]: vxlan.calico: Gained carrier Apr 16 23:54:52.200480 kubelet[2815]: I0416 23:54:52.200409 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1045738-997e-49eb-b29a-a871469ada71" path="/var/lib/kubelet/pods/e1045738-997e-49eb-b29a-a871469ada71/volumes" Apr 16 23:54:52.824300 containerd[1628]: time="2026-04-16T23:54:52.824259161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:52.825353 containerd[1628]: time="2026-04-16T23:54:52.825328995Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 23:54:52.826524 containerd[1628]: time="2026-04-16T23:54:52.826431560Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:52.828244 containerd[1628]: time="2026-04-16T23:54:52.828215051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:52.828754 containerd[1628]: time="2026-04-16T23:54:52.828633909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.931524511s" Apr 16 23:54:52.828754 containerd[1628]: time="2026-04-16T23:54:52.828655519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 23:54:52.829498 containerd[1628]: time="2026-04-16T23:54:52.829468964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:54:52.832083 containerd[1628]: time="2026-04-16T23:54:52.832060331Z" level=info msg="CreateContainer within sandbox \"29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:54:52.837719 containerd[1628]: time="2026-04-16T23:54:52.837699584Z" level=info msg="Container f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:52.844840 containerd[1628]: 
time="2026-04-16T23:54:52.844813119Z" level=info msg="CreateContainer within sandbox \"29cf1d833b33d737a96fa80761050a6cdf1cdb1ecbcde394ec5a0e48376f79d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667\"" Apr 16 23:54:52.845296 containerd[1628]: time="2026-04-16T23:54:52.845278587Z" level=info msg="StartContainer for \"f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667\"" Apr 16 23:54:52.846081 containerd[1628]: time="2026-04-16T23:54:52.846062413Z" level=info msg="connecting to shim f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667" address="unix:///run/containerd/s/2972e8c82b48efdfc6e07630018bc7c46f35f3978fe851357b7d323ad60bba2d" protocol=ttrpc version=3 Apr 16 23:54:52.862417 systemd[1]: Started cri-containerd-f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667.scope - libcontainer container f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667. 
Apr 16 23:54:52.882371 systemd-networkd[1508]: cali96709c5a77f: Gained IPv6LL Apr 16 23:54:52.925205 containerd[1628]: time="2026-04-16T23:54:52.925174353Z" level=info msg="StartContainer for \"f4c2a6812c87a9cf31733a6b24e60148d43db2d81ba8b7496763b6760aed0667\" returns successfully" Apr 16 23:54:53.204373 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Apr 16 23:54:53.389764 kubelet[2815]: I0416 23:54:53.389692 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c678cb499-dnnwl" podStartSLOduration=23.455273295 podStartE2EDuration="27.388130188s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:48.89674388 +0000 UTC m=+38.789187991" lastFinishedPulling="2026-04-16 23:54:52.829600773 +0000 UTC m=+42.722044884" observedRunningTime="2026-04-16 23:54:53.386737684 +0000 UTC m=+43.279181795" watchObservedRunningTime="2026-04-16 23:54:53.388130188 +0000 UTC m=+43.280574309" Apr 16 23:54:54.379276 kubelet[2815]: I0416 23:54:54.379216 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:54:54.842280 containerd[1628]: time="2026-04-16T23:54:54.842218609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:54.843342 containerd[1628]: time="2026-04-16T23:54:54.843324143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 23:54:54.844922 containerd[1628]: time="2026-04-16T23:54:54.844465689Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:54.846330 containerd[1628]: time="2026-04-16T23:54:54.846289811Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:54.846756 containerd[1628]: time="2026-04-16T23:54:54.846645419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.017158565s" Apr 16 23:54:54.846756 containerd[1628]: time="2026-04-16T23:54:54.846665859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 23:54:54.848234 containerd[1628]: time="2026-04-16T23:54:54.848153282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:54:54.850076 containerd[1628]: time="2026-04-16T23:54:54.850060745Z" level=info msg="CreateContainer within sandbox \"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:54:54.861082 containerd[1628]: time="2026-04-16T23:54:54.860485198Z" level=info msg="Container deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:54.878706 containerd[1628]: time="2026-04-16T23:54:54.878674269Z" level=info msg="CreateContainer within sandbox \"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea\"" Apr 16 23:54:54.879148 containerd[1628]: time="2026-04-16T23:54:54.879136646Z" level=info msg="StartContainer for 
\"deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea\"" Apr 16 23:54:54.880456 containerd[1628]: time="2026-04-16T23:54:54.880433911Z" level=info msg="connecting to shim deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea" address="unix:///run/containerd/s/b0bdc8fedd94275b97c4003682012132b710c762837f9aebc4c35dfad0cbbc48" protocol=ttrpc version=3 Apr 16 23:54:54.901426 systemd[1]: Started cri-containerd-deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea.scope - libcontainer container deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea. Apr 16 23:54:54.951765 containerd[1628]: time="2026-04-16T23:54:54.951710278Z" level=info msg="StartContainer for \"deb2cd07e3102886393ce9bde060506662916da093be048580031aa9b07171ea\" returns successfully" Apr 16 23:54:58.152275 containerd[1628]: time="2026-04-16T23:54:58.152233755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:58.153259 containerd[1628]: time="2026-04-16T23:54:58.153243701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 23:54:58.154351 containerd[1628]: time="2026-04-16T23:54:58.154335887Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:58.156060 containerd[1628]: time="2026-04-16T23:54:58.156032871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:54:58.156708 containerd[1628]: time="2026-04-16T23:54:58.156399460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.308114568s" Apr 16 23:54:58.156708 containerd[1628]: time="2026-04-16T23:54:58.156420350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 23:54:58.157624 containerd[1628]: time="2026-04-16T23:54:58.157586145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:54:58.169090 containerd[1628]: time="2026-04-16T23:54:58.169060876Z" level=info msg="CreateContainer within sandbox \"f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:54:58.184205 containerd[1628]: time="2026-04-16T23:54:58.183548246Z" level=info msg="Container 619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:54:58.191948 containerd[1628]: time="2026-04-16T23:54:58.191916336Z" level=info msg="CreateContainer within sandbox \"f6a946b3d1329782f3e03487776cffe2be81cb165c13552a36592f5bbdae2227\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5\"" Apr 16 23:54:58.193255 containerd[1628]: time="2026-04-16T23:54:58.193225262Z" level=info msg="StartContainer for \"619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5\"" Apr 16 23:54:58.194441 containerd[1628]: time="2026-04-16T23:54:58.194370797Z" level=info msg="connecting to shim 619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5" 
address="unix:///run/containerd/s/fe4686ee6c45794f8d8e76bb1442f1cb13c84eaeeda227d10d8118aa25e2c6d1" protocol=ttrpc version=3 Apr 16 23:54:58.213417 systemd[1]: Started cri-containerd-619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5.scope - libcontainer container 619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5. Apr 16 23:54:58.255842 containerd[1628]: time="2026-04-16T23:54:58.255782193Z" level=info msg="StartContainer for \"619ef00b151a45efb8a9af8842beb4446b21f1aec3570e97e28295dab213ffd5\" returns successfully" Apr 16 23:54:58.423237 kubelet[2815]: I0416 23:54:58.423015 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-549c89df8f-4nr8q" podStartSLOduration=23.829189715 podStartE2EDuration="32.422996162s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:49.563166621 +0000 UTC m=+39.455610732" lastFinishedPulling="2026-04-16 23:54:58.156973068 +0000 UTC m=+48.049417179" observedRunningTime="2026-04-16 23:54:58.422590113 +0000 UTC m=+48.315034254" watchObservedRunningTime="2026-04-16 23:54:58.422996162 +0000 UTC m=+48.315440303" Apr 16 23:54:59.198748 containerd[1628]: time="2026-04-16T23:54:59.198663789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6ddxr,Uid:e4f842a3-14cd-4a8b-9d17-87a0aab8d812,Namespace:calico-system,Attempt:0,}" Apr 16 23:54:59.349121 systemd-networkd[1508]: caliddd93f0a6c6: Link UP Apr 16 23:54:59.350051 systemd-networkd[1508]: caliddd93f0a6c6: Gained carrier Apr 16 23:54:59.360762 containerd[1628]: 2026-04-16 23:54:59.275 [INFO][4657] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0 goldmane-cccfbd5cf- calico-system e4f842a3-14cd-4a8b-9d17-87a0aab8d812 829 0 2026-04-16 23:54:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 goldmane-cccfbd5cf-6ddxr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliddd93f0a6c6 [] [] }} ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-" Apr 16 23:54:59.360762 containerd[1628]: 2026-04-16 23:54:59.276 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.360762 containerd[1628]: 2026-04-16 23:54:59.307 [INFO][4669] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" HandleID="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Workload="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.312 [INFO][4669] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" HandleID="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Workload="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"goldmane-cccfbd5cf-6ddxr", "timestamp":"2026-04-16 23:54:59.30739471 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057af20)} Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.312 [INFO][4669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.312 [INFO][4669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.312 [INFO][4669] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.315 [INFO][4669] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.320 [INFO][4669] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.324 [INFO][4669] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.326 [INFO][4669] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361411 containerd[1628]: 2026-04-16 23:54:59.328 [INFO][4669] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.328 [INFO][4669] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.331 [INFO][4669] ipam/ipam.go 1806: Creating new 
handle: k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.336 [INFO][4669] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.342 [INFO][4669] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 handle="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.342 [INFO][4669] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" host="ci-4459-2-4-n-84256b4514" Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.342 [INFO][4669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:54:59.361653 containerd[1628]: 2026-04-16 23:54:59.342 [INFO][4669] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" HandleID="k8s-pod-network.d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Workload="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.361769 containerd[1628]: 2026-04-16 23:54:59.345 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e4f842a3-14cd-4a8b-9d17-87a0aab8d812", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"goldmane-cccfbd5cf-6ddxr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"caliddd93f0a6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:59.361810 containerd[1628]: 2026-04-16 23:54:59.345 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.133/32] ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.361810 containerd[1628]: 2026-04-16 23:54:59.345 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddd93f0a6c6 ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.361810 containerd[1628]: 2026-04-16 23:54:59.347 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.361861 containerd[1628]: 2026-04-16 23:54:59.347 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", 
UID:"e4f842a3-14cd-4a8b-9d17-87a0aab8d812", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d", Pod:"goldmane-cccfbd5cf-6ddxr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliddd93f0a6c6", MAC:"6e:b1:38:ed:fb:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:54:59.361898 containerd[1628]: 2026-04-16 23:54:59.356 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6ddxr" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-goldmane--cccfbd5cf--6ddxr-eth0" Apr 16 23:54:59.388367 containerd[1628]: time="2026-04-16T23:54:59.387924346Z" level=info msg="connecting to shim d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d" address="unix:///run/containerd/s/26b0dd3f10dd5abadca7405f7bc42280a00bf6c35dae404e9da6b978dbe06036" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:54:59.421513 systemd[1]: Started 
cri-containerd-d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d.scope - libcontainer container d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d. Apr 16 23:54:59.472839 containerd[1628]: time="2026-04-16T23:54:59.472226079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6ddxr,Uid:e4f842a3-14cd-4a8b-9d17-87a0aab8d812,Namespace:calico-system,Attempt:0,} returns sandbox id \"d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d\"" Apr 16 23:55:00.090806 containerd[1628]: time="2026-04-16T23:55:00.090765268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:00.091778 containerd[1628]: time="2026-04-16T23:55:00.091691595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 23:55:00.092943 containerd[1628]: time="2026-04-16T23:55:00.092866232Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:00.094788 containerd[1628]: time="2026-04-16T23:55:00.094759946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:00.095291 containerd[1628]: time="2026-04-16T23:55:00.095107675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.93748162s" Apr 16 23:55:00.095291 containerd[1628]: time="2026-04-16T23:55:00.095129385Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 23:55:00.096340 containerd[1628]: time="2026-04-16T23:55:00.096264542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:55:00.098627 containerd[1628]: time="2026-04-16T23:55:00.098605414Z" level=info msg="CreateContainer within sandbox \"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:55:00.106263 containerd[1628]: time="2026-04-16T23:55:00.105463783Z" level=info msg="Container 7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:00.122002 containerd[1628]: time="2026-04-16T23:55:00.121972881Z" level=info msg="CreateContainer within sandbox \"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861\"" Apr 16 23:55:00.122477 containerd[1628]: time="2026-04-16T23:55:00.122457760Z" level=info msg="StartContainer for \"7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861\"" Apr 16 23:55:00.123671 containerd[1628]: time="2026-04-16T23:55:00.123642186Z" level=info msg="connecting to shim 7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861" address="unix:///run/containerd/s/ce5ff75475e5babce268cd222eab4c722369c48fbd1280bb005655cbd661e170" protocol=ttrpc version=3 Apr 16 23:55:00.145436 systemd[1]: Started cri-containerd-7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861.scope - libcontainer container 7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861. 
Apr 16 23:55:00.200541 containerd[1628]: time="2026-04-16T23:55:00.200501227Z" level=info msg="StartContainer for \"7ec76633d899641ed5e4294dec1a629aedb180379f9ba13c817adaa66b657861\" returns successfully" Apr 16 23:55:01.073623 systemd-networkd[1508]: caliddd93f0a6c6: Gained IPv6LL Apr 16 23:55:02.200468 containerd[1628]: time="2026-04-16T23:55:02.199934290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-bdvmw,Uid:f0b43771-567b-4f37-b976-f139d1221148,Namespace:calico-system,Attempt:0,}" Apr 16 23:55:02.203675 containerd[1628]: time="2026-04-16T23:55:02.203627840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpprd,Uid:9123780b-75f4-4ed3-8d85-17e9f594baa9,Namespace:kube-system,Attempt:0,}" Apr 16 23:55:02.321772 systemd-networkd[1508]: calid9b6446af1f: Link UP Apr 16 23:55:02.322812 systemd-networkd[1508]: calid9b6446af1f: Gained carrier Apr 16 23:55:02.334611 containerd[1628]: 2026-04-16 23:55:02.260 [INFO][4798] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0 calico-apiserver-7c678cb499- calico-system f0b43771-567b-4f37-b976-f139d1221148 828 0 2026-04-16 23:54:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c678cb499 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 calico-apiserver-7c678cb499-bdvmw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid9b6446af1f [] [] }} ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-" Apr 16 23:55:02.334611 
containerd[1628]: 2026-04-16 23:55:02.260 [INFO][4798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.334611 containerd[1628]: 2026-04-16 23:55:02.285 [INFO][4824] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" HandleID="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.294 [INFO][4824] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" HandleID="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fbe20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"calico-apiserver-7c678cb499-bdvmw", "timestamp":"2026-04-16 23:55:02.285070023 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039a2c0)} Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.294 [INFO][4824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.294 [INFO][4824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.294 [INFO][4824] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.297 [INFO][4824] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.301 [INFO][4824] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.304 [INFO][4824] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.305 [INFO][4824] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.334765 containerd[1628]: 2026-04-16 23:55:02.307 [INFO][4824] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.307 [INFO][4824] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.308 [INFO][4824] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8 Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.311 [INFO][4824] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.316 [INFO][4824] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.317 [INFO][4824] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.317 [INFO][4824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:55:02.335205 containerd[1628]: 2026-04-16 23:55:02.317 [INFO][4824] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" HandleID="k8s-pod-network.17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Workload="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.335506 containerd[1628]: 2026-04-16 23:55:02.318 [INFO][4798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0", GenerateName:"calico-apiserver-7c678cb499-", Namespace:"calico-system", SelfLink:"", UID:"f0b43771-567b-4f37-b976-f139d1221148", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c678cb499", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"calico-apiserver-7c678cb499-bdvmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9b6446af1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:02.335554 containerd[1628]: 2026-04-16 23:55:02.318 [INFO][4798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.134/32] ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.335554 containerd[1628]: 2026-04-16 23:55:02.318 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9b6446af1f ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.335554 containerd[1628]: 2026-04-16 23:55:02.322 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" 
Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.335712 containerd[1628]: 2026-04-16 23:55:02.323 [INFO][4798] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0", GenerateName:"calico-apiserver-7c678cb499-", Namespace:"calico-system", SelfLink:"", UID:"f0b43771-567b-4f37-b976-f139d1221148", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c678cb499", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8", Pod:"calico-apiserver-7c678cb499-bdvmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9b6446af1f", 
MAC:"b2:32:5b:0f:55:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:02.335885 containerd[1628]: 2026-04-16 23:55:02.332 [INFO][4798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" Namespace="calico-system" Pod="calico-apiserver-7c678cb499-bdvmw" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-calico--apiserver--7c678cb499--bdvmw-eth0" Apr 16 23:55:02.363328 containerd[1628]: time="2026-04-16T23:55:02.360464723Z" level=info msg="connecting to shim 17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8" address="unix:///run/containerd/s/a45b72f2146fc0745ae92e0442e015ca7b86406f31df23925064c26d6326d80d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:55:02.392421 systemd[1]: Started cri-containerd-17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8.scope - libcontainer container 17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8. 
Apr 16 23:55:02.430569 systemd-networkd[1508]: calid11fef67188: Link UP Apr 16 23:55:02.431374 systemd-networkd[1508]: calid11fef67188: Gained carrier Apr 16 23:55:02.452859 containerd[1628]: 2026-04-16 23:55:02.266 [INFO][4803] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0 coredns-66bc5c9577- kube-system 9123780b-75f4-4ed3-8d85-17e9f594baa9 830 0 2026-04-16 23:54:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 coredns-66bc5c9577-qpprd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid11fef67188 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-" Apr 16 23:55:02.452859 containerd[1628]: 2026-04-16 23:55:02.267 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.452859 containerd[1628]: 2026-04-16 23:55:02.291 [INFO][4829] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" HandleID="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.297 [INFO][4829] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" HandleID="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"coredns-66bc5c9577-qpprd", "timestamp":"2026-04-16 23:55:02.291763524 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004414a0)} Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.297 [INFO][4829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.317 [INFO][4829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.317 [INFO][4829] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.399 [INFO][4829] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.404 [INFO][4829] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.408 [INFO][4829] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.409 [INFO][4829] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453010 containerd[1628]: 2026-04-16 23:55:02.411 [INFO][4829] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.411 [INFO][4829] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.413 [INFO][4829] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281 Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.416 [INFO][4829] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.421 [INFO][4829] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.105.135/26] block=192.168.105.128/26 handle="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.421 [INFO][4829] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.135/26] handle="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.421 [INFO][4829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:55:02.453193 containerd[1628]: 2026-04-16 23:55:02.421 [INFO][4829] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.135/26] IPv6=[] ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" HandleID="k8s-pod-network.14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.453303 containerd[1628]: 2026-04-16 23:55:02.425 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9123780b-75f4-4ed3-8d85-17e9f594baa9", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"coredns-66bc5c9577-qpprd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid11fef67188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:02.453303 containerd[1628]: 2026-04-16 23:55:02.425 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.135/32] ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.453303 containerd[1628]: 2026-04-16 23:55:02.425 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid11fef67188 
ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.453303 containerd[1628]: 2026-04-16 23:55:02.430 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.453303 containerd[1628]: 2026-04-16 23:55:02.430 [INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9123780b-75f4-4ed3-8d85-17e9f594baa9", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", 
ContainerID:"14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281", Pod:"coredns-66bc5c9577-qpprd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid11fef67188", MAC:"c6:47:76:29:43:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:02.453468 containerd[1628]: 2026-04-16 23:55:02.446 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" Namespace="kube-system" Pod="coredns-66bc5c9577-qpprd" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--qpprd-eth0" Apr 16 23:55:02.461438 containerd[1628]: time="2026-04-16T23:55:02.460670993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c678cb499-bdvmw,Uid:f0b43771-567b-4f37-b976-f139d1221148,Namespace:calico-system,Attempt:0,} returns sandbox id \"17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8\"" Apr 16 23:55:02.467154 containerd[1628]: time="2026-04-16T23:55:02.467113756Z" 
level=info msg="CreateContainer within sandbox \"17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:55:02.477518 containerd[1628]: time="2026-04-16T23:55:02.477455946Z" level=info msg="Container d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:02.488172 containerd[1628]: time="2026-04-16T23:55:02.487909148Z" level=info msg="connecting to shim 14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281" address="unix:///run/containerd/s/90c686a105a19a2aa552aea35af1845a73d5f8c91a55ca8bc849ef4b29c005c3" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:55:02.488172 containerd[1628]: time="2026-04-16T23:55:02.488040117Z" level=info msg="CreateContainer within sandbox \"17ed40579058809058d8133fd8fd3f9fc740fcd1a811d30ba52b1cd7d2a6f8b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465\"" Apr 16 23:55:02.489182 containerd[1628]: time="2026-04-16T23:55:02.489169635Z" level=info msg="StartContainer for \"d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465\"" Apr 16 23:55:02.490965 containerd[1628]: time="2026-04-16T23:55:02.490949909Z" level=info msg="connecting to shim d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465" address="unix:///run/containerd/s/a45b72f2146fc0745ae92e0442e015ca7b86406f31df23925064c26d6326d80d" protocol=ttrpc version=3 Apr 16 23:55:02.514422 systemd[1]: Started cri-containerd-d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465.scope - libcontainer container d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465. Apr 16 23:55:02.519181 systemd[1]: Started cri-containerd-14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281.scope - libcontainer container 14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281. 
Apr 16 23:55:02.565635 containerd[1628]: time="2026-04-16T23:55:02.565496732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qpprd,Uid:9123780b-75f4-4ed3-8d85-17e9f594baa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281\"" Apr 16 23:55:02.570031 containerd[1628]: time="2026-04-16T23:55:02.569919429Z" level=info msg="CreateContainer within sandbox \"14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:55:02.580040 containerd[1628]: time="2026-04-16T23:55:02.580010742Z" level=info msg="Container f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:02.585327 containerd[1628]: time="2026-04-16T23:55:02.585294796Z" level=info msg="CreateContainer within sandbox \"14a43f9a294d51824e03368a16c8b0362bed95e8d716d43e0b33b6974eee3281\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076\"" Apr 16 23:55:02.588091 containerd[1628]: time="2026-04-16T23:55:02.586859772Z" level=info msg="StartContainer for \"f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076\"" Apr 16 23:55:02.588277 containerd[1628]: time="2026-04-16T23:55:02.588263288Z" level=info msg="connecting to shim f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076" address="unix:///run/containerd/s/90c686a105a19a2aa552aea35af1845a73d5f8c91a55ca8bc849ef4b29c005c3" protocol=ttrpc version=3 Apr 16 23:55:02.605433 systemd[1]: Started cri-containerd-f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076.scope - libcontainer container f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076. 
Apr 16 23:55:02.611216 containerd[1628]: time="2026-04-16T23:55:02.611183835Z" level=info msg="StartContainer for \"d17a1c7fd8c2acb708e296d53327a71ca1463762fd70996fdaac9b8bd1fa4465\" returns successfully" Apr 16 23:55:02.641016 containerd[1628]: time="2026-04-16T23:55:02.640969351Z" level=info msg="StartContainer for \"f7dc5dbaa34d20d99b7eadc75264e8d326f155aa46480612756244759ca83076\" returns successfully" Apr 16 23:55:03.201739 containerd[1628]: time="2026-04-16T23:55:03.200872632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-76hxx,Uid:eedea204-8286-4b2e-b09c-075059e45144,Namespace:kube-system,Attempt:0,}" Apr 16 23:55:03.306035 systemd-networkd[1508]: cali9620243eb99: Link UP Apr 16 23:55:03.307041 systemd-networkd[1508]: cali9620243eb99: Gained carrier Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.248 [INFO][5040] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0 coredns-66bc5c9577- kube-system eedea204-8286-4b2e-b09c-075059e45144 827 0 2026-04-16 23:54:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-84256b4514 coredns-66bc5c9577-76hxx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9620243eb99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.248 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.270 [INFO][5051] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" HandleID="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.277 [INFO][5051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" HandleID="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-84256b4514", "pod":"coredns-66bc5c9577-76hxx", "timestamp":"2026-04-16 23:55:03.270970337 +0000 UTC"}, Hostname:"ci-4459-2-4-n-84256b4514", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003acf20)} Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.277 [INFO][5051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.277 [INFO][5051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.277 [INFO][5051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-84256b4514' Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.279 [INFO][5051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.284 [INFO][5051] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.287 [INFO][5051] ipam/ipam.go 526: Trying affinity for 192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.289 [INFO][5051] ipam/ipam.go 160: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.290 [INFO][5051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.291 [INFO][5051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.292 [INFO][5051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.296 [INFO][5051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.300 [INFO][5051] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.105.136/26] block=192.168.105.128/26 handle="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.300 [INFO][5051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.105.136/26] handle="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" host="ci-4459-2-4-n-84256b4514" Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.300 [INFO][5051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:55:03.321092 containerd[1628]: 2026-04-16 23:55:03.300 [INFO][5051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.105.136/26] IPv6=[] ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" HandleID="k8s-pod-network.28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Workload="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321691 containerd[1628]: 2026-04-16 23:55:03.303 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eedea204-8286-4b2e-b09c-075059e45144", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", ContainerID:"", Pod:"coredns-66bc5c9577-76hxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9620243eb99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:03.321691 containerd[1628]: 2026-04-16 23:55:03.303 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.136/32] ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321691 containerd[1628]: 2026-04-16 23:55:03.303 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9620243eb99 
ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321691 containerd[1628]: 2026-04-16 23:55:03.308 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.321691 containerd[1628]: 2026-04-16 23:55:03.308 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eedea204-8286-4b2e-b09c-075059e45144", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-84256b4514", 
ContainerID:"28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b", Pod:"coredns-66bc5c9577-76hxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9620243eb99", MAC:"82:6b:d6:51:95:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:55:03.322479 containerd[1628]: 2026-04-16 23:55:03.318 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" Namespace="kube-system" Pod="coredns-66bc5c9577-76hxx" WorkloadEndpoint="ci--4459--2--4--n--84256b4514-k8s-coredns--66bc5c9577--76hxx-eth0" Apr 16 23:55:03.349382 containerd[1628]: time="2026-04-16T23:55:03.349343110Z" level=info msg="connecting to shim 28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b" address="unix:///run/containerd/s/bd6dd212fa75ffabe87fd7442f1b768b8086cf6076a27a84d8022a27be6fc92f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:55:03.379172 systemd-networkd[1508]: calid9b6446af1f: Gained IPv6LL Apr 16 23:55:03.384565 
systemd[1]: Started cri-containerd-28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b.scope - libcontainer container 28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b. Apr 16 23:55:03.439772 kubelet[2815]: I0416 23:55:03.439732 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qpprd" podStartSLOduration=47.439710871 podStartE2EDuration="47.439710871s" podCreationTimestamp="2026-04-16 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:03.428093942 +0000 UTC m=+53.320538063" watchObservedRunningTime="2026-04-16 23:55:03.439710871 +0000 UTC m=+53.332154992" Apr 16 23:55:03.460160 containerd[1628]: time="2026-04-16T23:55:03.460064358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-76hxx,Uid:eedea204-8286-4b2e-b09c-075059e45144,Namespace:kube-system,Attempt:0,} returns sandbox id \"28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b\"" Apr 16 23:55:03.466487 containerd[1628]: time="2026-04-16T23:55:03.466457991Z" level=info msg="CreateContainer within sandbox \"28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:55:03.476919 containerd[1628]: time="2026-04-16T23:55:03.476878714Z" level=info msg="Container 65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:03.486170 containerd[1628]: time="2026-04-16T23:55:03.486137439Z" level=info msg="CreateContainer within sandbox \"28cefef17a84d360625d1cbf58949dcc47a1ff0a10e92fe0873464898103299b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0\"" Apr 16 23:55:03.486827 containerd[1628]: time="2026-04-16T23:55:03.486801588Z" level=info msg="StartContainer 
for \"65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0\"" Apr 16 23:55:03.487528 containerd[1628]: time="2026-04-16T23:55:03.487509236Z" level=info msg="connecting to shim 65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0" address="unix:///run/containerd/s/bd6dd212fa75ffabe87fd7442f1b768b8086cf6076a27a84d8022a27be6fc92f" protocol=ttrpc version=3 Apr 16 23:55:03.516489 systemd[1]: Started cri-containerd-65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0.scope - libcontainer container 65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0. Apr 16 23:55:03.543489 containerd[1628]: time="2026-04-16T23:55:03.543450028Z" level=info msg="StartContainer for \"65412a22c61c3be7b4d1cae921c2b885ae9fd1551587b3dd480e1c1259ea20d0\" returns successfully" Apr 16 23:55:03.762616 systemd-networkd[1508]: calid11fef67188: Gained IPv6LL Apr 16 23:55:03.949745 containerd[1628]: time="2026-04-16T23:55:03.949693857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:03.950397 containerd[1628]: time="2026-04-16T23:55:03.950366645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 23:55:03.951542 containerd[1628]: time="2026-04-16T23:55:03.951510072Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:03.953845 containerd[1628]: time="2026-04-16T23:55:03.953570146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:03.954573 containerd[1628]: time="2026-04-16T23:55:03.954552234Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.858255162s" Apr 16 23:55:03.954603 containerd[1628]: time="2026-04-16T23:55:03.954578334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 23:55:03.958386 containerd[1628]: time="2026-04-16T23:55:03.958354634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:55:03.959791 containerd[1628]: time="2026-04-16T23:55:03.959769790Z" level=info msg="CreateContainer within sandbox \"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:55:03.967020 containerd[1628]: time="2026-04-16T23:55:03.966997951Z" level=info msg="Container 3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:03.973592 containerd[1628]: time="2026-04-16T23:55:03.973564394Z" level=info msg="CreateContainer within sandbox \"a97fc013155e3b0e691aaa71637671d86acf6c31d77c4b2d09972ca8ffb0377f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15\"" Apr 16 23:55:03.974171 containerd[1628]: time="2026-04-16T23:55:03.973954533Z" level=info msg="StartContainer for \"3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15\"" Apr 16 23:55:03.975050 containerd[1628]: time="2026-04-16T23:55:03.975034880Z" level=info msg="connecting to shim 3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15" 
address="unix:///run/containerd/s/b0bdc8fedd94275b97c4003682012132b710c762837f9aebc4c35dfad0cbbc48" protocol=ttrpc version=3 Apr 16 23:55:03.989452 systemd[1]: Started cri-containerd-3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15.scope - libcontainer container 3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15. Apr 16 23:55:04.046587 containerd[1628]: time="2026-04-16T23:55:04.046533228Z" level=info msg="StartContainer for \"3fe10fd426c60982911b96a448d7f4d26e1e17b531b1d703184cdc9458132a15\" returns successfully" Apr 16 23:55:04.270882 kubelet[2815]: I0416 23:55:04.270818 2815 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:55:04.272891 kubelet[2815]: I0416 23:55:04.272823 2815 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:55:04.432770 kubelet[2815]: I0416 23:55:04.432056 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:55:04.453094 kubelet[2815]: I0416 23:55:04.453010 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tscd9" podStartSLOduration=23.951957818 podStartE2EDuration="38.452992282s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:49.456086423 +0000 UTC m=+39.348530544" lastFinishedPulling="2026-04-16 23:55:03.957120897 +0000 UTC m=+53.849565008" observedRunningTime="2026-04-16 23:55:04.451254236 +0000 UTC m=+54.343698387" watchObservedRunningTime="2026-04-16 23:55:04.452992282 +0000 UTC m=+54.345436433" Apr 16 23:55:04.455274 kubelet[2815]: I0416 23:55:04.455175 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c678cb499-bdvmw" podStartSLOduration=38.455161706 
podStartE2EDuration="38.455161706s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:03.45913911 +0000 UTC m=+53.351583221" watchObservedRunningTime="2026-04-16 23:55:04.455161706 +0000 UTC m=+54.347605857" Apr 16 23:55:04.721584 systemd-networkd[1508]: cali9620243eb99: Gained IPv6LL Apr 16 23:55:07.327046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129364168.mount: Deactivated successfully. Apr 16 23:55:07.625298 containerd[1628]: time="2026-04-16T23:55:07.625188546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:07.627082 containerd[1628]: time="2026-04-16T23:55:07.626814394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 23:55:07.627988 containerd[1628]: time="2026-04-16T23:55:07.627815911Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:07.629663 containerd[1628]: time="2026-04-16T23:55:07.629647938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:07.630074 containerd[1628]: time="2026-04-16T23:55:07.630045086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.671588473s" Apr 16 23:55:07.630074 
containerd[1628]: time="2026-04-16T23:55:07.630068576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 23:55:07.631167 containerd[1628]: time="2026-04-16T23:55:07.631125094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:55:07.633713 containerd[1628]: time="2026-04-16T23:55:07.633650989Z" level=info msg="CreateContainer within sandbox \"d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:55:07.642511 containerd[1628]: time="2026-04-16T23:55:07.642409490Z" level=info msg="Container 020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:07.647010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858999277.mount: Deactivated successfully. Apr 16 23:55:07.659226 containerd[1628]: time="2026-04-16T23:55:07.659189604Z" level=info msg="CreateContainer within sandbox \"d249bd72950cbd00c3b7cc086874f8b8c3426758eff169b95bff2de591de273d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42\"" Apr 16 23:55:07.659684 containerd[1628]: time="2026-04-16T23:55:07.659664933Z" level=info msg="StartContainer for \"020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42\"" Apr 16 23:55:07.660494 containerd[1628]: time="2026-04-16T23:55:07.660476382Z" level=info msg="connecting to shim 020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42" address="unix:///run/containerd/s/26b0dd3f10dd5abadca7405f7bc42280a00bf6c35dae404e9da6b978dbe06036" protocol=ttrpc version=3 Apr 16 23:55:07.680426 systemd[1]: Started cri-containerd-020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42.scope - libcontainer container 
020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42. Apr 16 23:55:07.722858 containerd[1628]: time="2026-04-16T23:55:07.722811189Z" level=info msg="StartContainer for \"020dfd34d36040e544871cfd5a4420e80ccbde081541dc6bad8e4a86a7779e42\" returns successfully" Apr 16 23:55:08.463787 kubelet[2815]: I0416 23:55:08.463526 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-6ddxr" podStartSLOduration=34.306749036 podStartE2EDuration="42.463506348s" podCreationTimestamp="2026-04-16 23:54:26 +0000 UTC" firstStartedPulling="2026-04-16 23:54:59.473903713 +0000 UTC m=+49.366347834" lastFinishedPulling="2026-04-16 23:55:07.630661025 +0000 UTC m=+57.523105146" observedRunningTime="2026-04-16 23:55:08.463200159 +0000 UTC m=+58.355644310" watchObservedRunningTime="2026-04-16 23:55:08.463506348 +0000 UTC m=+58.355950499" Apr 16 23:55:08.465904 kubelet[2815]: I0416 23:55:08.465233 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-76hxx" podStartSLOduration=52.465223415 podStartE2EDuration="52.465223415s" podCreationTimestamp="2026-04-16 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:04.470280509 +0000 UTC m=+54.362724650" watchObservedRunningTime="2026-04-16 23:55:08.465223415 +0000 UTC m=+58.357667556" Apr 16 23:55:09.255820 kubelet[2815]: I0416 23:55:09.255713 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:55:10.091191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944248095.mount: Deactivated successfully. 
Apr 16 23:55:10.108424 containerd[1628]: time="2026-04-16T23:55:10.108374800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:10.109501 containerd[1628]: time="2026-04-16T23:55:10.109415797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 23:55:10.110351 containerd[1628]: time="2026-04-16T23:55:10.110326016Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:10.112778 containerd[1628]: time="2026-04-16T23:55:10.112128532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:10.112778 containerd[1628]: time="2026-04-16T23:55:10.112651762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.481506068s" Apr 16 23:55:10.112778 containerd[1628]: time="2026-04-16T23:55:10.112683172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 23:55:10.119007 containerd[1628]: time="2026-04-16T23:55:10.118973880Z" level=info msg="CreateContainer within sandbox \"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:55:10.127462 
containerd[1628]: time="2026-04-16T23:55:10.127441865Z" level=info msg="Container 6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:55:10.134878 containerd[1628]: time="2026-04-16T23:55:10.134397462Z" level=info msg="CreateContainer within sandbox \"edadae8eb0079824991a1b848bc87a44c8a23dc81af873f21b58c43863ebc1d8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b\"" Apr 16 23:55:10.135498 containerd[1628]: time="2026-04-16T23:55:10.135428921Z" level=info msg="StartContainer for \"6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b\"" Apr 16 23:55:10.136451 containerd[1628]: time="2026-04-16T23:55:10.136401108Z" level=info msg="connecting to shim 6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b" address="unix:///run/containerd/s/ce5ff75475e5babce268cd222eab4c722369c48fbd1280bb005655cbd661e170" protocol=ttrpc version=3 Apr 16 23:55:10.154557 systemd[1]: Started cri-containerd-6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b.scope - libcontainer container 6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b. 
Apr 16 23:55:10.213219 containerd[1628]: time="2026-04-16T23:55:10.213175258Z" level=info msg="StartContainer for \"6f029caa7ba3ed0a489a918a4fa5bfe4bed17ca7f5552ad0ec411358431e581b\" returns successfully" Apr 16 23:55:10.479286 kubelet[2815]: I0416 23:55:10.478886 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d86df56f4-lfgw7" podStartSLOduration=1.354042113 podStartE2EDuration="20.478522493s" podCreationTimestamp="2026-04-16 23:54:50 +0000 UTC" firstStartedPulling="2026-04-16 23:54:50.989013051 +0000 UTC m=+40.881457162" lastFinishedPulling="2026-04-16 23:55:10.113493431 +0000 UTC m=+60.005937542" observedRunningTime="2026-04-16 23:55:10.476722475 +0000 UTC m=+60.369166626" watchObservedRunningTime="2026-04-16 23:55:10.478522493 +0000 UTC m=+60.370966644" Apr 16 23:55:44.989285 systemd[1]: Started sshd@9-77.42.47.3:22-4.175.71.9:47040.service - OpenSSH per-connection server daemon (4.175.71.9:47040). Apr 16 23:55:45.195375 sshd[5476]: Accepted publickey for core from 4.175.71.9 port 47040 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:45.197562 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:45.205444 systemd-logind[1601]: New session 10 of user core. Apr 16 23:55:45.213564 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 23:55:45.371402 sshd[5490]: Connection closed by 4.175.71.9 port 47040 Apr 16 23:55:45.372349 sshd-session[5476]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:45.378049 systemd-logind[1601]: Session 10 logged out. Waiting for processes to exit. Apr 16 23:55:45.379231 systemd[1]: sshd@9-77.42.47.3:22-4.175.71.9:47040.service: Deactivated successfully. Apr 16 23:55:45.382965 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:55:45.385357 systemd-logind[1601]: Removed session 10. 
Apr 16 23:55:48.316229 kubelet[2815]: I0416 23:55:48.314859 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:55:50.415654 systemd[1]: Started sshd@10-77.42.47.3:22-4.175.71.9:58522.service - OpenSSH per-connection server daemon (4.175.71.9:58522). Apr 16 23:55:50.622063 sshd[5508]: Accepted publickey for core from 4.175.71.9 port 58522 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:50.624079 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:50.631628 systemd-logind[1601]: New session 11 of user core. Apr 16 23:55:50.637602 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 23:55:50.805571 sshd[5511]: Connection closed by 4.175.71.9 port 58522 Apr 16 23:55:50.806596 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:50.811541 systemd[1]: sshd@10-77.42.47.3:22-4.175.71.9:58522.service: Deactivated successfully. Apr 16 23:55:50.813669 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 23:55:50.814593 systemd-logind[1601]: Session 11 logged out. Waiting for processes to exit. Apr 16 23:55:50.816583 systemd-logind[1601]: Removed session 11. Apr 16 23:55:55.845514 systemd[1]: Started sshd@11-77.42.47.3:22-4.175.71.9:36112.service - OpenSSH per-connection server daemon (4.175.71.9:36112). Apr 16 23:55:56.034844 sshd[5572]: Accepted publickey for core from 4.175.71.9 port 36112 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:56.037474 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:56.045703 systemd-logind[1601]: New session 12 of user core. Apr 16 23:55:56.054564 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 16 23:55:56.214082 sshd[5575]: Connection closed by 4.175.71.9 port 36112
Apr 16 23:55:56.215580 sshd-session[5572]: pam_unix(sshd:session): session closed for user core
Apr 16 23:55:56.219421 systemd-logind[1601]: Session 12 logged out. Waiting for processes to exit.
Apr 16 23:55:56.219461 systemd[1]: sshd@11-77.42.47.3:22-4.175.71.9:36112.service: Deactivated successfully.
Apr 16 23:55:56.221002 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 23:55:56.223817 systemd-logind[1601]: Removed session 12.
Apr 16 23:56:01.261049 systemd[1]: Started sshd@12-77.42.47.3:22-4.175.71.9:36122.service - OpenSSH per-connection server daemon (4.175.71.9:36122).
Apr 16 23:56:01.470996 sshd[5610]: Accepted publickey for core from 4.175.71.9 port 36122 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:01.475443 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:01.485424 systemd-logind[1601]: New session 13 of user core.
Apr 16 23:56:01.493537 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 23:56:01.655651 sshd[5613]: Connection closed by 4.175.71.9 port 36122
Apr 16 23:56:01.656716 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:01.661938 systemd[1]: sshd@12-77.42.47.3:22-4.175.71.9:36122.service: Deactivated successfully.
Apr 16 23:56:01.666862 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 23:56:01.672020 systemd-logind[1601]: Session 13 logged out. Waiting for processes to exit.
Apr 16 23:56:01.673787 systemd-logind[1601]: Removed session 13.
Apr 16 23:56:06.702093 systemd[1]: Started sshd@13-77.42.47.3:22-4.175.71.9:35374.service - OpenSSH per-connection server daemon (4.175.71.9:35374).
Apr 16 23:56:06.919225 sshd[5643]: Accepted publickey for core from 4.175.71.9 port 35374 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:06.926117 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:06.941778 systemd-logind[1601]: New session 14 of user core.
Apr 16 23:56:06.950517 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 23:56:07.093052 sshd[5668]: Connection closed by 4.175.71.9 port 35374
Apr 16 23:56:07.094467 sshd-session[5643]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:07.098159 systemd[1]: sshd@13-77.42.47.3:22-4.175.71.9:35374.service: Deactivated successfully.
Apr 16 23:56:07.099855 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 23:56:07.101037 systemd-logind[1601]: Session 14 logged out. Waiting for processes to exit.
Apr 16 23:56:07.102610 systemd-logind[1601]: Removed session 14.
Apr 16 23:56:07.133739 systemd[1]: Started sshd@14-77.42.47.3:22-4.175.71.9:35388.service - OpenSSH per-connection server daemon (4.175.71.9:35388).
Apr 16 23:56:07.323578 sshd[5681]: Accepted publickey for core from 4.175.71.9 port 35388 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:07.326897 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:07.337401 systemd-logind[1601]: New session 15 of user core.
Apr 16 23:56:07.343532 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 23:56:07.542554 sshd[5684]: Connection closed by 4.175.71.9 port 35388
Apr 16 23:56:07.543252 sshd-session[5681]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:07.547415 systemd-logind[1601]: Session 15 logged out. Waiting for processes to exit.
Apr 16 23:56:07.547633 systemd[1]: sshd@14-77.42.47.3:22-4.175.71.9:35388.service: Deactivated successfully.
Apr 16 23:56:07.549588 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 23:56:07.551228 systemd-logind[1601]: Removed session 15.
Apr 16 23:56:07.581387 systemd[1]: Started sshd@15-77.42.47.3:22-4.175.71.9:35390.service - OpenSSH per-connection server daemon (4.175.71.9:35390).
Apr 16 23:56:07.775095 sshd[5694]: Accepted publickey for core from 4.175.71.9 port 35390 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:07.776304 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:07.780978 systemd-logind[1601]: New session 16 of user core.
Apr 16 23:56:07.789449 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 23:56:07.976554 sshd[5697]: Connection closed by 4.175.71.9 port 35390
Apr 16 23:56:07.977615 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:07.983744 systemd-logind[1601]: Session 16 logged out. Waiting for processes to exit.
Apr 16 23:56:07.984799 systemd[1]: sshd@15-77.42.47.3:22-4.175.71.9:35390.service: Deactivated successfully.
Apr 16 23:56:07.986932 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 23:56:07.988155 systemd-logind[1601]: Removed session 16.
Apr 16 23:56:13.024692 systemd[1]: Started sshd@16-77.42.47.3:22-4.175.71.9:35400.service - OpenSSH per-connection server daemon (4.175.71.9:35400).
Apr 16 23:56:13.232383 sshd[5746]: Accepted publickey for core from 4.175.71.9 port 35400 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:13.236016 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:13.247620 systemd-logind[1601]: New session 17 of user core.
Apr 16 23:56:13.257713 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 23:56:13.428525 sshd[5749]: Connection closed by 4.175.71.9 port 35400
Apr 16 23:56:13.429997 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:13.437518 systemd[1]: sshd@16-77.42.47.3:22-4.175.71.9:35400.service: Deactivated successfully.
Apr 16 23:56:13.441598 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 23:56:13.444775 systemd-logind[1601]: Session 17 logged out. Waiting for processes to exit.
Apr 16 23:56:13.448418 systemd-logind[1601]: Removed session 17.
Apr 16 23:56:13.471527 systemd[1]: Started sshd@17-77.42.47.3:22-4.175.71.9:35404.service - OpenSSH per-connection server daemon (4.175.71.9:35404).
Apr 16 23:56:13.676947 sshd[5761]: Accepted publickey for core from 4.175.71.9 port 35404 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:13.679839 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:13.689357 systemd-logind[1601]: New session 18 of user core.
Apr 16 23:56:13.697509 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 23:56:14.014362 sshd[5764]: Connection closed by 4.175.71.9 port 35404
Apr 16 23:56:14.017011 sshd-session[5761]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:14.024637 systemd-logind[1601]: Session 18 logged out. Waiting for processes to exit.
Apr 16 23:56:14.025827 systemd[1]: sshd@17-77.42.47.3:22-4.175.71.9:35404.service: Deactivated successfully.
Apr 16 23:56:14.030012 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 23:56:14.034259 systemd-logind[1601]: Removed session 18.
Apr 16 23:56:14.062122 systemd[1]: Started sshd@18-77.42.47.3:22-4.175.71.9:35418.service - OpenSSH per-connection server daemon (4.175.71.9:35418).
Apr 16 23:56:14.288754 sshd[5774]: Accepted publickey for core from 4.175.71.9 port 35418 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:14.291546 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:14.302451 systemd-logind[1601]: New session 19 of user core.
Apr 16 23:56:14.309555 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 23:56:14.938265 sshd[5777]: Connection closed by 4.175.71.9 port 35418
Apr 16 23:56:14.939533 sshd-session[5774]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:14.943368 systemd-logind[1601]: Session 19 logged out. Waiting for processes to exit.
Apr 16 23:56:14.943911 systemd[1]: sshd@18-77.42.47.3:22-4.175.71.9:35418.service: Deactivated successfully.
Apr 16 23:56:14.945881 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 23:56:14.947363 systemd-logind[1601]: Removed session 19.
Apr 16 23:56:14.976471 systemd[1]: Started sshd@19-77.42.47.3:22-4.175.71.9:35430.service - OpenSSH per-connection server daemon (4.175.71.9:35430).
Apr 16 23:56:15.166098 sshd[5802]: Accepted publickey for core from 4.175.71.9 port 35430 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:15.169405 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:15.179083 systemd-logind[1601]: New session 20 of user core.
Apr 16 23:56:15.183540 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 23:56:15.456700 sshd[5805]: Connection closed by 4.175.71.9 port 35430
Apr 16 23:56:15.459482 sshd-session[5802]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:15.462280 systemd[1]: sshd@19-77.42.47.3:22-4.175.71.9:35430.service: Deactivated successfully.
Apr 16 23:56:15.465993 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 23:56:15.469443 systemd-logind[1601]: Session 20 logged out. Waiting for processes to exit.
Apr 16 23:56:15.471770 systemd-logind[1601]: Removed session 20.
Apr 16 23:56:15.497377 systemd[1]: Started sshd@20-77.42.47.3:22-4.175.71.9:50668.service - OpenSSH per-connection server daemon (4.175.71.9:50668).
Apr 16 23:56:15.682518 sshd[5815]: Accepted publickey for core from 4.175.71.9 port 50668 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:15.685076 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:15.695574 systemd-logind[1601]: New session 21 of user core.
Apr 16 23:56:15.703624 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 23:56:15.854401 sshd[5818]: Connection closed by 4.175.71.9 port 50668
Apr 16 23:56:15.855824 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:15.859772 systemd[1]: sshd@20-77.42.47.3:22-4.175.71.9:50668.service: Deactivated successfully.
Apr 16 23:56:15.862174 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 23:56:15.863758 systemd-logind[1601]: Session 21 logged out. Waiting for processes to exit.
Apr 16 23:56:15.865567 systemd-logind[1601]: Removed session 21.
Apr 16 23:56:20.905723 systemd[1]: Started sshd@21-77.42.47.3:22-4.175.71.9:50680.service - OpenSSH per-connection server daemon (4.175.71.9:50680).
Apr 16 23:56:21.107960 sshd[5836]: Accepted publickey for core from 4.175.71.9 port 50680 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:21.110624 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:21.119751 systemd-logind[1601]: New session 22 of user core.
Apr 16 23:56:21.125556 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 23:56:21.279754 sshd[5840]: Connection closed by 4.175.71.9 port 50680
Apr 16 23:56:21.281762 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:21.288200 systemd-logind[1601]: Session 22 logged out. Waiting for processes to exit.
Apr 16 23:56:21.289210 systemd[1]: sshd@21-77.42.47.3:22-4.175.71.9:50680.service: Deactivated successfully.
Apr 16 23:56:21.292689 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 23:56:21.296267 systemd-logind[1601]: Removed session 22.
Apr 16 23:56:26.322208 systemd[1]: Started sshd@22-77.42.47.3:22-4.175.71.9:59912.service - OpenSSH per-connection server daemon (4.175.71.9:59912).
Apr 16 23:56:26.535510 sshd[5906]: Accepted publickey for core from 4.175.71.9 port 59912 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:56:26.538392 sshd-session[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:56:26.547405 systemd-logind[1601]: New session 23 of user core.
Apr 16 23:56:26.554689 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 16 23:56:26.733179 sshd[5909]: Connection closed by 4.175.71.9 port 59912
Apr 16 23:56:26.734894 sshd-session[5906]: pam_unix(sshd:session): session closed for user core
Apr 16 23:56:26.740128 systemd[1]: sshd@22-77.42.47.3:22-4.175.71.9:59912.service: Deactivated successfully.
Apr 16 23:56:26.744214 systemd[1]: session-23.scope: Deactivated successfully.
Apr 16 23:56:26.746066 systemd-logind[1601]: Session 23 logged out. Waiting for processes to exit.
Apr 16 23:56:26.748142 systemd-logind[1601]: Removed session 23.
Apr 16 23:57:15.666309 systemd[1]: cri-containerd-d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996.scope: Deactivated successfully.
Apr 16 23:57:15.666960 systemd[1]: cri-containerd-d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996.scope: Consumed 9.612s CPU time, 135.9M memory peak, 1M read from disk.
Apr 16 23:57:15.673198 containerd[1628]: time="2026-04-16T23:57:15.673058672Z" level=info msg="received container exit event container_id:\"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\" id:\"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\" pid:3146 exit_status:1 exited_at:{seconds:1776383835 nanos:672478223}"
Apr 16 23:57:15.719308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996-rootfs.mount: Deactivated successfully.
Apr 16 23:57:15.811752 kubelet[2815]: I0416 23:57:15.810856 2815 scope.go:117] "RemoveContainer" containerID="d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996"
Apr 16 23:57:15.819371 containerd[1628]: time="2026-04-16T23:57:15.818815518Z" level=info msg="CreateContainer within sandbox \"fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 16 23:57:15.829301 containerd[1628]: time="2026-04-16T23:57:15.829240750Z" level=info msg="Container 21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:15.838590 containerd[1628]: time="2026-04-16T23:57:15.838542094Z" level=info msg="CreateContainer within sandbox \"fd8de1b75adfbc7a9fa4401a8d599371d2f58b2d59c8fc4e90c5f7e5a50080d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281\""
Apr 16 23:57:15.839452 containerd[1628]: time="2026-04-16T23:57:15.839413073Z" level=info msg="StartContainer for \"21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281\""
Apr 16 23:57:15.840211 containerd[1628]: time="2026-04-16T23:57:15.840178871Z" level=info msg="connecting to shim 21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281" address="unix:///run/containerd/s/84aa3272d1e44a2a99fea56e17357720bba755f1d0f5afdf3b0614bc9b9f60d1" protocol=ttrpc version=3
Apr 16 23:57:15.863555 systemd[1]: Started cri-containerd-21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281.scope - libcontainer container 21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281.
Apr 16 23:57:15.898892 containerd[1628]: time="2026-04-16T23:57:15.898860519Z" level=info msg="StartContainer for \"21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281\" returns successfully"
Apr 16 23:57:16.117024 kubelet[2815]: E0416 23:57:16.116653 2815 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44188->10.0.0.2:2379: read: connection timed out"
Apr 16 23:57:17.168004 systemd[1]: cri-containerd-ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5.scope: Deactivated successfully.
Apr 16 23:57:17.170480 systemd[1]: cri-containerd-ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5.scope: Consumed 3.219s CPU time, 68.2M memory peak, 236K read from disk.
Apr 16 23:57:17.177664 containerd[1628]: time="2026-04-16T23:57:17.177586849Z" level=info msg="received container exit event container_id:\"ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5\" id:\"ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5\" pid:2660 exit_status:1 exited_at:{seconds:1776383837 nanos:176465661}"
Apr 16 23:57:17.209223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5-rootfs.mount: Deactivated successfully.
Apr 16 23:57:17.822101 kubelet[2815]: I0416 23:57:17.822013 2815 scope.go:117] "RemoveContainer" containerID="ac6ff2a957c06885447b8bd98ae7d376bfc5176321ba177281d2e6f5ca6fe2b5"
Apr 16 23:57:17.825236 containerd[1628]: time="2026-04-16T23:57:17.825159435Z" level=info msg="CreateContainer within sandbox \"ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 23:57:17.840629 containerd[1628]: time="2026-04-16T23:57:17.840569959Z" level=info msg="Container 8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:17.850669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398565576.mount: Deactivated successfully.
Apr 16 23:57:17.858123 containerd[1628]: time="2026-04-16T23:57:17.858061489Z" level=info msg="CreateContainer within sandbox \"ea8f48a8155545018128905e8c74a2d9bfa0530f3d5933742cbe0cbc802d987e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932\""
Apr 16 23:57:17.861424 containerd[1628]: time="2026-04-16T23:57:17.859674476Z" level=info msg="StartContainer for \"8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932\""
Apr 16 23:57:17.861739 containerd[1628]: time="2026-04-16T23:57:17.861645692Z" level=info msg="connecting to shim 8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932" address="unix:///run/containerd/s/2de69cc2516af0ffaad97cd847aa9c9b17eefcf437bb2ab023f8da448c74053d" protocol=ttrpc version=3
Apr 16 23:57:17.894506 systemd[1]: Started cri-containerd-8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932.scope - libcontainer container 8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932.
Apr 16 23:57:17.940739 containerd[1628]: time="2026-04-16T23:57:17.940650126Z" level=info msg="StartContainer for \"8d961f7fbda334a84150ec5c52426619e848b505bcd78518be1706213fb3c932\" returns successfully"
Apr 16 23:57:19.934416 kubelet[2815]: E0416 23:57:19.932142 2815 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-84256b4514.18a6fbae7fa1158f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-84256b4514,UID:d75076f2849ae4ac692cad8d2610b9e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-84256b4514,},FirstTimestamp:2026-04-16 23:57:09.928383887 +0000 UTC m=+179.820828008,LastTimestamp:2026-04-16 23:57:09.928383887 +0000 UTC m=+179.820828008,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-84256b4514,}"
Apr 16 23:57:21.216819 systemd[1]: cri-containerd-f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a.scope: Deactivated successfully.
Apr 16 23:57:21.218851 systemd[1]: cri-containerd-f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a.scope: Consumed 1.693s CPU time, 22.7M memory peak.
Apr 16 23:57:21.220015 containerd[1628]: time="2026-04-16T23:57:21.219276352Z" level=info msg="received container exit event container_id:\"f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a\" id:\"f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a\" pid:2643 exit_status:1 exited_at:{seconds:1776383841 nanos:218643613}"
Apr 16 23:57:21.269058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a-rootfs.mount: Deactivated successfully.
Apr 16 23:57:21.844447 kubelet[2815]: I0416 23:57:21.844399 2815 scope.go:117] "RemoveContainer" containerID="f712b8cfa5d38e8d1a48c1afda12aa0b3d60708ffe53e8cf8cb4c1c9e420cb0a"
Apr 16 23:57:21.850565 containerd[1628]: time="2026-04-16T23:57:21.850513477Z" level=info msg="CreateContainer within sandbox \"484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 16 23:57:21.863652 containerd[1628]: time="2026-04-16T23:57:21.863612865Z" level=info msg="Container 492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:21.877501 containerd[1628]: time="2026-04-16T23:57:21.877435832Z" level=info msg="CreateContainer within sandbox \"484d37f4a3d12b9a7ebee9e5d1e534848606a43c56e13764a24852d4d5a0267c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30\""
Apr 16 23:57:21.878239 containerd[1628]: time="2026-04-16T23:57:21.878196861Z" level=info msg="StartContainer for \"492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30\""
Apr 16 23:57:21.879951 containerd[1628]: time="2026-04-16T23:57:21.879904888Z" level=info msg="connecting to shim 492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30" address="unix:///run/containerd/s/481b65552310bc0d01ed0b47ec6961d1a3ae9a6aa6358aa13e334c6c8caa26d5" protocol=ttrpc version=3
Apr 16 23:57:21.917539 systemd[1]: Started cri-containerd-492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30.scope - libcontainer container 492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30.
Apr 16 23:57:21.981376 containerd[1628]: time="2026-04-16T23:57:21.981345187Z" level=info msg="StartContainer for \"492237fdb30e05d2b6bfed842c613543e7079cbb943f282137777f6372f58b30\" returns successfully"
Apr 16 23:57:24.698171 systemd[1]: cri-containerd-21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281.scope: Deactivated successfully.
Apr 16 23:57:24.698871 systemd[1]: cri-containerd-21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281.scope: Consumed 239ms CPU time, 38M memory peak, 1M read from disk.
Apr 16 23:57:24.701192 containerd[1628]: time="2026-04-16T23:57:24.701099135Z" level=info msg="received container exit event container_id:\"21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281\" id:\"21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281\" pid:6120 exit_status:1 exited_at:{seconds:1776383844 nanos:700760517}"
Apr 16 23:57:24.739866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281-rootfs.mount: Deactivated successfully.
Apr 16 23:57:24.861201 kubelet[2815]: I0416 23:57:24.861150 2815 scope.go:117] "RemoveContainer" containerID="d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996"
Apr 16 23:57:24.862098 kubelet[2815]: I0416 23:57:24.861593 2815 scope.go:117] "RemoveContainer" containerID="21c88fd948d31fe17a3cc091ff6a84cf5fc0aae9410aa2074d762e8a4d5f0281"
Apr 16 23:57:24.862098 kubelet[2815]: E0416 23:57:24.861780 2815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-v6tqt_tigera-operator(606c6a73-47f4-404b-bb3f-a0206049e643)\"" pod="tigera-operator/tigera-operator-5588576f44-v6tqt" podUID="606c6a73-47f4-404b-bb3f-a0206049e643"
Apr 16 23:57:24.864475 containerd[1628]: time="2026-04-16T23:57:24.864417023Z" level=info msg="RemoveContainer for \"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\""
Apr 16 23:57:24.873533 containerd[1628]: time="2026-04-16T23:57:24.873469039Z" level=info msg="RemoveContainer for \"d6115a1ed7d92e90bf192bc71fc84144cf4af3c3f5b60f2443f6b10354288996\" returns successfully"
Apr 16 23:57:26.118695 kubelet[2815]: E0416 23:57:26.117727 2815 controller.go:195] "Failed to update lease" err="Put \"https://77.42.47.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-84256b4514?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 23:57:36.119704 kubelet[2815]: E0416 23:57:36.119612 2815 controller.go:195] "Failed to update lease" err="Put \"https://77.42.47.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-84256b4514?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"