Apr 24 23:31:50.915587 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 24 23:31:50.915619 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 24 23:31:50.915630 kernel: KASLR enabled
Apr 24 23:31:50.915636 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 24 23:31:50.915642 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 24 23:31:50.915648 kernel: random: crng init done
Apr 24 23:31:50.915655 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:31:50.915661 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 24 23:31:50.915667 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 24 23:31:50.916733 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916750 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916756 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916762 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916769 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916776 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916788 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916795 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916802 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:31:50.916809 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 23:31:50.916815 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 24 23:31:50.916821 kernel: NUMA: Failed to initialise from firmware
Apr 24 23:31:50.916829 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 24 23:31:50.916835 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 24 23:31:50.916841 kernel: Zone ranges:
Apr 24 23:31:50.916848 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 24 23:31:50.916856 kernel: DMA32 empty
Apr 24 23:31:50.916863 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 24 23:31:50.916869 kernel: Movable zone start for each node
Apr 24 23:31:50.916876 kernel: Early memory node ranges
Apr 24 23:31:50.916882 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 24 23:31:50.916889 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 24 23:31:50.916895 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 24 23:31:50.916902 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 24 23:31:50.916908 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 24 23:31:50.916914 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 24 23:31:50.916921 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 24 23:31:50.916927 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 24 23:31:50.916935 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 24 23:31:50.916942 kernel: psci: probing for conduit method from ACPI.
Apr 24 23:31:50.916949 kernel: psci: PSCIv1.1 detected in firmware.
Apr 24 23:31:50.916958 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 24 23:31:50.916965 kernel: psci: Trusted OS migration not required
Apr 24 23:31:50.916972 kernel: psci: SMC Calling Convention v1.1
Apr 24 23:31:50.916980 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 24 23:31:50.916987 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 24 23:31:50.916994 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 24 23:31:50.917001 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 24 23:31:50.917008 kernel: Detected PIPT I-cache on CPU0
Apr 24 23:31:50.917015 kernel: CPU features: detected: GIC system register CPU interface
Apr 24 23:31:50.917022 kernel: CPU features: detected: Hardware dirty bit management
Apr 24 23:31:50.917029 kernel: CPU features: detected: Spectre-v4
Apr 24 23:31:50.917035 kernel: CPU features: detected: Spectre-BHB
Apr 24 23:31:50.917042 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 24 23:31:50.917051 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 24 23:31:50.917057 kernel: CPU features: detected: ARM erratum 1418040
Apr 24 23:31:50.917064 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 24 23:31:50.917071 kernel: alternatives: applying boot alternatives
Apr 24 23:31:50.917080 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:31:50.917087 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:31:50.917094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:31:50.917101 kernel: Fallback order for Node 0: 0
Apr 24 23:31:50.917108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 24 23:31:50.917115 kernel: Policy zone: Normal
Apr 24 23:31:50.917121 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:31:50.917130 kernel: software IO TLB: area num 2.
Apr 24 23:31:50.917137 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 24 23:31:50.917144 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 24 23:31:50.917151 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:31:50.917158 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:31:50.917166 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:31:50.917173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:31:50.917180 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:31:50.917187 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:31:50.917194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:31:50.917201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:31:50.917208 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 24 23:31:50.917216 kernel: GICv3: 256 SPIs implemented
Apr 24 23:31:50.917223 kernel: GICv3: 0 Extended SPIs implemented
Apr 24 23:31:50.917230 kernel: Root IRQ handler: gic_handle_irq
Apr 24 23:31:50.917237 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 24 23:31:50.917244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 24 23:31:50.917251 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 24 23:31:50.917258 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 24 23:31:50.917265 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 24 23:31:50.917272 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 24 23:31:50.917278 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 24 23:31:50.917285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:31:50.917294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 24 23:31:50.917301 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 24 23:31:50.917308 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 24 23:31:50.917315 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 24 23:31:50.917321 kernel: Console: colour dummy device 80x25
Apr 24 23:31:50.917330 kernel: ACPI: Core revision 20230628
Apr 24 23:31:50.917337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 24 23:31:50.917344 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:31:50.917352 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:31:50.917359 kernel: landlock: Up and running.
Apr 24 23:31:50.917367 kernel: SELinux: Initializing.
Apr 24 23:31:50.917375 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:31:50.917382 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:31:50.917389 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:31:50.917397 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:31:50.917404 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:31:50.917411 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:31:50.917418 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 24 23:31:50.917425 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 24 23:31:50.917434 kernel: Remapping and enabling EFI services.
Apr 24 23:31:50.917441 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:31:50.917448 kernel: Detected PIPT I-cache on CPU1
Apr 24 23:31:50.917456 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 24 23:31:50.917463 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 24 23:31:50.917470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 24 23:31:50.917477 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 24 23:31:50.917484 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:31:50.917491 kernel: SMP: Total of 2 processors activated.
Apr 24 23:31:50.917498 kernel: CPU features: detected: 32-bit EL0 Support
Apr 24 23:31:50.917507 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 24 23:31:50.917515 kernel: CPU features: detected: Common not Private translations
Apr 24 23:31:50.917528 kernel: CPU features: detected: CRC32 instructions
Apr 24 23:31:50.917552 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 24 23:31:50.917560 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 24 23:31:50.917568 kernel: CPU features: detected: LSE atomic instructions
Apr 24 23:31:50.917576 kernel: CPU features: detected: Privileged Access Never
Apr 24 23:31:50.917583 kernel: CPU features: detected: RAS Extension Support
Apr 24 23:31:50.917594 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 24 23:31:50.917601 kernel: CPU: All CPU(s) started at EL1
Apr 24 23:31:50.917609 kernel: alternatives: applying system-wide alternatives
Apr 24 23:31:50.917616 kernel: devtmpfs: initialized
Apr 24 23:31:50.917624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:31:50.917632 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:31:50.917639 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:31:50.917647 kernel: SMBIOS 3.0.0 present.
Apr 24 23:31:50.917656 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 24 23:31:50.917664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:31:50.917701 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 24 23:31:50.917710 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 24 23:31:50.917718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 24 23:31:50.917725 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:31:50.917733 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Apr 24 23:31:50.917740 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:31:50.917748 kernel: cpuidle: using governor menu
Apr 24 23:31:50.917758 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 24 23:31:50.917765 kernel: ASID allocator initialised with 32768 entries
Apr 24 23:31:50.917773 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:31:50.917780 kernel: Serial: AMBA PL011 UART driver
Apr 24 23:31:50.917788 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 24 23:31:50.917795 kernel: Modules: 0 pages in range for non-PLT usage
Apr 24 23:31:50.917803 kernel: Modules: 509008 pages in range for PLT usage
Apr 24 23:31:50.917810 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:31:50.917818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:31:50.917827 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 24 23:31:50.917835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 24 23:31:50.917842 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:31:50.917850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:31:50.917857 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 24 23:31:50.917865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 24 23:31:50.917872 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:31:50.917880 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:31:50.917887 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:31:50.917896 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:31:50.917904 kernel: ACPI: Interpreter enabled
Apr 24 23:31:50.917911 kernel: ACPI: Using GIC for interrupt routing
Apr 24 23:31:50.917919 kernel: ACPI: MCFG table detected, 1 entries
Apr 24 23:31:50.917926 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 24 23:31:50.917934 kernel: printk: console [ttyAMA0] enabled
Apr 24 23:31:50.917941 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:31:50.918143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:31:50.918227 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 24 23:31:50.918292 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 24 23:31:50.918356 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 24 23:31:50.918422 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 24 23:31:50.918432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 24 23:31:50.918439 kernel: PCI host bridge to bus 0000:00
Apr 24 23:31:50.918516 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 24 23:31:50.918641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 24 23:31:50.920590 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 24 23:31:50.920691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:31:50.920784 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 24 23:31:50.920867 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 24 23:31:50.920937 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 24 23:31:50.921003 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 24 23:31:50.921086 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921154 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 24 23:31:50.921229 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921295 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 24 23:31:50.921372 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921439 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 24 23:31:50.921516 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921644 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 24 23:31:50.921745 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921814 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 24 23:31:50.921887 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.921952 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 24 23:31:50.922029 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.922097 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 24 23:31:50.922170 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.922236 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 24 23:31:50.922308 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:31:50.922374 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 24 23:31:50.922450 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 24 23:31:50.922517 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 24 23:31:50.922618 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:31:50.923083 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 24 23:31:50.923172 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 24 23:31:50.923242 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:31:50.923321 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 24 23:31:50.923398 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 24 23:31:50.923474 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 24 23:31:50.923599 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 24 23:31:50.923763 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 24 23:31:50.923851 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 24 23:31:50.923918 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 24 23:31:50.924001 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 24 23:31:50.924075 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 24 23:31:50.924146 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 24 23:31:50.924222 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 24 23:31:50.924291 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 24 23:31:50.924359 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 24 23:31:50.924439 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:31:50.924508 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 24 23:31:50.924598 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 24 23:31:50.924668 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:31:50.926871 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 24 23:31:50.926946 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 24 23:31:50.927013 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 24 23:31:50.927093 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 24 23:31:50.927159 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 24 23:31:50.927224 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 24 23:31:50.927294 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 24 23:31:50.927364 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 24 23:31:50.927429 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 24 23:31:50.927500 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 24 23:31:50.927593 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 24 23:31:50.927668 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 24 23:31:50.927756 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 24 23:31:50.927824 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 24 23:31:50.927891 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 24 23:31:50.927961 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 24 23:31:50.928025 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 24 23:31:50.928091 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 24 23:31:50.928164 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 24 23:31:50.928231 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 24 23:31:50.928296 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 24 23:31:50.928366 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 24 23:31:50.928432 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 24 23:31:50.928497 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 24 23:31:50.928619 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 24 23:31:50.928766 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 24 23:31:50.928842 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 24 23:31:50.928908 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 24 23:31:50.928972 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:31:50.929037 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 24 23:31:50.929101 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:31:50.929168 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 24 23:31:50.929237 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:31:50.929302 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 24 23:31:50.929368 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:31:50.929436 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 24 23:31:50.929511 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:31:50.929601 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 24 23:31:50.929687 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:31:50.929762 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 24 23:31:50.929831 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:31:50.929899 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 24 23:31:50.929966 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:31:50.930031 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 24 23:31:50.930098 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:31:50.930171 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 24 23:31:50.930240 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 24 23:31:50.930306 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 24 23:31:50.930371 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 24 23:31:50.930437 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 24 23:31:50.930503 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 24 23:31:50.930587 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 24 23:31:50.930655 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 24 23:31:50.932488 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 24 23:31:50.932605 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 24 23:31:50.932690 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 24 23:31:50.932763 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 24 23:31:50.932831 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 24 23:31:50.932897 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 24 23:31:50.932972 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 24 23:31:50.933042 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 24 23:31:50.933111 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 24 23:31:50.933183 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 24 23:31:50.933251 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 24 23:31:50.933316 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 24 23:31:50.933387 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 24 23:31:50.933462 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 24 23:31:50.933530 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 24 23:31:50.933656 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 24 23:31:50.933744 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 24 23:31:50.933818 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 24 23:31:50.933884 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 24 23:31:50.933950 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:31:50.934024 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 24 23:31:50.934095 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 24 23:31:50.934160 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 24 23:31:50.934225 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 24 23:31:50.934289 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:31:50.934364 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 24 23:31:50.934433 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 24 23:31:50.934513 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 24 23:31:50.934634 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 24 23:31:50.934867 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 24 23:31:50.934952 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:31:50.935029 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 24 23:31:50.935098 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 24 23:31:50.935166 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 24 23:31:50.935233 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 24 23:31:50.935299 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:31:50.935374 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 24 23:31:50.935450 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 24 23:31:50.935518 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 24 23:31:50.935663 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 24 23:31:50.935757 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 24 23:31:50.935824 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:31:50.935901 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 24 23:31:50.935970 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 24 23:31:50.936045 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 24 23:31:50.936124 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 24 23:31:50.936192 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 24 23:31:50.936264 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:31:50.936342 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 24 23:31:50.936414 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 24 23:31:50.936486 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 24 23:31:50.936573 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 24 23:31:50.936646 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 24 23:31:50.936749 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 24 23:31:50.936830 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:31:50.936902 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 24 23:31:50.936971 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 24 23:31:50.937037 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 24 23:31:50.937107 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:31:50.937178 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 24 23:31:50.937248 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 24 23:31:50.937318 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 24 23:31:50.937387 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:31:50.937455 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 24 23:31:50.937516 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 24 23:31:50.937594 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 24 23:31:50.937797 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 24 23:31:50.937871 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 24 23:31:50.937938 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:31:50.938007 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 24 23:31:50.939807 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 24 23:31:50.939895 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:31:50.939975 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 24 23:31:50.940037 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 24 23:31:50.940104 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:31:50.940172 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 24 23:31:50.940232 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 24 23:31:50.940308 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:31:50.940376 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 24 23:31:50.940437 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 24 23:31:50.940497 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:31:50.940590 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 24 23:31:50.940655 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 24 23:31:50.941365 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:31:50.941466 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 24 23:31:50.941588 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 24 23:31:50.941664 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:31:50.941814 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 24 23:31:50.941877 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 24 23:31:50.941939 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:31:50.942020 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 24 23:31:50.942082 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 24 23:31:50.942145 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:31:50.942155 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 24 23:31:50.942164 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 24 23:31:50.942172 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 24 23:31:50.942179 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 24 23:31:50.942187 kernel: iommu: Default domain type: Translated
Apr 24 23:31:50.942195 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 24 23:31:50.942203 kernel: efivars: Registered efivars operations
Apr 24 23:31:50.942211 kernel: vgaarb: loaded
Apr 24 23:31:50.942221 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 24 23:31:50.942229 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:31:50.942237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:31:50.942245 kernel: pnp: PnP ACPI init
Apr 24 23:31:50.942320 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 24 23:31:50.942331 kernel: pnp: PnP ACPI: found 1 devices
Apr 24 23:31:50.942339 kernel: NET: Registered PF_INET protocol family
Apr 24 23:31:50.942347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:31:50.942357 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:31:50.942365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:31:50.942374 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:31:50.942382
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 24 23:31:50.942389 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 24 23:31:50.942397 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:31:50.942405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:31:50.942413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:31:50.942492 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 24 23:31:50.942506 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:31:50.942514 kernel: kvm [1]: HYP mode not available Apr 24 23:31:50.942522 kernel: Initialise system trusted keyrings Apr 24 23:31:50.942530 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 24 23:31:50.942554 kernel: Key type asymmetric registered Apr 24 23:31:50.942562 kernel: Asymmetric key parser 'x509' registered Apr 24 23:31:50.942570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 24 23:31:50.942577 kernel: io scheduler mq-deadline registered Apr 24 23:31:50.942585 kernel: io scheduler kyber registered Apr 24 23:31:50.942596 kernel: io scheduler bfq registered Apr 24 23:31:50.942605 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 24 23:31:50.942704 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 24 23:31:50.942779 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 24 23:31:50.942846 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.942915 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 24 23:31:50.942983 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 24 23:31:50.943064 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.943138 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 24 23:31:50.943206 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 24 23:31:50.943274 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.943346 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 24 23:31:50.943415 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 24 23:31:50.943485 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.943616 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 24 23:31:50.943716 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 24 23:31:50.943787 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.943859 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 24 23:31:50.943926 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 24 23:31:50.943999 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.944069 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 24 23:31:50.944135 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 24 23:31:50.944202 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.944271 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 24 23:31:50.944341 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 24 23:31:50.944407 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:31:50.944418 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 24 23:31:50.944487 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 24 23:31:50.944574 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 24 23:31:50.944643 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 24 23:31:50.944657 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 24 23:31:50.944667 kernel: ACPI: button: Power Button [PWRB]
Apr 24 23:31:50.944694 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 24 23:31:50.944784 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 24 23:31:50.944862 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 24 23:31:50.944873 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:31:50.944881 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 24 23:31:50.944952 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 24 23:31:50.944963 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 24 23:31:50.944971 kernel: thunder_xcv, ver 1.0
Apr 24 23:31:50.944983 kernel: thunder_bgx, ver 1.0
Apr 24 23:31:50.944991 kernel: nicpf, ver 1.0
Apr 24 23:31:50.944999 kernel: nicvf, ver 1.0
Apr 24 23:31:50.945079 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 24 23:31:50.945143 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-24T23:31:50 UTC (1777073510)
Apr 24 23:31:50.945154 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:31:50.945162 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 24 23:31:50.945170 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 24 23:31:50.945180 kernel: watchdog: Hard watchdog permanently disabled
Apr 24 23:31:50.945188 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:31:50.945195 kernel: Segment Routing with IPv6
Apr 24 23:31:50.945203 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:31:50.945211 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:31:50.945218 kernel: Key type dns_resolver registered
Apr 24 23:31:50.945226 kernel: registered taskstats version 1
Apr 24 23:31:50.945234 kernel: Loading compiled-in X.509 certificates
Apr 24 23:31:50.945242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 24 23:31:50.945251 kernel: Key type .fscrypt registered
Apr 24 23:31:50.945259 kernel: Key type fscrypt-provisioning registered
Apr 24 23:31:50.945266 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:31:50.945274 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:31:50.945282 kernel: ima: No architecture policies found
Apr 24 23:31:50.945290 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 24 23:31:50.945298 kernel: clk: Disabling unused clocks
Apr 24 23:31:50.945306 kernel: Freeing unused kernel memory: 39424K
Apr 24 23:31:50.945314 kernel: Run /init as init process
Apr 24 23:31:50.945323 kernel: with arguments:
Apr 24 23:31:50.945331 kernel: /init
Apr 24 23:31:50.945339 kernel: with environment:
Apr 24 23:31:50.945346 kernel: HOME=/
Apr 24 23:31:50.945354 kernel: TERM=linux
Apr 24 23:31:50.945364 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:31:50.945374 systemd[1]: Detected virtualization kvm.
Apr 24 23:31:50.945382 systemd[1]: Detected architecture arm64.
Apr 24 23:31:50.945391 systemd[1]: Running in initrd.
Apr 24 23:31:50.945399 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:31:50.945407 systemd[1]: Hostname set to .
Apr 24 23:31:50.945416 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:31:50.945424 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:31:50.945433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:31:50.945441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:31:50.945450 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:31:50.945460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:31:50.945468 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:31:50.945477 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:31:50.945486 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:31:50.945495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:31:50.945503 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:31:50.945512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:31:50.945523 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:31:50.945532 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:31:50.945553 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:31:50.945561 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:31:50.945569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:31:50.945578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:31:50.945586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:31:50.945594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:31:50.945605 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:31:50.945614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:31:50.945623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:31:50.945631 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:31:50.945639 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:31:50.945648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:31:50.945656 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:31:50.945664 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:31:50.946335 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:31:50.946363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:31:50.946372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:31:50.946381 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:31:50.946403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:31:50.946412 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:31:50.946452 systemd-journald[237]: Collecting audit messages is disabled.
Apr 24 23:31:50.946477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:31:50.946486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:31:50.946496 kernel: Bridge firewalling registered
Apr 24 23:31:50.946505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:50.946514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:31:50.946523 systemd-journald[237]: Journal started
Apr 24 23:31:50.946555 systemd-journald[237]: Runtime Journal (/run/log/journal/d7627883357644029d8423bd879d57d2) is 8.0M, max 76.6M, 68.6M free.
Apr 24 23:31:50.911603 systemd-modules-load[238]: Inserted module 'overlay'
Apr 24 23:31:50.950821 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:31:50.938080 systemd-modules-load[238]: Inserted module 'br_netfilter'
Apr 24 23:31:50.949201 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:31:50.950033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:31:50.951855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:31:50.956926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:31:50.962022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:31:50.981398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:31:50.983525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:31:50.985789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:31:50.991968 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:31:50.993558 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:31:51.007881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:31:51.009555 dracut-cmdline[274]: dracut-dracut-053
Apr 24 23:31:51.011844 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:31:51.050936 systemd-resolved[279]: Positive Trust Anchors:
Apr 24 23:31:51.051740 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:31:51.051776 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:31:51.062163 systemd-resolved[279]: Defaulting to hostname 'linux'.
Apr 24 23:31:51.064320 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:31:51.065858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:31:51.090744 kernel: SCSI subsystem initialized
Apr 24 23:31:51.094710 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:31:51.103915 kernel: iscsi: registered transport (tcp)
Apr 24 23:31:51.117729 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:31:51.117817 kernel: QLogic iSCSI HBA Driver
Apr 24 23:31:51.167152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:31:51.173048 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:31:51.195821 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:31:51.195895 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:31:51.195908 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:31:51.250755 kernel: raid6: neonx8 gen() 15313 MB/s
Apr 24 23:31:51.264734 kernel: raid6: neonx4 gen() 15234 MB/s
Apr 24 23:31:51.281750 kernel: raid6: neonx2 gen() 12912 MB/s
Apr 24 23:31:51.298769 kernel: raid6: neonx1 gen() 10230 MB/s
Apr 24 23:31:51.315751 kernel: raid6: int64x8 gen() 6801 MB/s
Apr 24 23:31:51.332735 kernel: raid6: int64x4 gen() 7164 MB/s
Apr 24 23:31:51.349940 kernel: raid6: int64x2 gen() 5975 MB/s
Apr 24 23:31:51.366761 kernel: raid6: int64x1 gen() 4958 MB/s
Apr 24 23:31:51.366853 kernel: raid6: using algorithm neonx8 gen() 15313 MB/s
Apr 24 23:31:51.383736 kernel: raid6: .... xor() 11662 MB/s, rmw enabled
Apr 24 23:31:51.383815 kernel: raid6: using neon recovery algorithm
Apr 24 23:31:51.389738 kernel: xor: measuring software checksum speed
Apr 24 23:31:51.389816 kernel: 8regs : 19769 MB/sec
Apr 24 23:31:51.389833 kernel: 32regs : 19646 MB/sec
Apr 24 23:31:51.389849 kernel: arm64_neon : 21454 MB/sec
Apr 24 23:31:51.390719 kernel: xor: using function: arm64_neon (21454 MB/sec)
Apr 24 23:31:51.443741 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:31:51.459185 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:31:51.465923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:31:51.491107 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Apr 24 23:31:51.499779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:31:51.509308 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:31:51.525669 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Apr 24 23:31:51.566080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:31:51.574947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:31:51.631784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:31:51.637945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:31:51.667900 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:31:51.670130 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:31:51.671906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:31:51.673946 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:31:51.682229 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:31:51.700965 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:31:51.763818 kernel: ACPI: bus type USB registered
Apr 24 23:31:51.763918 kernel: scsi host0: Virtio SCSI HBA
Apr 24 23:31:51.774064 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:31:51.774216 kernel: usbcore: registered new interface driver usbfs
Apr 24 23:31:51.774237 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 23:31:51.774263 kernel: usbcore: registered new interface driver hub
Apr 24 23:31:51.778734 kernel: usbcore: registered new device driver usb
Apr 24 23:31:51.779852 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:31:51.779991 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:31:51.781506 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:31:51.782477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:31:51.782726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:51.785495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:31:51.795460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:31:51.813746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:51.819910 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:31:51.835048 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 24 23:31:51.837092 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 24 23:31:51.837297 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:31:51.840702 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:31:51.847823 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 24 23:31:51.848033 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 24 23:31:51.851780 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 24 23:31:51.852071 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 24 23:31:51.853900 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 24 23:31:51.854091 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 24 23:31:51.854188 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 24 23:31:51.856061 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 23:31:51.856409 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 24 23:31:51.857727 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 24 23:31:51.857942 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 24 23:31:51.859297 kernel: hub 1-0:1.0: USB hub found
Apr 24 23:31:51.860132 kernel: hub 1-0:1.0: 4 ports detected
Apr 24 23:31:51.862369 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:31:51.862407 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 24 23:31:51.862604 kernel: GPT:17805311 != 80003071
Apr 24 23:31:51.862617 kernel: hub 2-0:1.0: USB hub found
Apr 24 23:31:51.862753 kernel: hub 2-0:1.0: 4 ports detected
Apr 24 23:31:51.864956 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:31:51.865014 kernel: GPT:17805311 != 80003071
Apr 24 23:31:51.865025 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:31:51.865034 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:31:51.866187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:31:51.870775 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 24 23:31:51.905549 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (508)
Apr 24 23:31:51.920739 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (520)
Apr 24 23:31:51.918997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 23:31:51.936170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 23:31:51.944331 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 23:31:51.950268 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 23:31:51.951358 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 23:31:51.957951 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:31:51.968459 disk-uuid[575]: Primary Header is updated.
Apr 24 23:31:51.968459 disk-uuid[575]: Secondary Entries is updated.
Apr 24 23:31:51.968459 disk-uuid[575]: Secondary Header is updated.
Apr 24 23:31:51.976861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:31:51.980740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:31:51.984732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:31:52.105694 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 24 23:31:52.242425 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 24 23:31:52.242493 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 24 23:31:52.243718 kernel: usbcore: registered new interface driver usbhid
Apr 24 23:31:52.243755 kernel: usbhid: USB HID core driver
Apr 24 23:31:52.349783 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 24 23:31:52.478736 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 24 23:31:52.534752 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 24 23:31:52.992576 disk-uuid[576]: The operation has completed successfully.
Apr 24 23:31:52.993294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:31:53.057192 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:31:53.058108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:31:53.067968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:31:53.084721 sh[594]: Success
Apr 24 23:31:53.099724 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 24 23:31:53.180466 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:31:53.182839 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:31:53.185024 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:31:53.206144 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 24 23:31:53.206240 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:31:53.206261 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:31:53.206279 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:31:53.206983 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:31:53.213724 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:31:53.216330 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:31:53.217743 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:31:53.226962 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:31:53.231002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:31:53.247752 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:31:53.247812 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:31:53.248774 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:31:53.256733 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:31:53.257076 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:31:53.270811 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:31:53.270297 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:31:53.277302 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:31:53.287035 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:31:53.369599 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:31:53.379847 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:31:53.399636 systemd-networkd[780]: lo: Link UP
Apr 24 23:31:53.399650 systemd-networkd[780]: lo: Gained carrier
Apr 24 23:31:53.401213 systemd-networkd[780]: Enumeration completed
Apr 24 23:31:53.401741 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:31:53.403147 ignition[687]: Ignition 2.19.0
Apr 24 23:31:53.402233 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:53.403154 ignition[687]: Stage: fetch-offline
Apr 24 23:31:53.402236 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:31:53.403195 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:53.404223 systemd[1]: Reached target network.target - Network.
Apr 24 23:31:53.403203 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:53.405242 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:53.403619 ignition[687]: parsed url from cmdline: ""
Apr 24 23:31:53.405245 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:31:53.403624 ignition[687]: no config URL provided
Apr 24 23:31:53.406911 systemd-networkd[780]: eth0: Link UP
Apr 24 23:31:53.403630 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:31:53.406915 systemd-networkd[780]: eth0: Gained carrier
Apr 24 23:31:53.403644 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:31:53.406925 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:53.403651 ignition[687]: failed to fetch config: resource requires networking
Apr 24 23:31:53.407986 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:31:53.405058 ignition[687]: Ignition finished successfully
Apr 24 23:31:53.410412 systemd-networkd[780]: eth1: Link UP
Apr 24 23:31:53.410415 systemd-networkd[780]: eth1: Gained carrier
Apr 24 23:31:53.410428 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:53.416939 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:31:53.433566 ignition[783]: Ignition 2.19.0
Apr 24 23:31:53.433578 ignition[783]: Stage: fetch
Apr 24 23:31:53.434401 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:53.434412 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:53.434564 ignition[783]: parsed url from cmdline: ""
Apr 24 23:31:53.434568 ignition[783]: no config URL provided
Apr 24 23:31:53.434573 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:31:53.434588 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:31:53.434609 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 24 23:31:53.435198 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:31:53.458795 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 24 23:31:53.465833 systemd-networkd[780]: eth0: DHCPv4 address 91.99.220.32/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 24 23:31:53.635319 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 24 23:31:53.640130 ignition[783]: GET result: OK
Apr 24 23:31:53.640231 ignition[783]: parsing config with SHA512: b04042446158566764b2b20c7457aeab69db09a7499794dad1e55b7dc069898753b50e7e6684978112b49e0719ce65ac7580e72772de22cb8351084ae2e9c7f6
Apr 24 23:31:53.646016 unknown[783]: fetched base config from "system"
Apr 24 23:31:53.646481 ignition[783]: fetch: fetch complete
Apr 24 23:31:53.646026 unknown[783]: fetched base config from "system"
Apr 24 23:31:53.646577 ignition[783]: fetch: fetch passed
Apr 24 23:31:53.646031 unknown[783]: fetched user config from "hetzner"
Apr 24 23:31:53.646637 ignition[783]: Ignition finished successfully
Apr 24 23:31:53.649539 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:31:53.657898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:31:53.674023 ignition[790]: Ignition 2.19.0
Apr 24 23:31:53.674034 ignition[790]: Stage: kargs
Apr 24 23:31:53.674219 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:53.674230 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:53.675509 ignition[790]: kargs: kargs passed
Apr 24 23:31:53.675575 ignition[790]: Ignition finished successfully
Apr 24 23:31:53.678165 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:31:53.682942 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:31:53.698616 ignition[796]: Ignition 2.19.0
Apr 24 23:31:53.698633 ignition[796]: Stage: disks
Apr 24 23:31:53.698852 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:53.698861 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:53.702083 ignition[796]: disks: disks passed
Apr 24 23:31:53.702585 ignition[796]: Ignition finished successfully
Apr 24 23:31:53.705474 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:31:53.706699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:31:53.707622 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:31:53.709123 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:31:53.710407 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:31:53.711031 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:31:53.724004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:31:53.745807 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 24 23:31:53.752038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:31:53.760202 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:31:53.822991 kernel: EXT4-fs (sda9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 24 23:31:53.824610 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:31:53.826270 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:31:53.842926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:31:53.846882 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:31:53.850076 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 24 23:31:53.850777 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:31:53.850817 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:31:53.860701 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813)
Apr 24 23:31:53.863118 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:31:53.863178 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:31:53.863190 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:31:53.867885 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:31:53.870759 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:31:53.870785 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:31:53.871801 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:31:53.878247 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:31:53.927838 coreos-metadata[815]: Apr 24 23:31:53.927 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 24 23:31:53.929872 coreos-metadata[815]: Apr 24 23:31:53.928 INFO Fetch successful
Apr 24 23:31:53.929872 coreos-metadata[815]: Apr 24 23:31:53.929 INFO wrote hostname ci-4081-3-6-n-4ca6954963 to /sysroot/etc/hostname
Apr 24 23:31:53.933541 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:31:53.933261 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:31:53.941055 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:31:53.947819 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:31:53.953296 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:31:54.073072 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:31:54.079877 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:31:54.082957 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:31:54.092722 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:31:54.117251 ignition[930]: INFO : Ignition 2.19.0
Apr 24 23:31:54.117251 ignition[930]: INFO : Stage: mount
Apr 24 23:31:54.117251 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:54.117251 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:54.121075 ignition[930]: INFO : mount: mount passed
Apr 24 23:31:54.121075 ignition[930]: INFO : Ignition finished successfully
Apr 24 23:31:54.122793 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:31:54.127858 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:31:54.128778 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:31:54.206148 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:31:54.215059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:31:54.224302 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Apr 24 23:31:54.224404 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:31:54.224431 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:31:54.225125 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:31:54.228724 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:31:54.228805 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:31:54.232086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:31:54.268002 ignition[957]: INFO : Ignition 2.19.0
Apr 24 23:31:54.268002 ignition[957]: INFO : Stage: files
Apr 24 23:31:54.270148 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:54.270148 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:54.270148 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:31:54.273223 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:31:54.273223 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:31:54.275703 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:31:54.276950 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:31:54.277942 unknown[957]: wrote ssh authorized keys file for user: core
Apr 24 23:31:54.279008 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:31:54.282736 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:31:54.282736 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:31:54.282736 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:31:54.282736 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 24 23:31:54.337549 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:31:54.435908 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:31:54.435908 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:31:54.439651 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 24 23:31:54.460852 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 24 23:31:54.903941 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:31:55.420846 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 24 23:31:56.791539 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:31:56.791539 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:31:56.794355 ignition[957]: INFO : files: files passed
Apr 24 23:31:56.815446 ignition[957]: INFO : Ignition finished successfully
Apr 24 23:31:56.799704 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:31:56.805852 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:31:56.813112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:31:56.816808 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:31:56.816900 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:31:56.826524 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:31:56.826524 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:31:56.828922 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:31:56.831336 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:31:56.833608 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:31:56.838917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:31:56.869808 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:31:56.870896 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:31:56.873327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:31:56.874083 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:31:56.876890 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:31:56.882870 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:31:56.896764 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:31:56.904951 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:31:56.919717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:31:56.921294 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:31:56.923195 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:31:56.924216 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:31:56.924407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:31:56.925847 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:31:56.927007 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:31:56.927836 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:31:56.928762 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:31:56.929861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:31:56.930939 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:31:56.931899 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:31:56.932927 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:31:56.933978 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:31:56.934877 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:31:56.935685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:31:56.935849 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:31:56.937102 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:31:56.938244 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:31:56.939327 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:31:56.939804 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:31:56.940564 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:31:56.940769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:31:56.942296 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:31:56.942494 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:31:56.943621 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:31:56.943777 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:31:56.944646 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 24 23:31:56.944797 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:31:56.960848 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:31:56.962251 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:31:56.962666 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:31:56.965540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:31:56.967759 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:31:56.967890 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:31:56.970940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:31:56.971089 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:31:56.984395 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:31:56.984566 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:31:56.988566 ignition[1011]: INFO : Ignition 2.19.0
Apr 24 23:31:56.988566 ignition[1011]: INFO : Stage: umount
Apr 24 23:31:56.988566 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:31:56.988566 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:31:56.988566 ignition[1011]: INFO : umount: umount passed
Apr 24 23:31:56.988566 ignition[1011]: INFO : Ignition finished successfully
Apr 24 23:31:56.991899 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:31:56.992036 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:31:56.995945 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:31:56.996004 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:31:57.002176 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:31:57.002236 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:31:57.004955 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:31:57.005012 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:31:57.005706 systemd[1]: Stopped target network.target - Network.
Apr 24 23:31:57.007107 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:31:57.007160 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:31:57.008344 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:31:57.009071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:31:57.013207 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:31:57.014284 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:31:57.019837 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:31:57.023848 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:31:57.023897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:31:57.025133 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:31:57.025180 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:31:57.026165 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:31:57.026215 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:31:57.027444 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:31:57.027487 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:31:57.028570 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:31:57.029515 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:31:57.032537 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:31:57.033163 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:31:57.033241 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:31:57.034851 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:31:57.034929 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:31:57.039927 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 24 23:31:57.040057 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:31:57.040201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:31:57.042176 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:31:57.042235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:31:57.044174 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 24 23:31:57.045759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:31:57.046485 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:31:57.048398 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:31:57.048558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:31:57.058005 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:31:57.058586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:31:57.058648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:31:57.062172 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:31:57.062219 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:31:57.062871 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:31:57.062917 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:31:57.064198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:31:57.079479 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:31:57.080719 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:31:57.083343 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:31:57.083541 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:31:57.084877 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:31:57.084916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:31:57.085999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:31:57.086038 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:31:57.087259 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:31:57.087310 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:31:57.088991 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:31:57.089041 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:31:57.090622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:31:57.090685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:31:57.101894 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:31:57.103500 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:31:57.103625 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:31:57.108863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:31:57.108923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:57.111930 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:31:57.113506 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:31:57.115606 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:31:57.121928 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:31:57.131958 systemd[1]: Switching root.
Apr 24 23:31:57.163667 systemd-journald[237]: Journal stopped
Apr 24 23:31:58.046022 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:31:58.046094 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:31:58.046108 kernel: SELinux: policy capability open_perms=1
Apr 24 23:31:58.046119 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:31:58.046130 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:31:58.046144 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:31:58.046154 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:31:58.046167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:31:58.046178 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:31:58.046193 systemd[1]: Successfully loaded SELinux policy in 34.482ms.
Apr 24 23:31:58.046214 kernel: audit: type=1403 audit(1777073517.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:31:58.046225 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.776ms.
Apr 24 23:31:58.046237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:31:58.046248 systemd[1]: Detected virtualization kvm.
Apr 24 23:31:58.046264 systemd[1]: Detected architecture arm64.
Apr 24 23:31:58.046275 systemd[1]: Detected first boot.
Apr 24 23:31:58.046288 systemd[1]: Hostname set to .
Apr 24 23:31:58.046299 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:31:58.046310 zram_generator::config[1071]: No configuration found.
Apr 24 23:31:58.046321 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:31:58.046332 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:31:58.046343 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 24 23:31:58.046355 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:31:58.046366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:31:58.046377 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:31:58.046390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:31:58.046410 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:31:58.046424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:31:58.046436 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:31:58.046447 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:31:58.046458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:31:58.046469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:31:58.046480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:31:58.046491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:31:58.046505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:31:58.046516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:31:58.046527 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 24 23:31:58.046538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:31:58.046549 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:31:58.046560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:31:58.046575 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:31:58.046587 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:31:58.046598 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:31:58.046609 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:31:58.046620 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:31:58.046631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:31:58.046642 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:31:58.046653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:31:58.046664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:31:58.048003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:31:58.048028 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:31:58.048068 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:31:58.048081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:31:58.048091 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:31:58.048102 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:31:58.048112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:31:58.048122 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:31:58.048132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:31:58.048146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:31:58.048161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:31:58.048174 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:31:58.048184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:31:58.048195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:31:58.048206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:31:58.048219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:31:58.048235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:31:58.048248 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:31:58.048261 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 24 23:31:58.048272 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 24 23:31:58.048286 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:31:58.048297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:31:58.048308 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:31:58.048321 kernel: ACPI: bus type drm_connector registered
Apr 24 23:31:58.048332 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:31:58.048342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:31:58.048354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:31:58.048364 kernel: fuse: init (API version 7.39)
Apr 24 23:31:58.048374 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:31:58.048384 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:31:58.048395 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:31:58.048419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:31:58.048458 systemd-journald[1167]: Collecting audit messages is disabled.
Apr 24 23:31:58.048481 kernel: loop: module loaded
Apr 24 23:31:58.048492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:31:58.048502 systemd-journald[1167]: Journal started
Apr 24 23:31:58.048524 systemd-journald[1167]: Runtime Journal (/run/log/journal/d7627883357644029d8423bd879d57d2) is 8.0M, max 76.6M, 68.6M free.
Apr 24 23:31:58.052085 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:31:58.053752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:31:58.054750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:31:58.055739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:31:58.055892 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:31:58.056907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:31:58.057057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:31:58.058278 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:31:58.058461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:31:58.059656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:31:58.059956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:31:58.061054 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:31:58.061208 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:31:58.062099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:31:58.062330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:31:58.063348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:31:58.064428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:31:58.065905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:31:58.082983 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:31:58.091846 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:31:58.096249 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:31:58.097686 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:31:58.109968 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:31:58.116492 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:31:58.118045 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:31:58.129860 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:31:58.131561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:31:58.139729 systemd-journald[1167]: Time spent on flushing to /var/log/journal/d7627883357644029d8423bd879d57d2 is 34.685ms for 1108 entries.
Apr 24 23:31:58.139729 systemd-journald[1167]: System Journal (/var/log/journal/d7627883357644029d8423bd879d57d2) is 8.0M, max 584.8M, 576.8M free.
Apr 24 23:31:58.186309 systemd-journald[1167]: Received client request to flush runtime journal.
Apr 24 23:31:58.141886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:31:58.146969 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:31:58.153117 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:31:58.155017 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:31:58.164547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:31:58.167144 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:31:58.172014 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:31:58.176365 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:31:58.191592 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:31:58.216944 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 24 23:31:58.219124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:31:58.226010 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Apr 24 23:31:58.226033 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Apr 24 23:31:58.231655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:31:58.236897 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:31:58.269057 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:31:58.274957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:31:58.297209 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 24 23:31:58.297230 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 24 23:31:58.303231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:31:58.644740 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:31:58.654918 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:31:58.679338 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Apr 24 23:31:58.701332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:31:58.725136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:31:58.735869 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:31:58.802994 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 24 23:31:58.812004 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:31:58.891817 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:31:58.902503 systemd-networkd[1246]: lo: Link UP
Apr 24 23:31:58.902516 systemd-networkd[1246]: lo: Gained carrier
Apr 24 23:31:58.904269 systemd-networkd[1246]: Enumeration completed
Apr 24 23:31:58.904417 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:31:58.909095 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:58.909108 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:31:58.911153 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:31:58.912358 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:58.912368 systemd-networkd[1246]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:31:58.914002 systemd-networkd[1246]: eth0: Link UP
Apr 24 23:31:58.914012 systemd-networkd[1246]: eth0: Gained carrier
Apr 24 23:31:58.914026 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:58.948701 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1238)
Apr 24 23:31:58.970097 systemd-networkd[1246]: eth1: Link UP
Apr 24 23:31:58.970110 systemd-networkd[1246]: eth1: Gained carrier
Apr 24 23:31:58.970128 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:58.977011 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:31:58.988117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:31:59.001137 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:31:59.004949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:31:59.012488 systemd-networkd[1246]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 24 23:31:59.015873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:31:59.016842 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:31:59.016887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:31:59.017236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:31:59.017409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:31:59.023094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:31:59.023291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:31:59.027741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:31:59.033066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:31:59.033952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:31:59.036898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:31:59.039747 systemd-networkd[1246]: eth0: DHCPv4 address 91.99.220.32/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 24 23:31:59.052726 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 24 23:31:59.053732 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 24 23:31:59.053771 kernel: [drm] features: -context_init
Apr 24 23:31:59.054700 kernel: [drm] number of scanouts: 1
Apr 24 23:31:59.054739 kernel: [drm] number of cap sets: 0
Apr 24 23:31:59.055841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 23:31:59.059693 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 24 23:31:59.067762 kernel: Console: switching to colour frame buffer device 160x50
Apr 24 23:31:59.069605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:31:59.074710 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 24 23:31:59.080006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:31:59.080240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:59.094076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:31:59.160824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:31:59.218311 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:31:59.227095 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:31:59.243697 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:31:59.268195 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:31:59.270530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:31:59.277958 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:31:59.282021 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:31:59.308461 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:31:59.310851 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:31:59.312734 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:31:59.312875 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:31:59.313519 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:31:59.315896 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:31:59.321922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:31:59.326971 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:31:59.330995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:31:59.334908 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:31:59.340281 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:31:59.344900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:31:59.349639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:31:59.366010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:31:59.377745 kernel: loop0: detected capacity change from 0 to 114432
Apr 24 23:31:59.388911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:31:59.390009 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:31:59.408097 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:31:59.425750 kernel: loop1: detected capacity change from 0 to 114328
Apr 24 23:31:59.454718 kernel: loop2: detected capacity change from 0 to 209336
Apr 24 23:31:59.488956 kernel: loop3: detected capacity change from 0 to 8
Apr 24 23:31:59.513944 kernel: loop4: detected capacity change from 0 to 114432
Apr 24 23:31:59.527736 kernel: loop5: detected capacity change from 0 to 114328
Apr 24 23:31:59.538822 kernel: loop6: detected capacity change from 0 to 209336
Apr 24 23:31:59.556032 kernel: loop7: detected capacity change from 0 to 8
Apr 24 23:31:59.557560 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 24 23:31:59.558485 (sd-merge)[1331]: Merged extensions into '/usr'.
Apr 24 23:31:59.577202 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:31:59.577220 systemd[1]: Reloading...
Apr 24 23:31:59.654772 zram_generator::config[1362]: No configuration found.
Apr 24 23:31:59.770164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:31:59.773319 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:31:59.830931 systemd[1]: Reloading finished in 252 ms.
Apr 24 23:31:59.848635 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:31:59.852310 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:31:59.860905 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:31:59.865000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:31:59.876886 systemd[1]: Reloading requested from client PID 1403 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:31:59.877112 systemd[1]: Reloading...
Apr 24 23:31:59.907181 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:31:59.907772 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:31:59.909851 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:31:59.910534 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Apr 24 23:31:59.910818 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Apr 24 23:31:59.915219 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:31:59.915339 systemd-tmpfiles[1404]: Skipping /boot
Apr 24 23:31:59.926710 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:31:59.926858 systemd-tmpfiles[1404]: Skipping /boot
Apr 24 23:31:59.960724 zram_generator::config[1432]: No configuration found.
Apr 24 23:32:00.073236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:32:00.137255 systemd[1]: Reloading finished in 259 ms.
Apr 24 23:32:00.158043 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:32:00.176952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:32:00.185853 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:32:00.189824 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:32:00.206933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:32:00.217273 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:32:00.228996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:32:00.230296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:32:00.235652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:32:00.250027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:32:00.251238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:32:00.262505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:32:00.263842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:32:00.264271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:32:00.264507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:32:00.273029 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:32:00.275953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:32:00.276120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:32:00.280524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:32:00.280705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:32:00.287301 augenrules[1503]: No rules
Apr 24 23:32:00.286537 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:32:00.289222 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:32:00.300942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:32:00.301161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:32:00.306990 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:32:00.317328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:32:00.325093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:32:00.336995 systemd-resolved[1485]: Positive Trust Anchors:
Apr 24 23:32:00.337015 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:32:00.337048 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:32:00.340061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:32:00.347036 systemd-resolved[1485]: Using system hostname 'ci-4081-3-6-n-4ca6954963'.
Apr 24 23:32:00.347878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:32:00.350782 systemd-networkd[1246]: eth0: Gained IPv6LL
Apr 24 23:32:00.351181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:32:00.354956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:32:00.359250 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:32:00.363429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:32:00.367489 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:32:00.369770 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:32:00.372205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:32:00.372399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:32:00.374243 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:32:00.374456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:32:00.375913 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:32:00.376170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:32:00.377480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:32:00.377955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:32:00.383513 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:32:00.388485 systemd[1]: Reached target network.target - Network.
Apr 24 23:32:00.389295 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:32:00.390086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:32:00.390935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:32:00.391079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:32:00.396971 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 23:32:00.398954 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:32:00.448482 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 23:32:00.450914 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:32:00.451999 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:32:00.452883 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:32:00.453694 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:32:00.454532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:32:00.454641 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:32:00.455308 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:32:00.456280 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:32:00.457174 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:32:00.458092 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:32:00.459953 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:32:00.462245 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:32:00.464506 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:32:00.467223 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:32:00.468002 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:32:00.468583 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:32:00.469438 systemd[1]: System is tainted: cgroupsv1
Apr 24 23:32:00.469490 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:32:00.469515 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:32:00.477880 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:32:00.493090 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 24 23:32:00.497745 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:32:00.501927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:32:00.508900 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:32:00.509610 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:32:00.514821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:32:00.524107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:32:00.530714 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:32:00.531614 systemd-timesyncd[1541]: Contacted time server 94.130.184.193:123 (0.flatcar.pool.ntp.org).
Apr 24 23:32:00.535100 jq[1551]: false
Apr 24 23:32:00.534762 systemd-timesyncd[1541]: Initial clock synchronization to Fri 2026-04-24 23:32:00.328228 UTC.
Apr 24 23:32:00.538165 coreos-metadata[1547]: Apr 24 23:32:00.536 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 24 23:32:00.544133 coreos-metadata[1547]: Apr 24 23:32:00.540 INFO Fetch successful
Apr 24 23:32:00.544133 coreos-metadata[1547]: Apr 24 23:32:00.540 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 24 23:32:00.540819 systemd-networkd[1246]: eth1: Gained IPv6LL
Apr 24 23:32:00.543850 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:32:00.549696 coreos-metadata[1547]: Apr 24 23:32:00.548 INFO Fetch successful
Apr 24 23:32:00.553518 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 24 23:32:00.563293 extend-filesystems[1552]: Found loop4 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found loop5 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found loop6 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found loop7 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda1 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda2 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda3 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found usr Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda4 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda6 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda7 Apr 24 23:32:00.563293 extend-filesystems[1552]: Found sda9 Apr 24 23:32:00.563293 extend-filesystems[1552]: Checking size of /dev/sda9 Apr 24 23:32:00.566951 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:32:00.563957 dbus-daemon[1548]: [system] SELinux support is enabled Apr 24 23:32:00.573333 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:32:00.587843 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:32:00.589425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:32:00.596217 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:32:00.612855 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:32:00.614283 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:32:00.620854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:32:00.621118 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 24 23:32:00.643120 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:32:00.643405 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:32:00.647183 extend-filesystems[1552]: Resized partition /dev/sda9 Apr 24 23:32:00.651607 jq[1579]: true Apr 24 23:32:00.672890 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:32:00.678341 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 24 23:32:00.673647 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:32:00.673716 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:32:00.680271 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:32:00.680306 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:32:00.682292 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:32:00.682591 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:32:00.684998 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:32:00.702121 (ntainerd)[1609]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:32:00.716165 tar[1589]: linux-arm64/LICENSE Apr 24 23:32:00.716165 tar[1589]: linux-arm64/helm Apr 24 23:32:00.728321 update_engine[1573]: I20260424 23:32:00.727447 1573 main.cc:92] Flatcar Update Engine starting Apr 24 23:32:00.728627 jq[1600]: true Apr 24 23:32:00.740644 systemd[1]: Started update-engine.service - Update Engine. 
Apr 24 23:32:00.745903 update_engine[1573]: I20260424 23:32:00.742172 1573 update_check_scheduler.cc:74] Next update check in 10m30s Apr 24 23:32:00.744167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:32:00.757899 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:32:00.812396 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:32:00.813642 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:32:00.878728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1239) Apr 24 23:32:00.897779 systemd-logind[1569]: New seat seat0. Apr 24 23:32:00.906596 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (Power Button) Apr 24 23:32:00.926090 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 24 23:32:00.906619 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 24 23:32:00.906919 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:32:00.930319 extend-filesystems[1596]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 23:32:00.930319 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 24 23:32:00.930319 extend-filesystems[1596]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 24 23:32:00.943933 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:32:00.941091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:32:00.965692 extend-filesystems[1552]: Resized filesystem in /dev/sda9 Apr 24 23:32:00.965692 extend-filesystems[1552]: Found sr0 Apr 24 23:32:00.971730 systemd[1]: Starting sshkeys.service... 
Apr 24 23:32:00.973061 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:32:00.973300 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:32:00.993704 containerd[1609]: time="2026-04-24T23:32:00.992708320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:32:01.005299 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:32:01.011942 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:32:01.056396 containerd[1609]: time="2026-04-24T23:32:01.056337126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.067813969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.067858722Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.067877084Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068112389Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068132348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068194410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068206105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068448310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068466671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068486670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069105 containerd[1609]: time="2026-04-24T23:32:01.068497040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.068588066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.068803256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.068940868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.068956695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.069025618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:32:01.069380 containerd[1609]: time="2026-04-24T23:32:01.069061873Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:32:01.072960 coreos-metadata[1654]: Apr 24 23:32:01.072 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 24 23:32:01.076790 coreos-metadata[1654]: Apr 24 23:32:01.076 INFO Fetch successful Apr 24 23:32:01.081032 unknown[1654]: wrote ssh authorized keys file for user: core Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085227856Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085307187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085324145Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085339934Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085580696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.085791052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086228174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086440362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086460400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086483322Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086497512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086519304Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086533611Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086632 containerd[1609]: time="2026-04-24T23:32:01.086547450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086955 containerd[1609]: time="2026-04-24T23:32:01.086567994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 24 23:32:01.086955 containerd[1609]: time="2026-04-24T23:32:01.086580274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086955 containerd[1609]: time="2026-04-24T23:32:01.086597310Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.086955 containerd[1609]: time="2026-04-24T23:32:01.086609395Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:32:01.093514 containerd[1609]: time="2026-04-24T23:32:01.093474598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093632053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093653221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093767092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093792704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093809038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093833364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093848178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093861432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093884315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093896829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093936709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093952264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.093968637Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.094002514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096031 containerd[1609]: time="2026-04-24T23:32:01.094017327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094029061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094152133Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094170884Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094190999Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094203474Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094213649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094225266Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094236220Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:32:01.096400 containerd[1609]: time="2026-04-24T23:32:01.094246590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 24 23:32:01.100427 containerd[1609]: time="2026-04-24T23:32:01.096957703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:32:01.100427 containerd[1609]: time="2026-04-24T23:32:01.097828089Z" level=info msg="Connect containerd service" Apr 24 23:32:01.100427 containerd[1609]: time="2026-04-24T23:32:01.097899351Z" level=info msg="using legacy CRI server" Apr 24 23:32:01.100427 containerd[1609]: time="2026-04-24T23:32:01.097910267Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:32:01.100864 containerd[1609]: time="2026-04-24T23:32:01.100828811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:32:01.101696 containerd[1609]: time="2026-04-24T23:32:01.101667153Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102012041Z" level=info msg="Start subscribing containerd event" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102080574Z" level=info msg="Start recovering state" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102156124Z" level=info msg="Start event monitor" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102167663Z" 
level=info msg="Start snapshots syncer" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102177799Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:32:01.102615 containerd[1609]: time="2026-04-24T23:32:01.102185635Z" level=info msg="Start streaming server" Apr 24 23:32:01.103021 containerd[1609]: time="2026-04-24T23:32:01.103001912Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:32:01.106486 containerd[1609]: time="2026-04-24T23:32:01.106457143Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:32:01.107688 containerd[1609]: time="2026-04-24T23:32:01.106651983Z" level=info msg="containerd successfully booted in 0.115095s" Apr 24 23:32:01.141400 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:32:01.165349 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:32:01.169922 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 23:32:01.177451 systemd[1]: Finished sshkeys.service. Apr 24 23:32:01.228082 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:32:01.413743 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:32:01.441208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:32:01.454004 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:32:01.467796 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:32:01.468428 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:32:01.481047 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:32:01.502564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:32:01.511930 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 24 23:32:01.526129 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 24 23:32:01.527137 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:32:01.542565 tar[1589]: linux-arm64/README.md Apr 24 23:32:01.566406 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:32:01.897881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:01.899632 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:32:01.901397 systemd[1]: Startup finished in 7.472s (kernel) + 4.608s (userspace) = 12.080s. Apr 24 23:32:01.911863 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:32:02.441727 kubelet[1707]: E0424 23:32:02.441650 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:32:02.446994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:32:02.447321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:32:12.698237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:32:12.709047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:12.855904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:32:12.869424 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:32:12.918518 kubelet[1732]: E0424 23:32:12.918438 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:32:12.923718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:32:12.923988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:32:23.175114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:32:23.185024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:23.324946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:23.331192 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:32:23.389296 kubelet[1752]: E0424 23:32:23.389238 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:32:23.393944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:32:23.394116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:32:31.838134 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 24 23:32:31.851473 systemd[1]: Started sshd@0-91.99.220.32:22-50.85.169.122:53336.service - OpenSSH per-connection server daemon (50.85.169.122:53336). Apr 24 23:32:31.982632 sshd[1761]: Accepted publickey for core from 50.85.169.122 port 53336 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:31.985703 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:31.997623 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:32:32.005161 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:32:32.009046 systemd-logind[1569]: New session 1 of user core. Apr 24 23:32:32.020059 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:32:32.032237 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:32:32.036200 (systemd)[1767]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:32:32.154873 systemd[1767]: Queued start job for default target default.target. Apr 24 23:32:32.155289 systemd[1767]: Created slice app.slice - User Application Slice. Apr 24 23:32:32.155307 systemd[1767]: Reached target paths.target - Paths. Apr 24 23:32:32.155318 systemd[1767]: Reached target timers.target - Timers. Apr 24 23:32:32.160849 systemd[1767]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:32:32.171946 systemd[1767]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:32:32.172025 systemd[1767]: Reached target sockets.target - Sockets. Apr 24 23:32:32.172042 systemd[1767]: Reached target basic.target - Basic System. Apr 24 23:32:32.172108 systemd[1767]: Reached target default.target - Main User Target. Apr 24 23:32:32.172142 systemd[1767]: Startup finished in 128ms. Apr 24 23:32:32.172224 systemd[1]: Started user@500.service - User Manager for UID 500. 
Apr 24 23:32:32.175182 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:32:32.291122 systemd[1]: Started sshd@1-91.99.220.32:22-50.85.169.122:53352.service - OpenSSH per-connection server daemon (50.85.169.122:53352). Apr 24 23:32:32.412356 sshd[1779]: Accepted publickey for core from 50.85.169.122 port 53352 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:32.415321 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:32.421102 systemd-logind[1569]: New session 2 of user core. Apr 24 23:32:32.427212 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:32:32.531913 sshd[1779]: pam_unix(sshd:session): session closed for user core Apr 24 23:32:32.537458 systemd[1]: sshd@1-91.99.220.32:22-50.85.169.122:53352.service: Deactivated successfully. Apr 24 23:32:32.542380 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:32:32.544509 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:32:32.554134 systemd[1]: Started sshd@2-91.99.220.32:22-50.85.169.122:53364.service - OpenSSH per-connection server daemon (50.85.169.122:53364). Apr 24 23:32:32.556165 systemd-logind[1569]: Removed session 2. Apr 24 23:32:32.681849 sshd[1787]: Accepted publickey for core from 50.85.169.122 port 53364 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:32.683509 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:32.688344 systemd-logind[1569]: New session 3 of user core. Apr 24 23:32:32.699778 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:32:32.797035 sshd[1787]: pam_unix(sshd:session): session closed for user core Apr 24 23:32:32.800729 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:32:32.801299 systemd[1]: sshd@2-91.99.220.32:22-50.85.169.122:53364.service: Deactivated successfully. 
Apr 24 23:32:32.805556 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:32:32.807185 systemd-logind[1569]: Removed session 3. Apr 24 23:32:32.825401 systemd[1]: Started sshd@3-91.99.220.32:22-50.85.169.122:53372.service - OpenSSH per-connection server daemon (50.85.169.122:53372). Apr 24 23:32:32.950839 sshd[1795]: Accepted publickey for core from 50.85.169.122 port 53372 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:32.952487 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:32.959665 systemd-logind[1569]: New session 4 of user core. Apr 24 23:32:32.966193 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:32:33.072968 sshd[1795]: pam_unix(sshd:session): session closed for user core Apr 24 23:32:33.077449 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:32:33.077871 systemd[1]: sshd@3-91.99.220.32:22-50.85.169.122:53372.service: Deactivated successfully. Apr 24 23:32:33.082504 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:32:33.083709 systemd-logind[1569]: Removed session 4. Apr 24 23:32:33.094084 systemd[1]: Started sshd@4-91.99.220.32:22-50.85.169.122:53374.service - OpenSSH per-connection server daemon (50.85.169.122:53374). Apr 24 23:32:33.221610 sshd[1803]: Accepted publickey for core from 50.85.169.122 port 53374 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:33.223614 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:33.229796 systemd-logind[1569]: New session 5 of user core. Apr 24 23:32:33.236017 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:32:33.338230 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:32:33.338556 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:32:33.356862 sudo[1807]: pam_unix(sudo:session): session closed for user root Apr 24 23:32:33.374557 sshd[1803]: pam_unix(sshd:session): session closed for user core Apr 24 23:32:33.382629 systemd[1]: sshd@4-91.99.220.32:22-50.85.169.122:53374.service: Deactivated successfully. Apr 24 23:32:33.384828 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:32:33.387058 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:32:33.388160 systemd-logind[1569]: Removed session 5. Apr 24 23:32:33.401150 systemd[1]: Started sshd@5-91.99.220.32:22-50.85.169.122:53382.service - OpenSSH per-connection server daemon (50.85.169.122:53382). Apr 24 23:32:33.410991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 24 23:32:33.415919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:33.534154 sshd[1812]: Accepted publickey for core from 50.85.169.122 port 53382 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:33.536532 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:33.543741 systemd-logind[1569]: New session 6 of user core. Apr 24 23:32:33.559045 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:32:33.575907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:32:33.591650 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:32:33.638044 kubelet[1826]: E0424 23:32:33.637867 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:32:33.641079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:32:33.641276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:32:33.662624 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:32:33.663315 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:32:33.668114 sudo[1837]: pam_unix(sudo:session): session closed for user root Apr 24 23:32:33.674548 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:32:33.674935 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:32:33.692757 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:32:33.694317 auditctl[1840]: No rules Apr 24 23:32:33.696500 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:32:33.696762 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:32:33.701618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:32:33.751234 augenrules[1859]: No rules Apr 24 23:32:33.754156 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 24 23:32:33.757985 sudo[1836]: pam_unix(sudo:session): session closed for user root Apr 24 23:32:33.775025 sshd[1812]: pam_unix(sshd:session): session closed for user core Apr 24 23:32:33.782533 systemd[1]: sshd@5-91.99.220.32:22-50.85.169.122:53382.service: Deactivated successfully. Apr 24 23:32:33.783740 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:32:33.785472 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:32:33.787843 systemd-logind[1569]: Removed session 6. Apr 24 23:32:33.799103 systemd[1]: Started sshd@6-91.99.220.32:22-50.85.169.122:53384.service - OpenSSH per-connection server daemon (50.85.169.122:53384). Apr 24 23:32:33.922514 sshd[1868]: Accepted publickey for core from 50.85.169.122 port 53384 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:32:33.925479 sshd[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:32:33.930395 systemd-logind[1569]: New session 7 of user core. Apr 24 23:32:33.937239 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:32:34.027981 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:32:34.028266 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:32:34.327109 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:32:34.328947 (dockerd)[1887]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:32:34.577556 dockerd[1887]: time="2026-04-24T23:32:34.577378107Z" level=info msg="Starting up" Apr 24 23:32:34.658107 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1459085909-merged.mount: Deactivated successfully. Apr 24 23:32:34.714328 dockerd[1887]: time="2026-04-24T23:32:34.713949311Z" level=info msg="Loading containers: start." 
Apr 24 23:32:34.815700 kernel: Initializing XFRM netlink socket Apr 24 23:32:34.902823 systemd-networkd[1246]: docker0: Link UP Apr 24 23:32:34.931823 dockerd[1887]: time="2026-04-24T23:32:34.931632938Z" level=info msg="Loading containers: done." Apr 24 23:32:34.948777 dockerd[1887]: time="2026-04-24T23:32:34.948712633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:32:34.948974 dockerd[1887]: time="2026-04-24T23:32:34.948834276Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:32:34.948974 dockerd[1887]: time="2026-04-24T23:32:34.948959399Z" level=info msg="Daemon has completed initialization" Apr 24 23:32:34.988614 dockerd[1887]: time="2026-04-24T23:32:34.988387266Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:32:34.989350 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:32:35.498837 containerd[1609]: time="2026-04-24T23:32:35.498479039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:32:36.096928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070294039.mount: Deactivated successfully. 
Apr 24 23:32:37.252926 containerd[1609]: time="2026-04-24T23:32:37.252845199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:37.255611 containerd[1609]: time="2026-04-24T23:32:37.255287685Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885" Apr 24 23:32:37.256708 containerd[1609]: time="2026-04-24T23:32:37.256623350Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:37.261374 containerd[1609]: time="2026-04-24T23:32:37.261242355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:37.264758 containerd[1609]: time="2026-04-24T23:32:37.263040309Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.764474307s" Apr 24 23:32:37.264758 containerd[1609]: time="2026-04-24T23:32:37.263100950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 24 23:32:37.264758 containerd[1609]: time="2026-04-24T23:32:37.263975366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:32:38.488718 containerd[1609]: time="2026-04-24T23:32:38.487325724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:38.489472 containerd[1609]: time="2026-04-24T23:32:38.489418561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794" Apr 24 23:32:38.490153 containerd[1609]: time="2026-04-24T23:32:38.490048612Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:38.494716 containerd[1609]: time="2026-04-24T23:32:38.494149244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:38.495587 containerd[1609]: time="2026-04-24T23:32:38.495536788Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.231520821s" Apr 24 23:32:38.495587 containerd[1609]: time="2026-04-24T23:32:38.495584029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 24 23:32:38.496128 containerd[1609]: time="2026-04-24T23:32:38.496099118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 23:32:39.550714 containerd[1609]: time="2026-04-24T23:32:39.549188356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:39.552910 containerd[1609]: 
time="2026-04-24T23:32:39.552837776Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378" Apr 24 23:32:39.554124 containerd[1609]: time="2026-04-24T23:32:39.554054117Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:39.562843 containerd[1609]: time="2026-04-24T23:32:39.562753981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:39.564034 containerd[1609]: time="2026-04-24T23:32:39.563997242Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.067862803s" Apr 24 23:32:39.564160 containerd[1609]: time="2026-04-24T23:32:39.564144205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 24 23:32:39.564786 containerd[1609]: time="2026-04-24T23:32:39.564764215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:32:40.541014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144991410.mount: Deactivated successfully. 
Apr 24 23:32:40.831738 containerd[1609]: time="2026-04-24T23:32:40.831548804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:40.833077 containerd[1609]: time="2026-04-24T23:32:40.833016588Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534" Apr 24 23:32:40.834198 containerd[1609]: time="2026-04-24T23:32:40.834145405Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:40.836711 containerd[1609]: time="2026-04-24T23:32:40.836642525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:40.838220 containerd[1609]: time="2026-04-24T23:32:40.838157429Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.273258731s" Apr 24 23:32:40.838220 containerd[1609]: time="2026-04-24T23:32:40.838208789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 24 23:32:40.838739 containerd[1609]: time="2026-04-24T23:32:40.838654316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:32:41.322020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870031548.mount: Deactivated successfully. 
Apr 24 23:32:42.202541 containerd[1609]: time="2026-04-24T23:32:42.202200546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.204390 containerd[1609]: time="2026-04-24T23:32:42.204263655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Apr 24 23:32:42.205705 containerd[1609]: time="2026-04-24T23:32:42.205312150Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.210193 containerd[1609]: time="2026-04-24T23:32:42.209635211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.211127 containerd[1609]: time="2026-04-24T23:32:42.211086872Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.372374275s" Apr 24 23:32:42.211127 containerd[1609]: time="2026-04-24T23:32:42.211128273Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 24 23:32:42.212582 containerd[1609]: time="2026-04-24T23:32:42.212553213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 23:32:42.645361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739250804.mount: Deactivated successfully. 
Apr 24 23:32:42.653411 containerd[1609]: time="2026-04-24T23:32:42.653352297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.654697 containerd[1609]: time="2026-04-24T23:32:42.654642235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 24 23:32:42.655854 containerd[1609]: time="2026-04-24T23:32:42.655467447Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.659589 containerd[1609]: time="2026-04-24T23:32:42.659533305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:42.660417 containerd[1609]: time="2026-04-24T23:32:42.660199234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 447.492699ms" Apr 24 23:32:42.660417 containerd[1609]: time="2026-04-24T23:32:42.660233275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 24 23:32:42.661129 containerd[1609]: time="2026-04-24T23:32:42.661093327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 23:32:43.167032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206759324.mount: Deactivated successfully. Apr 24 23:32:43.833262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Apr 24 23:32:43.843045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:44.056700 containerd[1609]: time="2026-04-24T23:32:44.055210241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:44.059792 containerd[1609]: time="2026-04-24T23:32:44.059727499Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470" Apr 24 23:32:44.063509 containerd[1609]: time="2026-04-24T23:32:44.063439626Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:44.070186 containerd[1609]: time="2026-04-24T23:32:44.070110031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:32:44.073205 containerd[1609]: time="2026-04-24T23:32:44.073046949Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.411919102s" Apr 24 23:32:44.073205 containerd[1609]: time="2026-04-24T23:32:44.073096709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 24 23:32:44.103930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:32:44.123170 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:32:44.187186 kubelet[2236]: E0424 23:32:44.187132 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:32:44.193007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:32:44.193183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:32:45.830838 update_engine[1573]: I20260424 23:32:45.830752 1573 update_attempter.cc:509] Updating boot flags... Apr 24 23:32:45.877371 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2268) Apr 24 23:32:45.974796 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2268) Apr 24 23:32:49.306209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:49.318452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:49.351491 systemd[1]: Reloading requested from client PID 2285 ('systemctl') (unit session-7.scope)... Apr 24 23:32:49.351661 systemd[1]: Reloading... Apr 24 23:32:49.475373 zram_generator::config[2326]: No configuration found. Apr 24 23:32:49.609652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:32:49.683900 systemd[1]: Reloading finished in 331 ms. 
Apr 24 23:32:49.733154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:32:49.733478 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:32:49.734083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:49.752837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:49.896387 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:32:49.898168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:49.944159 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:32:49.944159 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:32:49.944159 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:32:49.944775 kubelet[2385]: I0424 23:32:49.944222 2385 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:32:51.068983 kubelet[2385]: I0424 23:32:51.068943 2385 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:32:51.070745 kubelet[2385]: I0424 23:32:51.069452 2385 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:32:51.070745 kubelet[2385]: I0424 23:32:51.069846 2385 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:32:51.106083 kubelet[2385]: E0424 23:32:51.106045 2385 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://91.99.220.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:32:51.107455 kubelet[2385]: I0424 23:32:51.107410 2385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:32:51.115995 kubelet[2385]: E0424 23:32:51.115925 2385 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:32:51.115995 kubelet[2385]: I0424 23:32:51.115983 2385 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:32:51.119241 kubelet[2385]: I0424 23:32:51.119195 2385 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:32:51.119665 kubelet[2385]: I0424 23:32:51.119637 2385 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:32:51.119859 kubelet[2385]: I0424 23:32:51.119666 2385 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-4ca6954963","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:32:51.119998 kubelet[2385]: I0424 23:32:51.119866 2385 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:32:51.119998 kubelet[2385]: I0424 23:32:51.119875 2385 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:32:51.120110 kubelet[2385]: I0424 23:32:51.120062 2385 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:32:51.123533 kubelet[2385]: I0424 23:32:51.123359 2385 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:32:51.123533 kubelet[2385]: I0424 23:32:51.123392 2385 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:32:51.123533 kubelet[2385]: I0424 23:32:51.123420 2385 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:32:51.126017 kubelet[2385]: I0424 23:32:51.125171 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:32:51.127704 kubelet[2385]: E0424 23:32:51.127611 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.220.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-4ca6954963&limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:32:51.128818 kubelet[2385]: E0424 23:32:51.128558 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.220.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:32:51.129199 kubelet[2385]: I0424 23:32:51.129166 2385 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:32:51.130049 kubelet[2385]: I0424 23:32:51.130019 2385 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 
23:32:51.130276 kubelet[2385]: W0424 23:32:51.130257 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 23:32:51.136343 kubelet[2385]: I0424 23:32:51.136306 2385 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:32:51.136469 kubelet[2385]: I0424 23:32:51.136364 2385 server.go:1289] "Started kubelet" Apr 24 23:32:51.139666 kubelet[2385]: I0424 23:32:51.139529 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:32:51.142009 kubelet[2385]: I0424 23:32:51.141865 2385 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:32:51.144173 kubelet[2385]: I0424 23:32:51.144111 2385 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:32:51.148856 kubelet[2385]: I0424 23:32:51.148599 2385 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:32:51.149463 kubelet[2385]: E0424 23:32:51.149438 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:51.153115 kubelet[2385]: I0424 23:32:51.149844 2385 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:32:51.153115 kubelet[2385]: I0424 23:32:51.149793 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:32:51.153370 kubelet[2385]: I0424 23:32:51.153345 2385 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:32:51.153413 kubelet[2385]: I0424 23:32:51.151719 2385 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:32:51.157702 kubelet[2385]: E0424 23:32:51.155150 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.220.32:6443/api/v1/namespaces/default/events\": dial tcp 91.99.220.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-4ca6954963.18a96eff6350e293 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-4ca6954963,UID:ci-4081-3-6-n-4ca6954963,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-4ca6954963,},FirstTimestamp:2026-04-24 23:32:51.136332435 +0000 UTC m=+1.234932716,LastTimestamp:2026-04-24 23:32:51.136332435 +0000 UTC m=+1.234932716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-4ca6954963,}" Apr 24 23:32:51.157702 kubelet[2385]: I0424 23:32:51.152915 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:32:51.157702 kubelet[2385]: E0424 23:32:51.157255 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.220.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:32:51.158521 kubelet[2385]: I0424 23:32:51.149921 2385 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:32:51.160219 kubelet[2385]: E0424 23:32:51.160181 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4ca6954963?timeout=10s\": dial tcp 91.99.220.32:6443: 
connect: connection refused" interval="200ms" Apr 24 23:32:51.160557 kubelet[2385]: I0424 23:32:51.152441 2385 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:32:51.160660 kubelet[2385]: I0424 23:32:51.160632 2385 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:32:51.164783 kubelet[2385]: E0424 23:32:51.164752 2385 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:32:51.166030 kubelet[2385]: I0424 23:32:51.166005 2385 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:32:51.176286 kubelet[2385]: I0424 23:32:51.176213 2385 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:32:51.176286 kubelet[2385]: I0424 23:32:51.176273 2385 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:32:51.176438 kubelet[2385]: I0424 23:32:51.176305 2385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:32:51.176438 kubelet[2385]: I0424 23:32:51.176313 2385 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:32:51.176438 kubelet[2385]: E0424 23:32:51.176355 2385 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:32:51.186110 kubelet[2385]: E0424 23:32:51.185989 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.220.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:32:51.192063 kubelet[2385]: I0424 23:32:51.192041 2385 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:32:51.192189 kubelet[2385]: I0424 23:32:51.192180 2385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:32:51.192372 kubelet[2385]: I0424 23:32:51.192277 2385 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:32:51.194724 kubelet[2385]: I0424 23:32:51.194555 2385 policy_none.go:49] "None policy: Start" Apr 24 23:32:51.194724 kubelet[2385]: I0424 23:32:51.194585 2385 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:32:51.194724 kubelet[2385]: I0424 23:32:51.194598 2385 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:32:51.199796 kubelet[2385]: E0424 23:32:51.199662 2385 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:32:51.200011 kubelet[2385]: I0424 23:32:51.199972 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:32:51.200011 kubelet[2385]: I0424 23:32:51.199991 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:32:51.201597 kubelet[2385]: I0424 
23:32:51.201539 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:32:51.204627 kubelet[2385]: E0424 23:32:51.204592 2385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:32:51.204882 kubelet[2385]: E0424 23:32:51.204819 2385 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:51.287501 kubelet[2385]: E0424 23:32:51.287098 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.291041 kubelet[2385]: E0424 23:32:51.291010 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.296286 kubelet[2385]: E0424 23:32:51.296124 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.304557 kubelet[2385]: I0424 23:32:51.304092 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.304835 kubelet[2385]: E0424 23:32:51.304807 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.220.32:6443/api/v1/nodes\": dial tcp 91.99.220.32:6443: connect: connection refused" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.360965 kubelet[2385]: I0424 23:32:51.360520 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: 
\"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.360965 kubelet[2385]: I0424 23:32:51.360583 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: \"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.360965 kubelet[2385]: I0424 23:32:51.360616 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.360965 kubelet[2385]: I0424 23:32:51.360640 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.360965 kubelet[2385]: I0424 23:32:51.360667 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.361372 kubelet[2385]: I0424 23:32:51.360716 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45997494225c65f9cf915e2e4688a128-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-4ca6954963\" (UID: \"45997494225c65f9cf915e2e4688a128\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.361372 kubelet[2385]: I0424 23:32:51.360738 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: \"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.361372 kubelet[2385]: I0424 23:32:51.360760 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.361372 kubelet[2385]: I0424 23:32:51.360783 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.361372 kubelet[2385]: E0424 23:32:51.360878 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4ca6954963?timeout=10s\": dial tcp 91.99.220.32:6443: connect: connection refused" interval="400ms" Apr 24 23:32:51.411005 systemd[1]: Started sshd@7-91.99.220.32:22-2.57.122.177:43030.service - OpenSSH 
per-connection server daemon (2.57.122.177:43030). Apr 24 23:32:51.453702 sshd[2421]: Connection closed by 2.57.122.177 port 43030 Apr 24 23:32:51.452119 systemd[1]: sshd@7-91.99.220.32:22-2.57.122.177:43030.service: Deactivated successfully. Apr 24 23:32:51.510986 kubelet[2385]: I0424 23:32:51.509014 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.512392 kubelet[2385]: E0424 23:32:51.512357 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.220.32:6443/api/v1/nodes\": dial tcp 91.99.220.32:6443: connect: connection refused" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.589020 containerd[1609]: time="2026-04-24T23:32:51.588957458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-4ca6954963,Uid:dfdd80473ac93efbba343464200b2552,Namespace:kube-system,Attempt:0,}" Apr 24 23:32:51.593203 containerd[1609]: time="2026-04-24T23:32:51.592734613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-4ca6954963,Uid:8303ba2256b8bdf3cc35ada754354ff6,Namespace:kube-system,Attempt:0,}" Apr 24 23:32:51.598214 containerd[1609]: time="2026-04-24T23:32:51.598142782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-4ca6954963,Uid:45997494225c65f9cf915e2e4688a128,Namespace:kube-system,Attempt:0,}" Apr 24 23:32:51.762787 kubelet[2385]: E0424 23:32:51.761837 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4ca6954963?timeout=10s\": dial tcp 91.99.220.32:6443: connect: connection refused" interval="800ms" Apr 24 23:32:51.915432 kubelet[2385]: I0424 23:32:51.915395 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:51.915834 kubelet[2385]: E0424 23:32:51.915801 2385 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.220.32:6443/api/v1/nodes\": dial tcp 91.99.220.32:6443: connect: connection refused" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:52.009589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293184770.mount: Deactivated successfully. Apr 24 23:32:52.020548 containerd[1609]: time="2026-04-24T23:32:52.019455233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:32:52.021720 containerd[1609]: time="2026-04-24T23:32:52.021664253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:32:52.023184 containerd[1609]: time="2026-04-24T23:32:52.023142145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:32:52.024784 containerd[1609]: time="2026-04-24T23:32:52.024738639Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:32:52.028064 containerd[1609]: time="2026-04-24T23:32:52.027186260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:32:52.030581 containerd[1609]: time="2026-04-24T23:32:52.028854715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:32:52.031561 containerd[1609]: time="2026-04-24T23:32:52.031463617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 24 
23:32:52.032780 containerd[1609]: time="2026-04-24T23:32:52.032513587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:32:52.033463 containerd[1609]: time="2026-04-24T23:32:52.033418754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 440.601461ms" Apr 24 23:32:52.040270 containerd[1609]: time="2026-04-24T23:32:52.040041132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 441.621668ms" Apr 24 23:32:52.045482 containerd[1609]: time="2026-04-24T23:32:52.045436938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.353719ms" Apr 24 23:32:52.197604 containerd[1609]: time="2026-04-24T23:32:52.197410014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:32:52.197604 containerd[1609]: time="2026-04-24T23:32:52.197474535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:32:52.197604 containerd[1609]: time="2026-04-24T23:32:52.197498055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.198322 containerd[1609]: time="2026-04-24T23:32:52.198180981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.200965 containerd[1609]: time="2026-04-24T23:32:52.200722163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:32:52.200965 containerd[1609]: time="2026-04-24T23:32:52.200779044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:32:52.200965 containerd[1609]: time="2026-04-24T23:32:52.200795164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.200965 containerd[1609]: time="2026-04-24T23:32:52.200890165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.203154 containerd[1609]: time="2026-04-24T23:32:52.202921142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:32:52.203154 containerd[1609]: time="2026-04-24T23:32:52.202987743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:32:52.203154 containerd[1609]: time="2026-04-24T23:32:52.203002663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.203154 containerd[1609]: time="2026-04-24T23:32:52.203099264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:32:52.284817 containerd[1609]: time="2026-04-24T23:32:52.284429208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-4ca6954963,Uid:45997494225c65f9cf915e2e4688a128,Namespace:kube-system,Attempt:0,} returns sandbox id \"b35e7a7636c6a3a3485121273f056d8d1a6ccac24fe23cc71ab11e69d9a6c019\"" Apr 24 23:32:52.296180 containerd[1609]: time="2026-04-24T23:32:52.295932148Z" level=info msg="CreateContainer within sandbox \"b35e7a7636c6a3a3485121273f056d8d1a6ccac24fe23cc71ab11e69d9a6c019\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:32:52.299889 containerd[1609]: time="2026-04-24T23:32:52.299630060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-4ca6954963,Uid:dfdd80473ac93efbba343464200b2552,Namespace:kube-system,Attempt:0,} returns sandbox id \"c268c246a432fdf5b0afd03fd000cf032797f9837ad8f31647393c5fbb5c1f51\"" Apr 24 23:32:52.311727 containerd[1609]: time="2026-04-24T23:32:52.309935549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-4ca6954963,Uid:8303ba2256b8bdf3cc35ada754354ff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d2830db54b5157a2f5befbab5c980142c1d2138f518424212a3c6d6c639927\"" Apr 24 23:32:52.315426 containerd[1609]: time="2026-04-24T23:32:52.315376716Z" level=info msg="CreateContainer within sandbox \"c268c246a432fdf5b0afd03fd000cf032797f9837ad8f31647393c5fbb5c1f51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:32:52.317506 containerd[1609]: time="2026-04-24T23:32:52.317465934Z" level=info msg="CreateContainer within sandbox 
\"82d2830db54b5157a2f5befbab5c980142c1d2138f518424212a3c6d6c639927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:32:52.321005 kubelet[2385]: E0424 23:32:52.320651 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.220.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:32:52.331913 containerd[1609]: time="2026-04-24T23:32:52.331848979Z" level=info msg="CreateContainer within sandbox \"b35e7a7636c6a3a3485121273f056d8d1a6ccac24fe23cc71ab11e69d9a6c019\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5\"" Apr 24 23:32:52.332885 kubelet[2385]: E0424 23:32:52.332847 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.220.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:32:52.333550 containerd[1609]: time="2026-04-24T23:32:52.333513553Z" level=info msg="StartContainer for \"13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5\"" Apr 24 23:32:52.344511 containerd[1609]: time="2026-04-24T23:32:52.344382887Z" level=info msg="CreateContainer within sandbox \"c268c246a432fdf5b0afd03fd000cf032797f9837ad8f31647393c5fbb5c1f51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05faeb02496c70e79d2a605e53d689af04a999b8b25fc227b7f512a587927af6\"" Apr 24 23:32:52.345802 containerd[1609]: time="2026-04-24T23:32:52.345461657Z" level=info msg="StartContainer for \"05faeb02496c70e79d2a605e53d689af04a999b8b25fc227b7f512a587927af6\"" Apr 24 
23:32:52.354439 containerd[1609]: time="2026-04-24T23:32:52.354331053Z" level=info msg="CreateContainer within sandbox \"82d2830db54b5157a2f5befbab5c980142c1d2138f518424212a3c6d6c639927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07\"" Apr 24 23:32:52.355262 containerd[1609]: time="2026-04-24T23:32:52.355231581Z" level=info msg="StartContainer for \"8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07\"" Apr 24 23:32:52.444708 containerd[1609]: time="2026-04-24T23:32:52.442317815Z" level=info msg="StartContainer for \"13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5\" returns successfully" Apr 24 23:32:52.451141 containerd[1609]: time="2026-04-24T23:32:52.451057971Z" level=info msg="StartContainer for \"05faeb02496c70e79d2a605e53d689af04a999b8b25fc227b7f512a587927af6\" returns successfully" Apr 24 23:32:52.454704 kubelet[2385]: E0424 23:32:52.454379 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.220.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:32:52.490857 containerd[1609]: time="2026-04-24T23:32:52.490805835Z" level=info msg="StartContainer for \"8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07\" returns successfully" Apr 24 23:32:52.558884 kubelet[2385]: E0424 23:32:52.557637 2385 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.220.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-4ca6954963&limit=500&resourceVersion=0\": dial tcp 91.99.220.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:32:52.563889 kubelet[2385]: 
E0424 23:32:52.563831 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4ca6954963?timeout=10s\": dial tcp 91.99.220.32:6443: connect: connection refused" interval="1.6s" Apr 24 23:32:52.721721 kubelet[2385]: I0424 23:32:52.719147 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:53.203592 kubelet[2385]: E0424 23:32:53.203015 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:53.210480 kubelet[2385]: E0424 23:32:53.209871 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:53.214294 kubelet[2385]: E0424 23:32:53.213951 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.216728 kubelet[2385]: E0424 23:32:54.216116 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.216728 kubelet[2385]: E0424 23:32:54.216572 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.291916 kubelet[2385]: E0424 23:32:54.291760 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.403694 kubelet[2385]: I0424 23:32:54.401418 2385 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.404208 kubelet[2385]: E0424 23:32:54.403907 2385 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-4ca6954963\": node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.449561 kubelet[2385]: E0424 23:32:54.449521 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.550756 kubelet[2385]: E0424 23:32:54.550167 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.650717 kubelet[2385]: E0424 23:32:54.650538 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.666332 kubelet[2385]: E0424 23:32:54.666296 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:54.750982 kubelet[2385]: E0424 23:32:54.750916 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.851891 kubelet[2385]: E0424 23:32:54.851797 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:54.952307 kubelet[2385]: E0424 23:32:54.952241 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:55.053487 kubelet[2385]: E0424 23:32:55.053390 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:55.153871 kubelet[2385]: E0424 23:32:55.153726 2385 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:55.220111 kubelet[2385]: E0424 23:32:55.220023 2385 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4ca6954963\" not found" node="ci-4081-3-6-n-4ca6954963" Apr 24 23:32:55.254929 kubelet[2385]: E0424 23:32:55.254810 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4ca6954963\" not found" Apr 24 23:32:55.351614 kubelet[2385]: I0424 23:32:55.350275 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:55.363164 kubelet[2385]: I0424 23:32:55.363124 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:55.375352 kubelet[2385]: I0424 23:32:55.375295 2385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" Apr 24 23:32:56.129693 kubelet[2385]: I0424 23:32:56.129611 2385 apiserver.go:52] "Watching apiserver" Apr 24 23:32:56.153947 kubelet[2385]: I0424 23:32:56.153848 2385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:32:56.869929 systemd[1]: Reloading requested from client PID 2679 ('systemctl') (unit session-7.scope)... Apr 24 23:32:56.869949 systemd[1]: Reloading... Apr 24 23:32:56.961748 zram_generator::config[2719]: No configuration found. Apr 24 23:32:57.090931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:32:57.173529 systemd[1]: Reloading finished in 303 ms. Apr 24 23:32:57.208797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 24 23:32:57.227316 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:32:57.227939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:57.236137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:32:57.390792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:32:57.405553 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:32:57.457834 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:32:57.457834 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:32:57.457834 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:32:57.457834 kubelet[2774]: I0424 23:32:57.457542 2774 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:32:57.476735 kubelet[2774]: I0424 23:32:57.473126 2774 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:32:57.476735 kubelet[2774]: I0424 23:32:57.473176 2774 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:32:57.476735 kubelet[2774]: I0424 23:32:57.473417 2774 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:32:57.478319 kubelet[2774]: I0424 23:32:57.478253 2774 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 24 23:32:57.481844 kubelet[2774]: I0424 23:32:57.481794 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:32:57.486011 kubelet[2774]: E0424 23:32:57.485967 2774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:32:57.486011 kubelet[2774]: I0424 23:32:57.485999 2774 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:32:57.490878 kubelet[2774]: I0424 23:32:57.488839 2774 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:32:57.490878 kubelet[2774]: I0424 23:32:57.489920 2774 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:32:57.490878 kubelet[2774]: I0424 23:32:57.489966 2774 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-4ca6954963","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 24 23:32:57.490878 kubelet[2774]: I0424 23:32:57.490215 2774 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490226 2774 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490288 2774 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490454 2774 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490467 2774 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490495 2774 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:32:57.491758 kubelet[2774]: I0424 23:32:57.490515 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:32:57.501264 kubelet[2774]: I0424 23:32:57.501184 2774 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:32:57.503713 kubelet[2774]: I0424 23:32:57.503137 2774 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:32:57.507810 kubelet[2774]: I0424 23:32:57.505993 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:32:57.508207 kubelet[2774]: I0424 23:32:57.508188 2774 server.go:1289] "Started kubelet"
Apr 24 23:32:57.508549 kubelet[2774]: I0424 23:32:57.508518 2774 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:32:57.509401 kubelet[2774]: I0424 23:32:57.509137 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:32:57.510299 kubelet[2774]: I0424 23:32:57.510221 2774 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:32:57.512698 kubelet[2774]: I0424 23:32:57.512204 2774 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:32:57.518962 kubelet[2774]: I0424 23:32:57.518935 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:32:57.526465 kubelet[2774]: I0424 23:32:57.526436 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:32:57.527828 kubelet[2774]: I0424 23:32:57.527781 2774 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:32:57.529248 kubelet[2774]: I0424 23:32:57.529225 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:32:57.531716 kubelet[2774]: I0424 23:32:57.531695 2774 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:32:57.536534 kubelet[2774]: E0424 23:32:57.536448 2774 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:32:57.540813 kubelet[2774]: I0424 23:32:57.540380 2774 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:32:57.540813 kubelet[2774]: I0424 23:32:57.540406 2774 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:32:57.540813 kubelet[2774]: I0424 23:32:57.540524 2774 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:32:57.555095 kubelet[2774]: I0424 23:32:57.555035 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:32:57.557988 kubelet[2774]: I0424 23:32:57.557871 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:32:57.557988 kubelet[2774]: I0424 23:32:57.557900 2774 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:32:57.557988 kubelet[2774]: I0424 23:32:57.557921 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:32:57.557988 kubelet[2774]: I0424 23:32:57.557927 2774 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:32:57.558469 kubelet[2774]: E0424 23:32:57.557976 2774 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:32:57.608600 kubelet[2774]: I0424 23:32:57.608570 2774 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608712 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608748 2774 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608899 2774 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608909 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608926 2774 policy_none.go:49] "None policy: Start"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608937 2774 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.608946 2774 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:32:57.609668 kubelet[2774]: I0424 23:32:57.609024 2774 state_mem.go:75] "Updated machine memory state"
Apr 24 23:32:57.611483 kubelet[2774]: E0424 23:32:57.610375 2774 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:32:57.611483 kubelet[2774]: I0424 23:32:57.610557 2774 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:32:57.611483 kubelet[2774]: I0424 23:32:57.610570 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:32:57.612113 kubelet[2774]: I0424 23:32:57.612081 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:32:57.613764 kubelet[2774]: E0424 23:32:57.613740 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:32:57.660413 kubelet[2774]: I0424 23:32:57.660349 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.662723 kubelet[2774]: I0424 23:32:57.661002 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.662723 kubelet[2774]: I0424 23:32:57.661452 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.672465 kubelet[2774]: E0424 23:32:57.671405 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.673268 kubelet[2774]: E0424 23:32:57.672728 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-4ca6954963\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.673650 kubelet[2774]: E0424 23:32:57.673430 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.721477 kubelet[2774]: I0424 23:32:57.721288 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.732886 kubelet[2774]: I0424 23:32:57.732840 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.732886 kubelet[2774]: I0424 23:32:57.732888 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: \"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.732886 kubelet[2774]: I0424 23:32:57.732914 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.732886 kubelet[2774]: I0424 23:32:57.732947 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45997494225c65f9cf915e2e4688a128-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-4ca6954963\" (UID: \"45997494225c65f9cf915e2e4688a128\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.732886 kubelet[2774]: I0424 23:32:57.732965 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: \"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.733632 kubelet[2774]: I0424 23:32:57.732979 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfdd80473ac93efbba343464200b2552-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" (UID: \"dfdd80473ac93efbba343464200b2552\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.733632 kubelet[2774]: I0424 23:32:57.733047 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.733632 kubelet[2774]: I0424 23:32:57.733084 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.733632 kubelet[2774]: I0424 23:32:57.733101 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8303ba2256b8bdf3cc35ada754354ff6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4ca6954963\" (UID: \"8303ba2256b8bdf3cc35ada754354ff6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.734410 kubelet[2774]: I0424 23:32:57.734385 2774 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:57.734741 kubelet[2774]: I0424 23:32:57.734727 2774 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:58.502884 kubelet[2774]: I0424 23:32:58.502363 2774 apiserver.go:52] "Watching apiserver"
Apr 24 23:32:58.529842 kubelet[2774]: I0424 23:32:58.529721 2774 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 24 23:32:58.580737 kubelet[2774]: I0424 23:32:58.579752 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:58.580921 kubelet[2774]: I0424 23:32:58.580873 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:58.589010 kubelet[2774]: E0424 23:32:58.588926 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-4ca6954963\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:58.590361 kubelet[2774]: E0424 23:32:58.590293 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-4ca6954963\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963"
Apr 24 23:32:58.631185 kubelet[2774]: I0424 23:32:58.630877 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4ca6954963" podStartSLOduration=3.630629178 podStartE2EDuration="3.630629178s" podCreationTimestamp="2026-04-24 23:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:32:58.610093321 +0000 UTC m=+1.199870495" watchObservedRunningTime="2026-04-24 23:32:58.630629178 +0000 UTC m=+1.220406312"
Apr 24 23:32:58.648803 kubelet[2774]: I0424 23:32:58.648581 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4ca6954963" podStartSLOduration=3.648560019 podStartE2EDuration="3.648560019s" podCreationTimestamp="2026-04-24 23:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:32:58.631483384 +0000 UTC m=+1.221260518" watchObservedRunningTime="2026-04-24 23:32:58.648560019 +0000 UTC m=+1.238337153"
Apr 24 23:32:58.648803 kubelet[2774]: I0424 23:32:58.648698 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4ca6954963" podStartSLOduration=3.64869398 podStartE2EDuration="3.64869398s" podCreationTimestamp="2026-04-24 23:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:32:58.647538692 +0000 UTC m=+1.237315826" watchObservedRunningTime="2026-04-24 23:32:58.64869398 +0000 UTC m=+1.238471114"
Apr 24 23:33:02.981064 kubelet[2774]: I0424 23:33:02.981012 2774 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 24 23:33:02.982344 containerd[1609]: time="2026-04-24T23:33:02.982248616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 24 23:33:02.982697 kubelet[2774]: I0424 23:33:02.982570 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 24 23:33:04.081936 kubelet[2774]: I0424 23:33:04.081846 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8005fd69-b9e2-4a9b-bcdc-e82c14560df5-kube-proxy\") pod \"kube-proxy-q66sv\" (UID: \"8005fd69-b9e2-4a9b-bcdc-e82c14560df5\") " pod="kube-system/kube-proxy-q66sv"
Apr 24 23:33:04.082667 kubelet[2774]: I0424 23:33:04.081998 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8005fd69-b9e2-4a9b-bcdc-e82c14560df5-xtables-lock\") pod \"kube-proxy-q66sv\" (UID: \"8005fd69-b9e2-4a9b-bcdc-e82c14560df5\") " pod="kube-system/kube-proxy-q66sv"
Apr 24 23:33:04.082667 kubelet[2774]: I0424 23:33:04.082061 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8005fd69-b9e2-4a9b-bcdc-e82c14560df5-lib-modules\") pod \"kube-proxy-q66sv\" (UID: \"8005fd69-b9e2-4a9b-bcdc-e82c14560df5\") " pod="kube-system/kube-proxy-q66sv"
Apr 24 23:33:04.082667 kubelet[2774]: I0424 23:33:04.082146 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gt8s\" (UniqueName: \"kubernetes.io/projected/8005fd69-b9e2-4a9b-bcdc-e82c14560df5-kube-api-access-9gt8s\") pod \"kube-proxy-q66sv\" (UID: \"8005fd69-b9e2-4a9b-bcdc-e82c14560df5\") " pod="kube-system/kube-proxy-q66sv"
Apr 24 23:33:04.282658 kubelet[2774]: I0424 23:33:04.282543 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/853c2534-8921-45b8-b797-cd586230316a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-5h8wz\" (UID: \"853c2534-8921-45b8-b797-cd586230316a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-5h8wz"
Apr 24 23:33:04.282658 kubelet[2774]: I0424 23:33:04.282616 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7mmd\" (UniqueName: \"kubernetes.io/projected/853c2534-8921-45b8-b797-cd586230316a-kube-api-access-b7mmd\") pod \"tigera-operator-6bf85f8dd-5h8wz\" (UID: \"853c2534-8921-45b8-b797-cd586230316a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-5h8wz"
Apr 24 23:33:04.317091 containerd[1609]: time="2026-04-24T23:33:04.316959679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q66sv,Uid:8005fd69-b9e2-4a9b-bcdc-e82c14560df5,Namespace:kube-system,Attempt:0,}"
Apr 24 23:33:04.349211 containerd[1609]: time="2026-04-24T23:33:04.348175487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:33:04.349211 containerd[1609]: time="2026-04-24T23:33:04.348249368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:33:04.349211 containerd[1609]: time="2026-04-24T23:33:04.348265888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:04.349211 containerd[1609]: time="2026-04-24T23:33:04.348361288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:04.393378 containerd[1609]: time="2026-04-24T23:33:04.392977249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q66sv,Uid:8005fd69-b9e2-4a9b-bcdc-e82c14560df5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9087a884b13683dd34f99fff9642143ab171110726d5aa5c80749ee70cac84f\""
Apr 24 23:33:04.404514 containerd[1609]: time="2026-04-24T23:33:04.404465231Z" level=info msg="CreateContainer within sandbox \"a9087a884b13683dd34f99fff9642143ab171110726d5aa5c80749ee70cac84f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 24 23:33:04.421225 containerd[1609]: time="2026-04-24T23:33:04.421161440Z" level=info msg="CreateContainer within sandbox \"a9087a884b13683dd34f99fff9642143ab171110726d5aa5c80749ee70cac84f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50510a188069146d2a9eba4db865d851990ef6b4f780919bdedb61468d4b3ffc\""
Apr 24 23:33:04.423147 containerd[1609]: time="2026-04-24T23:33:04.421981045Z" level=info msg="StartContainer for \"50510a188069146d2a9eba4db865d851990ef6b4f780919bdedb61468d4b3ffc\""
Apr 24 23:33:04.483687 containerd[1609]: time="2026-04-24T23:33:04.482610731Z" level=info msg="StartContainer for \"50510a188069146d2a9eba4db865d851990ef6b4f780919bdedb61468d4b3ffc\" returns successfully"
Apr 24 23:33:04.488266 containerd[1609]: time="2026-04-24T23:33:04.487387637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-5h8wz,Uid:853c2534-8921-45b8-b797-cd586230316a,Namespace:tigera-operator,Attempt:0,}"
Apr 24 23:33:04.519761 containerd[1609]: time="2026-04-24T23:33:04.519007928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:33:04.519761 containerd[1609]: time="2026-04-24T23:33:04.519088848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:33:04.519761 containerd[1609]: time="2026-04-24T23:33:04.519100368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:04.519761 containerd[1609]: time="2026-04-24T23:33:04.519197329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:04.577258 containerd[1609]: time="2026-04-24T23:33:04.577123761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-5h8wz,Uid:853c2534-8921-45b8-b797-cd586230316a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c918c8c146730c4ed311c83927db4faeca26137b42c545557c41d84a737c2f41\""
Apr 24 23:33:04.580617 containerd[1609]: time="2026-04-24T23:33:04.579217692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 24 23:33:04.617513 kubelet[2774]: I0424 23:33:04.616638 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q66sv" podStartSLOduration=1.616609773 podStartE2EDuration="1.616609773s" podCreationTimestamp="2026-04-24 23:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:33:04.616518533 +0000 UTC m=+7.206295667" watchObservedRunningTime="2026-04-24 23:33:04.616609773 +0000 UTC m=+7.206386907"
Apr 24 23:33:07.169576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744737731.mount: Deactivated successfully.
Apr 24 23:33:11.438510 containerd[1609]: time="2026-04-24T23:33:11.438428600Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:11.439909 containerd[1609]: time="2026-04-24T23:33:11.439863246Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 24 23:33:11.442302 containerd[1609]: time="2026-04-24T23:33:11.441200572Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:11.444630 containerd[1609]: time="2026-04-24T23:33:11.444585346Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:11.445415 containerd[1609]: time="2026-04-24T23:33:11.445368870Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 6.866101938s"
Apr 24 23:33:11.445415 containerd[1609]: time="2026-04-24T23:33:11.445414110Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 24 23:33:11.451792 containerd[1609]: time="2026-04-24T23:33:11.451731098Z" level=info msg="CreateContainer within sandbox \"c918c8c146730c4ed311c83927db4faeca26137b42c545557c41d84a737c2f41\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 24 23:33:11.472937 containerd[1609]: time="2026-04-24T23:33:11.471046862Z" level=info msg="CreateContainer within sandbox \"c918c8c146730c4ed311c83927db4faeca26137b42c545557c41d84a737c2f41\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea\""
Apr 24 23:33:11.472937 containerd[1609]: time="2026-04-24T23:33:11.471867506Z" level=info msg="StartContainer for \"8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea\""
Apr 24 23:33:11.503474 systemd[1]: run-containerd-runc-k8s.io-8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea-runc.zSy7QY.mount: Deactivated successfully.
Apr 24 23:33:11.533176 containerd[1609]: time="2026-04-24T23:33:11.533117573Z" level=info msg="StartContainer for \"8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea\" returns successfully"
Apr 24 23:33:17.919854 sudo[1872]: pam_unix(sudo:session): session closed for user root
Apr 24 23:33:17.938932 sshd[1868]: pam_unix(sshd:session): session closed for user core
Apr 24 23:33:17.948906 systemd[1]: sshd@6-91.99.220.32:22-50.85.169.122:53384.service: Deactivated successfully.
Apr 24 23:33:17.956212 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 23:33:17.958372 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit.
Apr 24 23:33:17.963758 systemd-logind[1569]: Removed session 7.
Apr 24 23:33:23.659635 kubelet[2774]: I0424 23:33:23.657716 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-5h8wz" podStartSLOduration=12.789696394 podStartE2EDuration="19.6576979s" podCreationTimestamp="2026-04-24 23:33:04 +0000 UTC" firstStartedPulling="2026-04-24 23:33:04.578414568 +0000 UTC m=+7.168191702" lastFinishedPulling="2026-04-24 23:33:11.446416074 +0000 UTC m=+14.036193208" observedRunningTime="2026-04-24 23:33:11.636007903 +0000 UTC m=+14.225785077" watchObservedRunningTime="2026-04-24 23:33:23.6576979 +0000 UTC m=+26.247475034"
Apr 24 23:33:23.711071 kubelet[2774]: I0424 23:33:23.711023 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvzt\" (UniqueName: \"kubernetes.io/projected/2eee9e5e-2e53-402f-a0a0-e11e99177c22-kube-api-access-rlvzt\") pod \"calico-typha-88b6599cc-sssp8\" (UID: \"2eee9e5e-2e53-402f-a0a0-e11e99177c22\") " pod="calico-system/calico-typha-88b6599cc-sssp8"
Apr 24 23:33:23.711375 kubelet[2774]: I0424 23:33:23.711273 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2eee9e5e-2e53-402f-a0a0-e11e99177c22-typha-certs\") pod \"calico-typha-88b6599cc-sssp8\" (UID: \"2eee9e5e-2e53-402f-a0a0-e11e99177c22\") " pod="calico-system/calico-typha-88b6599cc-sssp8"
Apr 24 23:33:23.711375 kubelet[2774]: I0424 23:33:23.711302 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eee9e5e-2e53-402f-a0a0-e11e99177c22-tigera-ca-bundle\") pod \"calico-typha-88b6599cc-sssp8\" (UID: \"2eee9e5e-2e53-402f-a0a0-e11e99177c22\") " pod="calico-system/calico-typha-88b6599cc-sssp8"
Apr 24 23:33:23.813699 kubelet[2774]: I0424 23:33:23.811640 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-flexvol-driver-host\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.813699 kubelet[2774]: I0424 23:33:23.811704 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-nodeproc\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.813699 kubelet[2774]: I0424 23:33:23.811731 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/343420f1-570c-4679-9454-8447ffc2c5ec-tigera-ca-bundle\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.813699 kubelet[2774]: I0424 23:33:23.811752 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-var-run-calico\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.813699 kubelet[2774]: I0424 23:33:23.811800 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-bpffs\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814038 kubelet[2774]: I0424 23:33:23.811822 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-sys-fs\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814038 kubelet[2774]: I0424 23:33:23.811861 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/343420f1-570c-4679-9454-8447ffc2c5ec-node-certs\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814038 kubelet[2774]: I0424 23:33:23.811878 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-lib-modules\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814038 kubelet[2774]: I0424 23:33:23.811896 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-var-lib-calico\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814038 kubelet[2774]: I0424 23:33:23.811915 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-policysync\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814175 kubelet[2774]: I0424 23:33:23.811934 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-cni-log-dir\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814175 kubelet[2774]: I0424 23:33:23.811954 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-cni-net-dir\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814175 kubelet[2774]: I0424 23:33:23.811974 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-xtables-lock\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814175 kubelet[2774]: I0424 23:33:23.811993 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/343420f1-570c-4679-9454-8447ffc2c5ec-cni-bin-dir\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.814175 kubelet[2774]: I0424 23:33:23.812015 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlw4\" (UniqueName: \"kubernetes.io/projected/343420f1-570c-4679-9454-8447ffc2c5ec-kube-api-access-zxlw4\") pod \"calico-node-sbqhn\" (UID: \"343420f1-570c-4679-9454-8447ffc2c5ec\") " pod="calico-system/calico-node-sbqhn"
Apr 24 23:33:23.899608 kubelet[2774]: E0424 23:33:23.897645 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70"
Apr 24 23:33:23.913293 kubelet[2774]: I0424 23:33:23.913150 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0709c95b-fcee-45d7-b89b-c07c9daffd70-socket-dir\") pod \"csi-node-driver-zdz8r\" (UID: \"0709c95b-fcee-45d7-b89b-c07c9daffd70\") " pod="calico-system/csi-node-driver-zdz8r"
Apr 24 23:33:23.913293 kubelet[2774]: I0424 23:33:23.913199 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0709c95b-fcee-45d7-b89b-c07c9daffd70-varrun\") pod \"csi-node-driver-zdz8r\" (UID: \"0709c95b-fcee-45d7-b89b-c07c9daffd70\") " pod="calico-system/csi-node-driver-zdz8r"
Apr 24 23:33:23.913293 kubelet[2774]: I0424 23:33:23.913260 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0709c95b-fcee-45d7-b89b-c07c9daffd70-kubelet-dir\") pod \"csi-node-driver-zdz8r\" (UID: \"0709c95b-fcee-45d7-b89b-c07c9daffd70\") " pod="calico-system/csi-node-driver-zdz8r"
Apr 24 23:33:23.913472 kubelet[2774]: I0424 23:33:23.913349 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0709c95b-fcee-45d7-b89b-c07c9daffd70-registration-dir\") pod \"csi-node-driver-zdz8r\" (UID: \"0709c95b-fcee-45d7-b89b-c07c9daffd70\") " pod="calico-system/csi-node-driver-zdz8r"
Apr 24 23:33:23.913472 kubelet[2774]: I0424 23:33:23.913365 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22w7n\" (UniqueName: \"kubernetes.io/projected/0709c95b-fcee-45d7-b89b-c07c9daffd70-kube-api-access-22w7n\") pod \"csi-node-driver-zdz8r\" (UID: \"0709c95b-fcee-45d7-b89b-c07c9daffd70\") " pod="calico-system/csi-node-driver-zdz8r"
Apr 24 23:33:23.921945 kubelet[2774]: E0424 23:33:23.921865 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:23.921945 kubelet[2774]: W0424 23:33:23.921901 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:23.921945 kubelet[2774]: E0424 23:33:23.921947 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:23.926600 kubelet[2774]: E0424 23:33:23.924096 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:23.926600 kubelet[2774]: W0424 23:33:23.924126 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:23.926600 kubelet[2774]: E0424 23:33:23.924155 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:33:23.926600 kubelet[2774]: E0424 23:33:23.926403 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.926600 kubelet[2774]: W0424 23:33:23.926431 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.926600 kubelet[2774]: E0424 23:33:23.926548 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.927556 kubelet[2774]: E0424 23:33:23.927526 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.927978 kubelet[2774]: W0424 23:33:23.927940 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.928104 kubelet[2774]: E0424 23:33:23.927980 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.930789 kubelet[2774]: E0424 23:33:23.929722 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.930789 kubelet[2774]: W0424 23:33:23.929753 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.930789 kubelet[2774]: E0424 23:33:23.929853 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.930789 kubelet[2774]: E0424 23:33:23.930558 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.930789 kubelet[2774]: W0424 23:33:23.930572 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.930789 kubelet[2774]: E0424 23:33:23.930724 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.933024 kubelet[2774]: E0424 23:33:23.932365 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.933024 kubelet[2774]: W0424 23:33:23.932391 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.933024 kubelet[2774]: E0424 23:33:23.932590 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.934697 kubelet[2774]: E0424 23:33:23.933568 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.934697 kubelet[2774]: W0424 23:33:23.933591 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.934697 kubelet[2774]: E0424 23:33:23.933609 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.934941 kubelet[2774]: E0424 23:33:23.934765 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.934941 kubelet[2774]: W0424 23:33:23.934835 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.934941 kubelet[2774]: E0424 23:33:23.934851 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.937752 kubelet[2774]: E0424 23:33:23.936470 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.937752 kubelet[2774]: W0424 23:33:23.936498 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.937752 kubelet[2774]: E0424 23:33:23.936517 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.937752 kubelet[2774]: E0424 23:33:23.936789 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.937752 kubelet[2774]: W0424 23:33:23.936799 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.937752 kubelet[2774]: E0424 23:33:23.936812 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.938051 kubelet[2774]: E0424 23:33:23.937878 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.938051 kubelet[2774]: W0424 23:33:23.937891 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.938051 kubelet[2774]: E0424 23:33:23.937905 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.939885 kubelet[2774]: E0424 23:33:23.939074 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.939885 kubelet[2774]: W0424 23:33:23.939088 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.939885 kubelet[2774]: E0424 23:33:23.939101 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.941762 kubelet[2774]: E0424 23:33:23.941733 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.941762 kubelet[2774]: W0424 23:33:23.941754 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.941898 kubelet[2774]: E0424 23:33:23.941779 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:23.959239 kubelet[2774]: E0424 23:33:23.959006 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:23.959239 kubelet[2774]: W0424 23:33:23.959037 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:23.959239 kubelet[2774]: E0424 23:33:23.959064 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:23.973374 containerd[1609]: time="2026-04-24T23:33:23.972793734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-88b6599cc-sssp8,Uid:2eee9e5e-2e53-402f-a0a0-e11e99177c22,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.014982 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.015993 kubelet[2774]: W0424 23:33:24.015010 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.015047 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.015435 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.015993 kubelet[2774]: W0424 23:33:24.015451 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.015465 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.015748 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.015993 kubelet[2774]: W0424 23:33:24.015759 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.015993 kubelet[2774]: E0424 23:33:24.015862 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.017959 kubelet[2774]: E0424 23:33:24.017098 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.017959 kubelet[2774]: W0424 23:33:24.017119 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.017959 kubelet[2774]: E0424 23:33:24.017134 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.018649 kubelet[2774]: E0424 23:33:24.018293 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.018649 kubelet[2774]: W0424 23:33:24.018310 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.018649 kubelet[2774]: E0424 23:33:24.018326 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.019378 kubelet[2774]: E0424 23:33:24.019235 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.019378 kubelet[2774]: W0424 23:33:24.019251 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.019378 kubelet[2774]: E0424 23:33:24.019268 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.020271 kubelet[2774]: E0424 23:33:24.020252 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.020601 kubelet[2774]: W0424 23:33:24.020365 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.020601 kubelet[2774]: E0424 23:33:24.020387 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.021200 kubelet[2774]: E0424 23:33:24.021074 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.021200 kubelet[2774]: W0424 23:33:24.021089 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.021200 kubelet[2774]: E0424 23:33:24.021104 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.021994 kubelet[2774]: E0424 23:33:24.021749 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.021994 kubelet[2774]: W0424 23:33:24.021764 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.021994 kubelet[2774]: E0424 23:33:24.021828 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.022372 kubelet[2774]: E0424 23:33:24.022348 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.022878 containerd[1609]: time="2026-04-24T23:33:24.022610903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:24.022878 containerd[1609]: time="2026-04-24T23:33:24.022682583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:24.022988 kubelet[2774]: W0424 23:33:24.022449 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.022988 kubelet[2774]: E0424 23:33:24.022740 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.023267 containerd[1609]: time="2026-04-24T23:33:24.022694983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:24.023321 kubelet[2774]: E0424 23:33:24.023192 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.023321 kubelet[2774]: W0424 23:33:24.023203 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.023321 kubelet[2774]: E0424 23:33:24.023217 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.023790 containerd[1609]: time="2026-04-24T23:33:24.023590266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:24.023986 kubelet[2774]: E0424 23:33:24.023861 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.023986 kubelet[2774]: W0424 23:33:24.023874 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.023986 kubelet[2774]: E0424 23:33:24.023887 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.024604 kubelet[2774]: E0424 23:33:24.024485 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.024604 kubelet[2774]: W0424 23:33:24.024502 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.024604 kubelet[2774]: E0424 23:33:24.024516 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.025343 kubelet[2774]: E0424 23:33:24.025181 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.025343 kubelet[2774]: W0424 23:33:24.025196 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.025343 kubelet[2774]: E0424 23:33:24.025208 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.025877 kubelet[2774]: E0424 23:33:24.025709 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.025877 kubelet[2774]: W0424 23:33:24.025725 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.025877 kubelet[2774]: E0424 23:33:24.025736 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.026407 kubelet[2774]: E0424 23:33:24.026254 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.026407 kubelet[2774]: W0424 23:33:24.026267 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.026407 kubelet[2774]: E0424 23:33:24.026279 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.026942 kubelet[2774]: E0424 23:33:24.026784 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.026942 kubelet[2774]: W0424 23:33:24.026798 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.026942 kubelet[2774]: E0424 23:33:24.026809 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.027557 kubelet[2774]: E0424 23:33:24.027337 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.027557 kubelet[2774]: W0424 23:33:24.027350 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.027557 kubelet[2774]: E0424 23:33:24.027362 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.027974 kubelet[2774]: E0424 23:33:24.027853 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.027974 kubelet[2774]: W0424 23:33:24.027868 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.027974 kubelet[2774]: E0424 23:33:24.027880 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.028455 kubelet[2774]: E0424 23:33:24.028321 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.028455 kubelet[2774]: W0424 23:33:24.028348 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.028455 kubelet[2774]: E0424 23:33:24.028362 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.028830 kubelet[2774]: E0424 23:33:24.028733 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.028830 kubelet[2774]: W0424 23:33:24.028759 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.028830 kubelet[2774]: E0424 23:33:24.028794 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.029335 kubelet[2774]: E0424 23:33:24.029211 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.029335 kubelet[2774]: W0424 23:33:24.029224 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.029335 kubelet[2774]: E0424 23:33:24.029241 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.029749 kubelet[2774]: E0424 23:33:24.029576 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.029749 kubelet[2774]: W0424 23:33:24.029588 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.029749 kubelet[2774]: E0424 23:33:24.029600 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:24.030441 kubelet[2774]: E0424 23:33:24.030261 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.030441 kubelet[2774]: W0424 23:33:24.030275 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.030441 kubelet[2774]: E0424 23:33:24.030294 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:24.030955 kubelet[2774]: E0424 23:33:24.030882 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:24.030955 kubelet[2774]: W0424 23:33:24.030899 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:24.030955 kubelet[2774]: E0424 23:33:24.030918 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:33:24.048167 kubelet[2774]: E0424 23:33:24.048106 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:24.048167 kubelet[2774]: W0424 23:33:24.048132 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:24.048593 kubelet[2774]: E0424 23:33:24.048379 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:24.079264 containerd[1609]: time="2026-04-24T23:33:24.079204573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-88b6599cc-sssp8,Uid:2eee9e5e-2e53-402f-a0a0-e11e99177c22,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ae8840a01092da961952f3c65b0d5299a4e5937b7b0027b0ca7a2b7da4ccd41\""
Apr 24 23:33:24.085955 containerd[1609]: time="2026-04-24T23:33:24.085785555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 24 23:33:24.092037 containerd[1609]: time="2026-04-24T23:33:24.091977216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sbqhn,Uid:343420f1-570c-4679-9454-8447ffc2c5ec,Namespace:calico-system,Attempt:0,}"
Apr 24 23:33:24.127269 containerd[1609]: time="2026-04-24T23:33:24.126879773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:33:24.127269 containerd[1609]: time="2026-04-24T23:33:24.126941373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:33:24.127269 containerd[1609]: time="2026-04-24T23:33:24.126964413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:24.127269 containerd[1609]: time="2026-04-24T23:33:24.127123294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:33:24.162744 containerd[1609]: time="2026-04-24T23:33:24.162692853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sbqhn,Uid:343420f1-570c-4679-9454-8447ffc2c5ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\""
Apr 24 23:33:25.566256 kubelet[2774]: E0424 23:33:25.565585 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70"
Apr 24 23:33:25.603570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102763119.mount: Deactivated successfully.
Apr 24 23:33:26.230325 containerd[1609]: time="2026-04-24T23:33:26.230266525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Apr 24 23:33:26.231896 containerd[1609]: time="2026-04-24T23:33:26.231175128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:26.233203 containerd[1609]: time="2026-04-24T23:33:26.233164734Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:26.234081 containerd[1609]: time="2026-04-24T23:33:26.234048577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:26.235697 containerd[1609]: time="2026-04-24T23:33:26.235631302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.149795907s"
Apr 24 23:33:26.235839 containerd[1609]: time="2026-04-24T23:33:26.235820023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Apr 24 23:33:26.237099 containerd[1609]: time="2026-04-24T23:33:26.237072147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 24 23:33:26.258251 containerd[1609]: time="2026-04-24T23:33:26.258145656Z" level=info msg="CreateContainer within sandbox \"0ae8840a01092da961952f3c65b0d5299a4e5937b7b0027b0ca7a2b7da4ccd41\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 24 23:33:26.274590 containerd[1609]: time="2026-04-24T23:33:26.274512669Z" level=info msg="CreateContainer within sandbox \"0ae8840a01092da961952f3c65b0d5299a4e5937b7b0027b0ca7a2b7da4ccd41\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00eed8c6f8fd355f298c63185bd738d9d21055391c34638448183a8098800777\""
Apr 24 23:33:26.276127 containerd[1609]: time="2026-04-24T23:33:26.275847714Z" level=info msg="StartContainer for \"00eed8c6f8fd355f298c63185bd738d9d21055391c34638448183a8098800777\""
Apr 24 23:33:26.353908 containerd[1609]: time="2026-04-24T23:33:26.353823608Z" level=info msg="StartContainer for \"00eed8c6f8fd355f298c63185bd738d9d21055391c34638448183a8098800777\" returns successfully"
Apr 24 23:33:26.725692 kubelet[2774]: E0424 23:33:26.725634 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.726306 kubelet[2774]: W0424 23:33:26.725763 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.726306 kubelet[2774]: E0424 23:33:26.725795 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.728055 kubelet[2774]: E0424 23:33:26.727829 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.728055 kubelet[2774]: W0424 23:33:26.727878 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.728055 kubelet[2774]: E0424 23:33:26.727955 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.728936 kubelet[2774]: E0424 23:33:26.728910 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.728936 kubelet[2774]: W0424 23:33:26.728939 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.729052 kubelet[2774]: E0424 23:33:26.728956 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.731003 kubelet[2774]: E0424 23:33:26.730973 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.731003 kubelet[2774]: W0424 23:33:26.730997 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.731134 kubelet[2774]: E0424 23:33:26.731023 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.732846 kubelet[2774]: E0424 23:33:26.732802 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.732846 kubelet[2774]: W0424 23:33:26.732836 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.732846 kubelet[2774]: E0424 23:33:26.732856 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.734221 kubelet[2774]: E0424 23:33:26.734182 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.734376 kubelet[2774]: W0424 23:33:26.734212 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.734430 kubelet[2774]: E0424 23:33:26.734384 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.735429 kubelet[2774]: E0424 23:33:26.735378 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.736231 kubelet[2774]: W0424 23:33:26.736206 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.736292 kubelet[2774]: E0424 23:33:26.736234 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.736531 kubelet[2774]: E0424 23:33:26.736516 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.736531 kubelet[2774]: W0424 23:33:26.736528 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.736596 kubelet[2774]: E0424 23:33:26.736537 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.737315 kubelet[2774]: E0424 23:33:26.737294 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.737315 kubelet[2774]: W0424 23:33:26.737312 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.737409 kubelet[2774]: E0424 23:33:26.737326 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.737805 kubelet[2774]: E0424 23:33:26.737785 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.737805 kubelet[2774]: W0424 23:33:26.737800 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.738018 kubelet[2774]: E0424 23:33:26.737815 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.738605 kubelet[2774]: E0424 23:33:26.738584 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.738605 kubelet[2774]: W0424 23:33:26.738601 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.739810 kubelet[2774]: E0424 23:33:26.738781 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.740112 kubelet[2774]: E0424 23:33:26.740091 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.740175 kubelet[2774]: W0424 23:33:26.740110 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.740175 kubelet[2774]: E0424 23:33:26.740127 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.740434 kubelet[2774]: E0424 23:33:26.740409 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.740434 kubelet[2774]: W0424 23:33:26.740429 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.740510 kubelet[2774]: E0424 23:33:26.740441 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.741144 kubelet[2774]: E0424 23:33:26.740953 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.741144 kubelet[2774]: W0424 23:33:26.740966 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.741260 kubelet[2774]: E0424 23:33:26.741189 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.744590 kubelet[2774]: E0424 23:33:26.744546 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.744590 kubelet[2774]: W0424 23:33:26.744577 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.744590 kubelet[2774]: E0424 23:33:26.744600 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.746535 kubelet[2774]: E0424 23:33:26.746352 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.746751 kubelet[2774]: W0424 23:33:26.746387 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.746792 kubelet[2774]: E0424 23:33:26.746768 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.749206 kubelet[2774]: E0424 23:33:26.749168 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.749206 kubelet[2774]: W0424 23:33:26.749195 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.749459 kubelet[2774]: E0424 23:33:26.749217 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.749558 kubelet[2774]: E0424 23:33:26.749542 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.749558 kubelet[2774]: W0424 23:33:26.749557 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.749627 kubelet[2774]: E0424 23:33:26.749569 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.749878 kubelet[2774]: E0424 23:33:26.749861 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.749878 kubelet[2774]: W0424 23:33:26.749874 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.749878 kubelet[2774]: E0424 23:33:26.749885 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.750421 kubelet[2774]: E0424 23:33:26.750403 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.750421 kubelet[2774]: W0424 23:33:26.750418 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.750501 kubelet[2774]: E0424 23:33:26.750429 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.751061 kubelet[2774]: E0424 23:33:26.751041 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.751061 kubelet[2774]: W0424 23:33:26.751058 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.751141 kubelet[2774]: E0424 23:33:26.751069 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.751998 kubelet[2774]: E0424 23:33:26.751978 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.751998 kubelet[2774]: W0424 23:33:26.751995 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.752108 kubelet[2774]: E0424 23:33:26.752009 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.754181 kubelet[2774]: E0424 23:33:26.754141 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.754181 kubelet[2774]: W0424 23:33:26.754172 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.754311 kubelet[2774]: E0424 23:33:26.754190 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.754423 kubelet[2774]: E0424 23:33:26.754408 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.754469 kubelet[2774]: W0424 23:33:26.754424 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.754469 kubelet[2774]: E0424 23:33:26.754434 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.754618 kubelet[2774]: E0424 23:33:26.754599 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.754618 kubelet[2774]: W0424 23:33:26.754617 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.754698 kubelet[2774]: E0424 23:33:26.754626 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.756200 kubelet[2774]: E0424 23:33:26.756169 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.756200 kubelet[2774]: W0424 23:33:26.756190 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.756360 kubelet[2774]: E0424 23:33:26.756215 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.757849 kubelet[2774]: E0424 23:33:26.756428 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.757849 kubelet[2774]: W0424 23:33:26.756442 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.757849 kubelet[2774]: E0424 23:33:26.756454 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.757849 kubelet[2774]: E0424 23:33:26.756590 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.757849 kubelet[2774]: W0424 23:33:26.756596 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.757849 kubelet[2774]: E0424 23:33:26.756603 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.760655 kubelet[2774]: E0424 23:33:26.760602 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.760655 kubelet[2774]: W0424 23:33:26.760640 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.760655 kubelet[2774]: E0424 23:33:26.760665 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:26.761491 kubelet[2774]: E0424 23:33:26.761247 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.761491 kubelet[2774]: W0424 23:33:26.761263 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.761491 kubelet[2774]: E0424 23:33:26.761277 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.762936 kubelet[2774]: E0424 23:33:26.762784 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.762936 kubelet[2774]: W0424 23:33:26.762808 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.762936 kubelet[2774]: E0424 23:33:26.762825 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.763487 kubelet[2774]: E0424 23:33:26.763390 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.765394 kubelet[2774]: W0424 23:33:26.763922 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.765394 kubelet[2774]: E0424 23:33:26.763957 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:26.766486 kubelet[2774]: E0424 23:33:26.766464 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:26.768171 kubelet[2774]: W0424 23:33:26.767115 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:26.768171 kubelet[2774]: E0424 23:33:26.767157 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:27.559381 kubelet[2774]: E0424 23:33:27.559170 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70"
Apr 24 23:33:27.670257 kubelet[2774]: I0424 23:33:27.669969 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:33:27.752160 kubelet[2774]: E0424 23:33:27.751805 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.752160 kubelet[2774]: W0424 23:33:27.751836 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.752160 kubelet[2774]: E0424 23:33:27.751864 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.753180 kubelet[2774]: E0424 23:33:27.752299 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.753180 kubelet[2774]: W0424 23:33:27.752310 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.753180 kubelet[2774]: E0424 23:33:27.752321 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.755406 kubelet[2774]: E0424 23:33:27.755379 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.755406 kubelet[2774]: W0424 23:33:27.755403 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.755559 kubelet[2774]: E0424 23:33:27.755425 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.755966 kubelet[2774]: E0424 23:33:27.755950 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.756036 kubelet[2774]: W0424 23:33:27.755967 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.756036 kubelet[2774]: E0424 23:33:27.755979 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:33:27.756392 kubelet[2774]: E0424 23:33:27.756371 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.756392 kubelet[2774]: W0424 23:33:27.756389 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.756493 kubelet[2774]: E0424 23:33:27.756400 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.756589 kubelet[2774]: E0424 23:33:27.756567 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.756589 kubelet[2774]: W0424 23:33:27.756579 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.756589 kubelet[2774]: E0424 23:33:27.756588 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.757202 kubelet[2774]: E0424 23:33:27.757179 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.757594 kubelet[2774]: W0424 23:33:27.757217 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.757594 kubelet[2774]: E0424 23:33:27.757230 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:33:27.757885 kubelet[2774]: E0424 23:33:27.757685 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:33:27.757885 kubelet[2774]: W0424 23:33:27.757875 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:33:27.758157 kubelet[2774]: E0424 23:33:27.757889 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:33:27.758373 kubelet[2774]: E0424 23:33:27.758343 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.758373 kubelet[2774]: W0424 23:33:27.758359 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.758373 kubelet[2774]: E0424 23:33:27.758370 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.758919 kubelet[2774]: E0424 23:33:27.758901 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.759009 kubelet[2774]: W0424 23:33:27.758919 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.759009 kubelet[2774]: E0424 23:33:27.758932 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.759431 kubelet[2774]: E0424 23:33:27.759413 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.759431 kubelet[2774]: W0424 23:33:27.759431 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.759524 kubelet[2774]: E0424 23:33:27.759443 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.759892 kubelet[2774]: E0424 23:33:27.759808 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.759892 kubelet[2774]: W0424 23:33:27.759893 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.760071 kubelet[2774]: E0424 23:33:27.759905 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.760401 kubelet[2774]: E0424 23:33:27.760387 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.760401 kubelet[2774]: W0424 23:33:27.760401 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.760657 kubelet[2774]: E0424 23:33:27.760412 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.760915 kubelet[2774]: E0424 23:33:27.760893 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.760915 kubelet[2774]: W0424 23:33:27.760910 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.761024 kubelet[2774]: E0424 23:33:27.760923 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.761604 kubelet[2774]: E0424 23:33:27.761579 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.761604 kubelet[2774]: W0424 23:33:27.761603 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.761793 kubelet[2774]: E0424 23:33:27.761616 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.762271 kubelet[2774]: E0424 23:33:27.762243 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.762271 kubelet[2774]: W0424 23:33:27.762261 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.762271 kubelet[2774]: E0424 23:33:27.762273 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.762992 kubelet[2774]: E0424 23:33:27.762971 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.763215 kubelet[2774]: W0424 23:33:27.762991 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.763215 kubelet[2774]: E0424 23:33:27.763041 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.763642 kubelet[2774]: E0424 23:33:27.763518 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.763642 kubelet[2774]: W0424 23:33:27.763534 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.763642 kubelet[2774]: E0424 23:33:27.763552 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.763979 kubelet[2774]: E0424 23:33:27.763966 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.764118 kubelet[2774]: W0424 23:33:27.764068 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.764313 kubelet[2774]: E0424 23:33:27.764182 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.764477 kubelet[2774]: E0424 23:33:27.764452 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.764872 kubelet[2774]: W0424 23:33:27.764463 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.764872 kubelet[2774]: E0424 23:33:27.764605 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.765284 kubelet[2774]: E0424 23:33:27.765206 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.765446 kubelet[2774]: W0424 23:33:27.765425 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.765557 kubelet[2774]: E0424 23:33:27.765539 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.766235 kubelet[2774]: E0424 23:33:27.766050 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.766235 kubelet[2774]: W0424 23:33:27.766105 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.766235 kubelet[2774]: E0424 23:33:27.766119 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.766802 kubelet[2774]: E0424 23:33:27.766622 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.766802 kubelet[2774]: W0424 23:33:27.766637 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.766802 kubelet[2774]: E0424 23:33:27.766650 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.767256 kubelet[2774]: E0424 23:33:27.767121 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.767256 kubelet[2774]: W0424 23:33:27.767147 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.767256 kubelet[2774]: E0424 23:33:27.767160 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.767516 kubelet[2774]: E0424 23:33:27.767503 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.767639 kubelet[2774]: W0424 23:33:27.767625 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.767868 kubelet[2774]: E0424 23:33:27.767703 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.768032 kubelet[2774]: E0424 23:33:27.768020 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.768170 kubelet[2774]: W0424 23:33:27.768104 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.768290 kubelet[2774]: E0424 23:33:27.768221 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.768850 kubelet[2774]: E0424 23:33:27.768644 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.768850 kubelet[2774]: W0424 23:33:27.768657 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.768850 kubelet[2774]: E0424 23:33:27.768669 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.769161 kubelet[2774]: E0424 23:33:27.769147 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.769428 kubelet[2774]: W0424 23:33:27.769266 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.769428 kubelet[2774]: E0424 23:33:27.769310 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.769957 kubelet[2774]: E0424 23:33:27.769665 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.769957 kubelet[2774]: W0424 23:33:27.769810 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.769957 kubelet[2774]: E0424 23:33:27.769825 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.770843 kubelet[2774]: E0424 23:33:27.770828 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.771071 kubelet[2774]: W0424 23:33:27.770922 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.771071 kubelet[2774]: E0424 23:33:27.770940 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.771524 kubelet[2774]: E0424 23:33:27.771488 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.771524 kubelet[2774]: W0424 23:33:27.771504 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.771524 kubelet[2774]: E0424 23:33:27.771517 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.772150 kubelet[2774]: E0424 23:33:27.772130 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.772150 kubelet[2774]: W0424 23:33:27.772149 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.772238 kubelet[2774]: E0424 23:33:27.772163 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:33:27.773296 kubelet[2774]: E0424 23:33:27.773276 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:33:27.773296 kubelet[2774]: W0424 23:33:27.773294 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:33:27.773388 kubelet[2774]: E0424 23:33:27.773309 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:33:27.783439 containerd[1609]: time="2026-04-24T23:33:27.783386721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:27.785009 containerd[1609]: time="2026-04-24T23:33:27.784952846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 24 23:33:27.786208 containerd[1609]: time="2026-04-24T23:33:27.786147010Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:27.789937 containerd[1609]: time="2026-04-24T23:33:27.789828902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:27.791465 containerd[1609]: time="2026-04-24T23:33:27.791400867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.55418936s" Apr 24 23:33:27.791465 containerd[1609]: time="2026-04-24T23:33:27.791450347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 24 23:33:27.796445 containerd[1609]: time="2026-04-24T23:33:27.796402963Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:33:27.815444 containerd[1609]: time="2026-04-24T23:33:27.815307744Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a\"" Apr 24 23:33:27.819322 containerd[1609]: time="2026-04-24T23:33:27.819164636Z" level=info msg="StartContainer for \"bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a\"" Apr 24 23:33:27.861851 systemd[1]: run-containerd-runc-k8s.io-bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a-runc.uLpMHr.mount: Deactivated successfully. 
Apr 24 23:33:27.895995 containerd[1609]: time="2026-04-24T23:33:27.895945403Z" level=info msg="StartContainer for \"bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a\" returns successfully" Apr 24 23:33:28.055756 containerd[1609]: time="2026-04-24T23:33:28.055326755Z" level=info msg="shim disconnected" id=bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a namespace=k8s.io Apr 24 23:33:28.055756 containerd[1609]: time="2026-04-24T23:33:28.055402675Z" level=warning msg="cleaning up after shim disconnected" id=bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a namespace=k8s.io Apr 24 23:33:28.055756 containerd[1609]: time="2026-04-24T23:33:28.055420915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:33:28.247447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd6ca5044561908b859c8b7def66bc38c81f9650905870b0ff8aa7930e2cf65a-rootfs.mount: Deactivated successfully. Apr 24 23:33:28.677737 containerd[1609]: time="2026-04-24T23:33:28.677360934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:33:28.715669 kubelet[2774]: I0424 23:33:28.712693 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-88b6599cc-sssp8" podStartSLOduration=3.557084719 podStartE2EDuration="5.712664086s" podCreationTimestamp="2026-04-24 23:33:23 +0000 UTC" firstStartedPulling="2026-04-24 23:33:24.08138062 +0000 UTC m=+26.671157714" lastFinishedPulling="2026-04-24 23:33:26.236959947 +0000 UTC m=+28.826737081" observedRunningTime="2026-04-24 23:33:26.743047118 +0000 UTC m=+29.332824292" watchObservedRunningTime="2026-04-24 23:33:28.712664086 +0000 UTC m=+31.302441220" Apr 24 23:33:29.560591 kubelet[2774]: E0424 23:33:29.558768 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:30.289745 kubelet[2774]: I0424 23:33:30.288344 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:33:31.559784 kubelet[2774]: E0424 23:33:31.559725 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:33.561876 kubelet[2774]: E0424 23:33:33.561338 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:34.792934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014131346.mount: Deactivated successfully. 
Apr 24 23:33:34.822170 containerd[1609]: time="2026-04-24T23:33:34.821317614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:34.822753 containerd[1609]: time="2026-04-24T23:33:34.822711658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 24 23:33:34.823341 containerd[1609]: time="2026-04-24T23:33:34.823304260Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:34.828358 containerd[1609]: time="2026-04-24T23:33:34.828310875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:34.829183 containerd[1609]: time="2026-04-24T23:33:34.829145237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.151734583s" Apr 24 23:33:34.829330 containerd[1609]: time="2026-04-24T23:33:34.829312038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 24 23:33:34.835328 containerd[1609]: time="2026-04-24T23:33:34.835282616Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:33:34.858786 containerd[1609]: time="2026-04-24T23:33:34.858717726Z" level=info 
msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63\"" Apr 24 23:33:34.862217 containerd[1609]: time="2026-04-24T23:33:34.861401054Z" level=info msg="StartContainer for \"8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63\"" Apr 24 23:33:34.928181 containerd[1609]: time="2026-04-24T23:33:34.928125373Z" level=info msg="StartContainer for \"8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63\" returns successfully" Apr 24 23:33:35.225023 containerd[1609]: time="2026-04-24T23:33:35.224712895Z" level=info msg="shim disconnected" id=8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63 namespace=k8s.io Apr 24 23:33:35.225023 containerd[1609]: time="2026-04-24T23:33:35.224791655Z" level=warning msg="cleaning up after shim disconnected" id=8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63 namespace=k8s.io Apr 24 23:33:35.225023 containerd[1609]: time="2026-04-24T23:33:35.224805535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:33:35.562844 kubelet[2774]: E0424 23:33:35.560953 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:35.704971 containerd[1609]: time="2026-04-24T23:33:35.704886719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:33:35.795359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bcc5d010fbbdb43a6256d0a62f2ea16e9f224708aa5d985b132a07aaa79de63-rootfs.mount: Deactivated successfully. 
Apr 24 23:33:37.560627 kubelet[2774]: E0424 23:33:37.560544 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:39.560140 kubelet[2774]: E0424 23:33:39.559662 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdz8r" podUID="0709c95b-fcee-45d7-b89b-c07c9daffd70" Apr 24 23:33:39.697188 containerd[1609]: time="2026-04-24T23:33:39.697065064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:39.699348 containerd[1609]: time="2026-04-24T23:33:39.699219364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 24 23:33:39.700596 containerd[1609]: time="2026-04-24T23:33:39.700534783Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:39.706209 containerd[1609]: time="2026-04-24T23:33:39.706062047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:39.708111 containerd[1609]: time="2026-04-24T23:33:39.707819766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.002867447s" Apr 24 23:33:39.708111 containerd[1609]: time="2026-04-24T23:33:39.707874003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 24 23:33:39.719296 containerd[1609]: time="2026-04-24T23:33:39.719011807Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:33:39.740192 containerd[1609]: time="2026-04-24T23:33:39.739886561Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3\"" Apr 24 23:33:39.741310 containerd[1609]: time="2026-04-24T23:33:39.741029508Z" level=info msg="StartContainer for \"5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3\"" Apr 24 23:33:39.778331 systemd[1]: run-containerd-runc-k8s.io-5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3-runc.UI1R0t.mount: Deactivated successfully. 
Apr 24 23:33:39.816104 containerd[1609]: time="2026-04-24T23:33:39.815434222Z" level=info msg="StartContainer for \"5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3\" returns successfully" Apr 24 23:33:40.491715 kubelet[2774]: I0424 23:33:40.490934 2774 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:33:40.550014 containerd[1609]: time="2026-04-24T23:33:40.548921871Z" level=info msg="shim disconnected" id=5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3 namespace=k8s.io Apr 24 23:33:40.550014 containerd[1609]: time="2026-04-24T23:33:40.548981229Z" level=warning msg="cleaning up after shim disconnected" id=5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3 namespace=k8s.io Apr 24 23:33:40.550014 containerd[1609]: time="2026-04-24T23:33:40.548989628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:33:40.675483 kubelet[2774]: I0424 23:33:40.675416 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f715f72-d714-4da9-bb23-778599741117-config\") pod \"goldmane-5b85766d88-6nj4m\" (UID: \"3f715f72-d714-4da9-bb23-778599741117\") " pod="calico-system/goldmane-5b85766d88-6nj4m" Apr 24 23:33:40.675483 kubelet[2774]: I0424 23:33:40.675475 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/851f778d-8cd6-4676-b0f1-43ba69e059c5-tigera-ca-bundle\") pod \"calico-kube-controllers-699cff59bd-hvnsr\" (UID: \"851f778d-8cd6-4676-b0f1-43ba69e059c5\") " pod="calico-system/calico-kube-controllers-699cff59bd-hvnsr" Apr 24 23:33:40.675483 kubelet[2774]: I0424 23:33:40.675501 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxlt4\" (UniqueName: 
\"kubernetes.io/projected/7a6d9175-1251-46ea-98c6-feb36b7f3803-kube-api-access-qxlt4\") pod \"coredns-674b8bbfcf-v5q9h\" (UID: \"7a6d9175-1251-46ea-98c6-feb36b7f3803\") " pod="kube-system/coredns-674b8bbfcf-v5q9h" Apr 24 23:33:40.676390 kubelet[2774]: I0424 23:33:40.675526 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d475154-2c3b-4970-8d6a-d57a93475a21-calico-apiserver-certs\") pod \"calico-apiserver-85db46756d-zpn2f\" (UID: \"8d475154-2c3b-4970-8d6a-d57a93475a21\") " pod="calico-system/calico-apiserver-85db46756d-zpn2f" Apr 24 23:33:40.676390 kubelet[2774]: I0424 23:33:40.675549 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2f6f\" (UniqueName: \"kubernetes.io/projected/3f715f72-d714-4da9-bb23-778599741117-kube-api-access-k2f6f\") pod \"goldmane-5b85766d88-6nj4m\" (UID: \"3f715f72-d714-4da9-bb23-778599741117\") " pod="calico-system/goldmane-5b85766d88-6nj4m" Apr 24 23:33:40.676390 kubelet[2774]: I0424 23:33:40.675626 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6d9175-1251-46ea-98c6-feb36b7f3803-config-volume\") pod \"coredns-674b8bbfcf-v5q9h\" (UID: \"7a6d9175-1251-46ea-98c6-feb36b7f3803\") " pod="kube-system/coredns-674b8bbfcf-v5q9h" Apr 24 23:33:40.676390 kubelet[2774]: I0424 23:33:40.675667 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f715f72-d714-4da9-bb23-778599741117-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-6nj4m\" (UID: \"3f715f72-d714-4da9-bb23-778599741117\") " pod="calico-system/goldmane-5b85766d88-6nj4m" Apr 24 23:33:40.676390 kubelet[2774]: I0424 23:33:40.675725 2774 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3f715f72-d714-4da9-bb23-778599741117-goldmane-key-pair\") pod \"goldmane-5b85766d88-6nj4m\" (UID: \"3f715f72-d714-4da9-bb23-778599741117\") " pod="calico-system/goldmane-5b85766d88-6nj4m" Apr 24 23:33:40.676634 kubelet[2774]: I0424 23:33:40.675752 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qwv\" (UniqueName: \"kubernetes.io/projected/851f778d-8cd6-4676-b0f1-43ba69e059c5-kube-api-access-88qwv\") pod \"calico-kube-controllers-699cff59bd-hvnsr\" (UID: \"851f778d-8cd6-4676-b0f1-43ba69e059c5\") " pod="calico-system/calico-kube-controllers-699cff59bd-hvnsr" Apr 24 23:33:40.676634 kubelet[2774]: I0424 23:33:40.675777 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bl8\" (UniqueName: \"kubernetes.io/projected/9b8327f6-21ad-4747-ac73-b5c291efc011-kube-api-access-f7bl8\") pod \"calico-apiserver-85db46756d-tx2k6\" (UID: \"9b8327f6-21ad-4747-ac73-b5c291efc011\") " pod="calico-system/calico-apiserver-85db46756d-tx2k6" Apr 24 23:33:40.676634 kubelet[2774]: I0424 23:33:40.675798 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhz2\" (UniqueName: \"kubernetes.io/projected/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-kube-api-access-gwhz2\") pod \"whisker-7564959958-dsnpn\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") " pod="calico-system/whisker-7564959958-dsnpn" Apr 24 23:33:40.676634 kubelet[2774]: I0424 23:33:40.675825 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fee72518-fd5a-4c26-81da-aea772642d55-config-volume\") pod \"coredns-674b8bbfcf-w9lds\" (UID: \"fee72518-fd5a-4c26-81da-aea772642d55\") " 
pod="kube-system/coredns-674b8bbfcf-w9lds" Apr 24 23:33:40.676634 kubelet[2774]: I0424 23:33:40.675849 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-ca-bundle\") pod \"whisker-7564959958-dsnpn\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") " pod="calico-system/whisker-7564959958-dsnpn" Apr 24 23:33:40.676879 kubelet[2774]: I0424 23:33:40.675878 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmf9v\" (UniqueName: \"kubernetes.io/projected/8d475154-2c3b-4970-8d6a-d57a93475a21-kube-api-access-vmf9v\") pod \"calico-apiserver-85db46756d-zpn2f\" (UID: \"8d475154-2c3b-4970-8d6a-d57a93475a21\") " pod="calico-system/calico-apiserver-85db46756d-zpn2f" Apr 24 23:33:40.676879 kubelet[2774]: I0424 23:33:40.675903 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-nginx-config\") pod \"whisker-7564959958-dsnpn\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") " pod="calico-system/whisker-7564959958-dsnpn" Apr 24 23:33:40.676879 kubelet[2774]: I0424 23:33:40.675928 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b8327f6-21ad-4747-ac73-b5c291efc011-calico-apiserver-certs\") pod \"calico-apiserver-85db46756d-tx2k6\" (UID: \"9b8327f6-21ad-4747-ac73-b5c291efc011\") " pod="calico-system/calico-apiserver-85db46756d-tx2k6" Apr 24 23:33:40.676879 kubelet[2774]: I0424 23:33:40.675951 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99c9p\" (UniqueName: \"kubernetes.io/projected/fee72518-fd5a-4c26-81da-aea772642d55-kube-api-access-99c9p\") 
pod \"coredns-674b8bbfcf-w9lds\" (UID: \"fee72518-fd5a-4c26-81da-aea772642d55\") " pod="kube-system/coredns-674b8bbfcf-w9lds" Apr 24 23:33:40.676879 kubelet[2774]: I0424 23:33:40.675971 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-backend-key-pair\") pod \"whisker-7564959958-dsnpn\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") " pod="calico-system/whisker-7564959958-dsnpn" Apr 24 23:33:40.735293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba5b57c19869c107d72ac63b2c8fdf71913cd12975ca98deb731ea7e8bbfcd3-rootfs.mount: Deactivated successfully. Apr 24 23:33:40.853355 containerd[1609]: time="2026-04-24T23:33:40.850721182Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:33:40.890794 containerd[1609]: time="2026-04-24T23:33:40.890732783Z" level=info msg="CreateContainer within sandbox \"f2803a96629e2b0b94873839609d573a42f7d874ed79a67bf3f8cda123e292ce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"56715b9952f762cddf65b1503590b917e9dd9d12f12e8dddca8fb68c68f48c30\"" Apr 24 23:33:40.893712 containerd[1609]: time="2026-04-24T23:33:40.892164918Z" level=info msg="StartContainer for \"56715b9952f762cddf65b1503590b917e9dd9d12f12e8dddca8fb68c68f48c30\"" Apr 24 23:33:40.897603 containerd[1609]: time="2026-04-24T23:33:40.897559196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9lds,Uid:fee72518-fd5a-4c26-81da-aea772642d55,Namespace:kube-system,Attempt:0,}" Apr 24 23:33:40.900217 containerd[1609]: time="2026-04-24T23:33:40.900176878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-85db46756d-zpn2f,Uid:8d475154-2c3b-4970-8d6a-d57a93475a21,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:40.903008 containerd[1609]: time="2026-04-24T23:33:40.902705244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5q9h,Uid:7a6d9175-1251-46ea-98c6-feb36b7f3803,Namespace:kube-system,Attempt:0,}" Apr 24 23:33:40.906546 containerd[1609]: time="2026-04-24T23:33:40.906502034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85db46756d-tx2k6,Uid:9b8327f6-21ad-4747-ac73-b5c291efc011,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:40.924836 containerd[1609]: time="2026-04-24T23:33:40.924779772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699cff59bd-hvnsr,Uid:851f778d-8cd6-4676-b0f1-43ba69e059c5,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:40.925797 containerd[1609]: time="2026-04-24T23:33:40.925390104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7564959958-dsnpn,Uid:0c4273ae-37c8-4148-b3e5-6cdb829a1a8e,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:40.931918 containerd[1609]: time="2026-04-24T23:33:40.931513829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6nj4m,Uid:3f715f72-d714-4da9-bb23-778599741117,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:40.989735 containerd[1609]: time="2026-04-24T23:33:40.988747896Z" level=info msg="StartContainer for \"56715b9952f762cddf65b1503590b917e9dd9d12f12e8dddca8fb68c68f48c30\" returns successfully" Apr 24 23:33:41.591744 containerd[1609]: time="2026-04-24T23:33:41.591665078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdz8r,Uid:0709c95b-fcee-45d7-b89b-c07c9daffd70,Namespace:calico-system,Attempt:0,}" Apr 24 23:33:41.970743 systemd-networkd[1246]: cali4f6971dc72b: Link UP Apr 24 23:33:41.973159 systemd-networkd[1246]: cali4f6971dc72b: Gained carrier Apr 24 23:33:41.995656 kubelet[2774]: 
I0424 23:33:41.994381 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sbqhn" podStartSLOduration=3.447804816 podStartE2EDuration="18.994352178s" podCreationTimestamp="2026-04-24 23:33:23 +0000 UTC" firstStartedPulling="2026-04-24 23:33:24.165190342 +0000 UTC m=+26.754967476" lastFinishedPulling="2026-04-24 23:33:39.711737704 +0000 UTC m=+42.301514838" observedRunningTime="2026-04-24 23:33:41.844003182 +0000 UTC m=+44.433780356" watchObservedRunningTime="2026-04-24 23:33:41.994352178 +0000 UTC m=+44.584129312" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.431 [ERROR][3685] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.543 [INFO][3685] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0 coredns-674b8bbfcf- kube-system fee72518-fd5a-4c26-81da-aea772642d55 838 0 2026-04-24 23:33:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 coredns-674b8bbfcf-w9lds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f6971dc72b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.551 [INFO][3685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.754 [INFO][3797] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" HandleID="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.779 [INFO][3797] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" HandleID="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000375460), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"coredns-674b8bbfcf-w9lds", "timestamp":"2026-04-24 23:33:41.75447421 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000535340)} Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.780 [INFO][3797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.780 [INFO][3797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.780 [INFO][3797] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.804 [INFO][3797] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.880 [INFO][3797] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.897 [INFO][3797] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.905 [INFO][3797] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.918 [INFO][3797] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.918 [INFO][3797] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.924 [INFO][3797] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2 Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.933 [INFO][3797] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.942 [INFO][3797] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.65/26] block=192.168.47.64/26 handle="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.942 [INFO][3797] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.65/26] handle="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.942 [INFO][3797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.001900 containerd[1609]: 2026-04-24 23:33:41.943 [INFO][3797] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.65/26] IPv6=[] ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" HandleID="k8s-pod-network.28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.949 [INFO][3685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fee72518-fd5a-4c26-81da-aea772642d55", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"coredns-674b8bbfcf-w9lds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f6971dc72b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.950 [INFO][3685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.65/32] ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.950 [INFO][3685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f6971dc72b ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.967 [INFO][3685] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.974 [INFO][3685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fee72518-fd5a-4c26-81da-aea772642d55", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2", Pod:"coredns-674b8bbfcf-w9lds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f6971dc72b", 
MAC:"4a:2a:36:df:fe:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.004449 containerd[1609]: 2026-04-24 23:33:41.992 [INFO][3685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9lds" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--w9lds-eth0" Apr 24 23:33:42.042813 systemd-networkd[1246]: cali898661940a0: Link UP Apr 24 23:33:42.043467 systemd-networkd[1246]: cali898661940a0: Gained carrier Apr 24 23:33:42.050658 containerd[1609]: time="2026-04-24T23:33:42.042886431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.050658 containerd[1609]: time="2026-04-24T23:33:42.042945509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.050658 containerd[1609]: time="2026-04-24T23:33:42.042957108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.050658 containerd[1609]: time="2026-04-24T23:33:42.043331013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.393 [ERROR][3692] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.473 [INFO][3692] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0 coredns-674b8bbfcf- kube-system 7a6d9175-1251-46ea-98c6-feb36b7f3803 840 0 2026-04-24 23:33:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 coredns-674b8bbfcf-v5q9h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali898661940a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.473 [INFO][3692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.722 [INFO][3775] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" HandleID="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" 
Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.806 [INFO][3775] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" HandleID="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"coredns-674b8bbfcf-v5q9h", "timestamp":"2026-04-24 23:33:41.722428209 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001849a0)} Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.806 [INFO][3775] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.943 [INFO][3775] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.943 [INFO][3775] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.948 [INFO][3775] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:41.966 [INFO][3775] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.000 [INFO][3775] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.004 [INFO][3775] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.010 [INFO][3775] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.011 [INFO][3775] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.013 [INFO][3775] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.021 [INFO][3775] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3775] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.66/26] block=192.168.47.64/26 handle="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3775] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.66/26] handle="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3775] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.075748 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3775] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.66/26] IPv6=[] ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" HandleID="k8s-pod-network.eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Workload="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.038 [INFO][3692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a6d9175-1251-46ea-98c6-feb36b7f3803", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"coredns-674b8bbfcf-v5q9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali898661940a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.039 [INFO][3692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.66/32] ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.039 [INFO][3692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali898661940a0 ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.042 [INFO][3692] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.045 [INFO][3692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a6d9175-1251-46ea-98c6-feb36b7f3803", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d", Pod:"coredns-674b8bbfcf-v5q9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali898661940a0", 
MAC:"ee:92:8c:ea:2f:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.076359 containerd[1609]: 2026-04-24 23:33:42.065 [INFO][3692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5q9h" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-coredns--674b8bbfcf--v5q9h-eth0" Apr 24 23:33:42.148493 containerd[1609]: time="2026-04-24T23:33:42.140619568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.148493 containerd[1609]: time="2026-04-24T23:33:42.147883101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.148493 containerd[1609]: time="2026-04-24T23:33:42.148103091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.151332 containerd[1609]: time="2026-04-24T23:33:42.148925936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.159941 systemd-networkd[1246]: cali0cf98bb5a17: Link UP Apr 24 23:33:42.167165 systemd-networkd[1246]: cali0cf98bb5a17: Gained carrier Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.347 [ERROR][3715] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.463 [INFO][3715] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0 goldmane-5b85766d88- calico-system 3f715f72-d714-4da9-bb23-778599741117 844 0 2026-04-24 23:33:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 goldmane-5b85766d88-6nj4m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0cf98bb5a17 [] [] }} ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.463 [INFO][3715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.748 [INFO][3777] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" 
HandleID="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Workload="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.842 [INFO][3777] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" HandleID="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Workload="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000272160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"goldmane-5b85766d88-6nj4m", "timestamp":"2026-04-24 23:33:41.748230043 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f8b00)} Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:41.842 [INFO][3777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.033 [INFO][3777] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.053 [INFO][3777] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.070 [INFO][3777] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.099 [INFO][3777] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.106 [INFO][3777] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.110 [INFO][3777] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.111 [INFO][3777] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.115 [INFO][3777] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2 Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.125 [INFO][3777] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.139 [INFO][3777] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.67/26] block=192.168.47.64/26 handle="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.139 [INFO][3777] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.67/26] handle="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.139 [INFO][3777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.211137 containerd[1609]: 2026-04-24 23:33:42.139 [INFO][3777] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.67/26] IPv6=[] ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" HandleID="k8s-pod-network.9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Workload="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.145 [INFO][3715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3f715f72-d714-4da9-bb23-778599741117", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"goldmane-5b85766d88-6nj4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0cf98bb5a17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.145 [INFO][3715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.67/32] ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.145 [INFO][3715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cf98bb5a17 ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.168 [INFO][3715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.170 [INFO][3715] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3f715f72-d714-4da9-bb23-778599741117", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2", Pod:"goldmane-5b85766d88-6nj4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0cf98bb5a17", MAC:"72:db:3c:47:84:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.211829 containerd[1609]: 2026-04-24 23:33:42.200 [INFO][3715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2" Namespace="calico-system" Pod="goldmane-5b85766d88-6nj4m" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-goldmane--5b85766d88--6nj4m-eth0" Apr 24 23:33:42.217320 containerd[1609]: time="2026-04-24T23:33:42.215551112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9lds,Uid:fee72518-fd5a-4c26-81da-aea772642d55,Namespace:kube-system,Attempt:0,} returns sandbox id \"28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2\"" Apr 24 23:33:42.233301 containerd[1609]: time="2026-04-24T23:33:42.232555871Z" level=info msg="CreateContainer within sandbox \"28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:33:42.258814 containerd[1609]: time="2026-04-24T23:33:42.257812841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.258814 containerd[1609]: time="2026-04-24T23:33:42.257906317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.258814 containerd[1609]: time="2026-04-24T23:33:42.257948915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.259383 containerd[1609]: time="2026-04-24T23:33:42.259295338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.282880 systemd-networkd[1246]: cali4a426af35b3: Link UP Apr 24 23:33:42.283780 systemd-networkd[1246]: cali4a426af35b3: Gained carrier Apr 24 23:33:42.303732 containerd[1609]: time="2026-04-24T23:33:42.302993805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5q9h,Uid:7a6d9175-1251-46ea-98c6-feb36b7f3803,Namespace:kube-system,Attempt:0,} returns sandbox id \"eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d\"" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.402 [ERROR][3712] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.468 [INFO][3712] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0 calico-apiserver-85db46756d- calico-system 9b8327f6-21ad-4747-ac73-b5c291efc011 843 0 2026-04-24 23:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85db46756d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 calico-apiserver-85db46756d-tx2k6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4a426af35b3 [] [] }} ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.468 [INFO][3712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.859 [INFO][3778] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" HandleID="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.882 [INFO][3778] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" HandleID="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003afe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"calico-apiserver-85db46756d-tx2k6", "timestamp":"2026-04-24 23:33:41.859055765 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400051b1e0)} Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:41.882 [INFO][3778] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.140 [INFO][3778] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.140 [INFO][3778] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.149 [INFO][3778] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.171 [INFO][3778] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.214 [INFO][3778] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.220 [INFO][3778] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.228 [INFO][3778] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.229 [INFO][3778] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.236 [INFO][3778] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.244 [INFO][3778] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.260 [INFO][3778] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.68/26] block=192.168.47.64/26 handle="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.260 [INFO][3778] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.68/26] handle="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.260 [INFO][3778] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.320623 containerd[1609]: 2026-04-24 23:33:42.261 [INFO][3778] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.68/26] IPv6=[] ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" HandleID="k8s-pod-network.018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.275 [INFO][3712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0", GenerateName:"calico-apiserver-85db46756d-", Namespace:"calico-system", SelfLink:"", UID:"9b8327f6-21ad-4747-ac73-b5c291efc011", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"85db46756d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"calico-apiserver-85db46756d-tx2k6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a426af35b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.275 [INFO][3712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.68/32] ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.275 [INFO][3712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a426af35b3 ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.284 [INFO][3712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" 
WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.288 [INFO][3712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0", GenerateName:"calico-apiserver-85db46756d-", Namespace:"calico-system", SelfLink:"", UID:"9b8327f6-21ad-4747-ac73-b5c291efc011", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85db46756d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea", Pod:"calico-apiserver-85db46756d-tx2k6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a426af35b3", MAC:"1a:76:02:34:17:1c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.321920 containerd[1609]: 2026-04-24 23:33:42.310 [INFO][3712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea" Namespace="calico-system" Pod="calico-apiserver-85db46756d-tx2k6" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--tx2k6-eth0" Apr 24 23:33:42.331187 containerd[1609]: time="2026-04-24T23:33:42.331133612Z" level=info msg="CreateContainer within sandbox \"28471e8c9c6b7f5c6d37fe20d21bb2f66752c0714117f2087b0d4935239051f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9454fa49a2d30b53a12febb86ae0adfc21489f7cf883c9df8cda8226951472a2\"" Apr 24 23:33:42.334363 containerd[1609]: time="2026-04-24T23:33:42.334290199Z" level=info msg="CreateContainer within sandbox \"eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:33:42.341290 containerd[1609]: time="2026-04-24T23:33:42.341060272Z" level=info msg="StartContainer for \"9454fa49a2d30b53a12febb86ae0adfc21489f7cf883c9df8cda8226951472a2\"" Apr 24 23:33:42.379199 containerd[1609]: time="2026-04-24T23:33:42.377834473Z" level=info msg="CreateContainer within sandbox \"eae997b0f87e8d628dac076a6e8c4ac5d25bd1a325606fc899de1f4a599b776d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cde8c6350ac0f645f39d57109685f67b920ef897967fe9af29f455949bddcfdc\"" Apr 24 23:33:42.384119 containerd[1609]: time="2026-04-24T23:33:42.382022935Z" level=info msg="StartContainer for \"cde8c6350ac0f645f39d57109685f67b920ef897967fe9af29f455949bddcfdc\"" Apr 24 23:33:42.393433 systemd-networkd[1246]: cali6dc8a2558db: Link UP Apr 24 23:33:42.394845 systemd-networkd[1246]: cali6dc8a2558db: Gained carrier Apr 24 23:33:42.441774 containerd[1609]: 
time="2026-04-24T23:33:42.440660130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.441774 containerd[1609]: time="2026-04-24T23:33:42.440767685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.441774 containerd[1609]: time="2026-04-24T23:33:42.440795444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.441774 containerd[1609]: time="2026-04-24T23:33:42.440917039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.379 [ERROR][3745] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.469 [INFO][3745] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0 calico-kube-controllers-699cff59bd- calico-system 851f778d-8cd6-4676-b0f1-43ba69e059c5 845 0 2026-04-24 23:33:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699cff59bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 calico-kube-controllers-699cff59bd-hvnsr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6dc8a2558db [] [] }} ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" 
Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.469 [INFO][3745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.865 [INFO][3774] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" HandleID="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.889 [INFO][3774] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" HandleID="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"calico-kube-controllers-699cff59bd-hvnsr", "timestamp":"2026-04-24 23:33:41.865237135 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003fe580)} Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:41.889 [INFO][3774] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.261 [INFO][3774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.266 [INFO][3774] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.270 [INFO][3774] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.288 [INFO][3774] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.315 [INFO][3774] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.321 [INFO][3774] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.327 [INFO][3774] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.327 [INFO][3774] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.330 [INFO][3774] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2 Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.340 [INFO][3774] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 
handle="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.355 [INFO][3774] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.69/26] block=192.168.47.64/26 handle="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.355 [INFO][3774] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.69/26] handle="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.357 [INFO][3774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.446399 containerd[1609]: 2026-04-24 23:33:42.357 [INFO][3774] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.69/26] IPv6=[] ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" HandleID="k8s-pod-network.b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.367 [INFO][3745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0", GenerateName:"calico-kube-controllers-699cff59bd-", Namespace:"calico-system", SelfLink:"", UID:"851f778d-8cd6-4676-b0f1-43ba69e059c5", 
ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699cff59bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"calico-kube-controllers-699cff59bd-hvnsr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dc8a2558db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.368 [INFO][3745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.69/32] ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.370 [INFO][3745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dc8a2558db ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" 
WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.394 [INFO][3745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.402 [INFO][3745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0", GenerateName:"calico-kube-controllers-699cff59bd-", Namespace:"calico-system", SelfLink:"", UID:"851f778d-8cd6-4676-b0f1-43ba69e059c5", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699cff59bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", 
ContainerID:"b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2", Pod:"calico-kube-controllers-699cff59bd-hvnsr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dc8a2558db", MAC:"aa:1e:39:32:f7:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.447050 containerd[1609]: 2026-04-24 23:33:42.430 [INFO][3745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2" Namespace="calico-system" Pod="calico-kube-controllers-699cff59bd-hvnsr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--kube--controllers--699cff59bd--hvnsr-eth0" Apr 24 23:33:42.513305 systemd-networkd[1246]: cali42cc91cad5e: Link UP Apr 24 23:33:42.517892 containerd[1609]: time="2026-04-24T23:33:42.517729143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6nj4m,Uid:3f715f72-d714-4da9-bb23-778599741117,Namespace:calico-system,Attempt:0,} returns sandbox id \"9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2\"" Apr 24 23:33:42.518968 systemd-networkd[1246]: cali42cc91cad5e: Gained carrier Apr 24 23:33:42.531316 containerd[1609]: time="2026-04-24T23:33:42.528940027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.509 [ERROR][3702] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.631 [INFO][3702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0 calico-apiserver-85db46756d- calico-system 8d475154-2c3b-4970-8d6a-d57a93475a21 841 0 2026-04-24 23:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85db46756d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 calico-apiserver-85db46756d-zpn2f eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali42cc91cad5e [] [] }} ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.639 [INFO][3702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.896 [INFO][3820] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" HandleID="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.924 [INFO][3820] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" HandleID="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" 
Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000416010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"calico-apiserver-85db46756d-zpn2f", "timestamp":"2026-04-24 23:33:41.896204063 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e38c0)} Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:41.924 [INFO][3820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.355 [INFO][3820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.355 [INFO][3820] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.372 [INFO][3820] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.396 [INFO][3820] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.424 [INFO][3820] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.434 [INFO][3820] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.444 [INFO][3820] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.445 [INFO][3820] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.453 [INFO][3820] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.464 [INFO][3820] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3820] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.70/26] block=192.168.47.64/26 handle="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3820] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.70/26] handle="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:33:42.576591 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3820] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.70/26] IPv6=[] ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" HandleID="k8s-pod-network.5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Workload="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.496 [INFO][3702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0", GenerateName:"calico-apiserver-85db46756d-", Namespace:"calico-system", SelfLink:"", UID:"8d475154-2c3b-4970-8d6a-d57a93475a21", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85db46756d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"calico-apiserver-85db46756d-zpn2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42cc91cad5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.498 [INFO][3702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.70/32] ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.499 [INFO][3702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42cc91cad5e ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.522 [INFO][3702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.522 [INFO][3702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0", GenerateName:"calico-apiserver-85db46756d-", Namespace:"calico-system", SelfLink:"", UID:"8d475154-2c3b-4970-8d6a-d57a93475a21", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85db46756d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c", Pod:"calico-apiserver-85db46756d-zpn2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali42cc91cad5e", MAC:"32:6b:60:97:95:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.577329 containerd[1609]: 2026-04-24 23:33:42.551 [INFO][3702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c" Namespace="calico-system" Pod="calico-apiserver-85db46756d-zpn2f" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-calico--apiserver--85db46756d--zpn2f-eth0" Apr 24 23:33:42.642800 containerd[1609]: time="2026-04-24T23:33:42.642753803Z" level=info 
msg="StartContainer for \"9454fa49a2d30b53a12febb86ae0adfc21489f7cf883c9df8cda8226951472a2\" returns successfully" Apr 24 23:33:42.650498 containerd[1609]: time="2026-04-24T23:33:42.645211139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.650498 containerd[1609]: time="2026-04-24T23:33:42.645280576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.650498 containerd[1609]: time="2026-04-24T23:33:42.645301015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.650498 containerd[1609]: time="2026-04-24T23:33:42.645448008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.682331 systemd-networkd[1246]: cali075a3a97654: Link UP Apr 24 23:33:42.686092 systemd-networkd[1246]: cali075a3a97654: Gained carrier Apr 24 23:33:42.695276 containerd[1609]: time="2026-04-24T23:33:42.695093184Z" level=info msg="StartContainer for \"cde8c6350ac0f645f39d57109685f67b920ef897967fe9af29f455949bddcfdc\" returns successfully" Apr 24 23:33:42.788101 containerd[1609]: time="2026-04-24T23:33:42.784286923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:42.788101 containerd[1609]: time="2026-04-24T23:33:42.784352360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:42.788101 containerd[1609]: time="2026-04-24T23:33:42.784385199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.788101 containerd[1609]: time="2026-04-24T23:33:42.784488514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.517 [ERROR][3736] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.622 [INFO][3736] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0 whisker-7564959958- calico-system 0c4273ae-37c8-4148-b3e5-6cdb829a1a8e 856 0 2026-04-24 23:33:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7564959958 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 whisker-7564959958-dsnpn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali075a3a97654 [] [] }} ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.622 [INFO][3736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.901 [INFO][3818] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.926 [INFO][3818] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036c220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"whisker-7564959958-dsnpn", "timestamp":"2026-04-24 23:33:41.90176002 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003d0420)} Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:41.926 [INFO][3818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.484 [INFO][3818] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.488 [INFO][3818] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.521 [INFO][3818] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.551 [INFO][3818] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.556 [INFO][3818] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.564 [INFO][3818] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.564 [INFO][3818] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.575 [INFO][3818] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333 Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.589 [INFO][3818] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.613 [INFO][3818] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.71/26] block=192.168.47.64/26 handle="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.615 [INFO][3818] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.71/26] handle="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.616 [INFO][3818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:42.788101 containerd[1609]: 2026-04-24 23:33:42.616 [INFO][3818] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.71/26] IPv6=[] ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.647 [INFO][3736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0", GenerateName:"whisker-7564959958-", Namespace:"calico-system", SelfLink:"", UID:"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7564959958", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"whisker-7564959958-dsnpn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali075a3a97654", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.647 [INFO][3736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.71/32] ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.647 [INFO][3736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali075a3a97654 ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.702 [INFO][3736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.707 [INFO][3736] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0", GenerateName:"whisker-7564959958-", Namespace:"calico-system", SelfLink:"", UID:"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7564959958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333", Pod:"whisker-7564959958-dsnpn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali075a3a97654", MAC:"96:6c:d3:e6:ca:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:42.789837 containerd[1609]: 2026-04-24 23:33:42.735 [INFO][3736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Namespace="calico-system" Pod="whisker-7564959958-dsnpn" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0" Apr 24 23:33:42.906692 kubelet[2774]: I0424 23:33:42.900538 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:33:42.918288 systemd[1]: run-containerd-runc-k8s.io-5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c-runc.gV97UZ.mount: Deactivated successfully. Apr 24 23:33:42.988788 kubelet[2774]: I0424 23:33:42.988278 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w9lds" podStartSLOduration=38.988252957 podStartE2EDuration="38.988252957s" podCreationTimestamp="2026-04-24 23:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:33:42.976507935 +0000 UTC m=+45.566285109" watchObservedRunningTime="2026-04-24 23:33:42.988252957 +0000 UTC m=+45.578030091" Apr 24 23:33:42.994841 systemd-networkd[1246]: califb968e9a698: Link UP Apr 24 23:33:42.995424 systemd-networkd[1246]: califb968e9a698: Gained carrier Apr 24 23:33:43.019450 containerd[1609]: time="2026-04-24T23:33:43.000518717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85db46756d-tx2k6,Uid:9b8327f6-21ad-4747-ac73-b5c291efc011,Namespace:calico-system,Attempt:0,} returns sandbox id \"018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea\"" Apr 24 23:33:43.073113 kubelet[2774]: I0424 23:33:43.059527 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v5q9h" podStartSLOduration=39.059506527 podStartE2EDuration="39.059506527s" podCreationTimestamp="2026-04-24 23:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-24 23:33:43.038600308 +0000 UTC m=+45.628377442" watchObservedRunningTime="2026-04-24 23:33:43.059506527 +0000 UTC m=+45.649283621" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.728 [ERROR][3804] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.867 [INFO][3804] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0 csi-node-driver- calico-system 0709c95b-fcee-45d7-b89b-c07c9daffd70 707 0 2026-04-24 23:33:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 csi-node-driver-zdz8r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb968e9a698 [] [] }} ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.867 [INFO][3804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.940 [INFO][3848] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" HandleID="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Workload="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.960 [INFO][3848] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" HandleID="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Workload="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebf10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"csi-node-driver-zdz8r", "timestamp":"2026-04-24 23:33:41.940316417 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b4dc0)} Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:41.960 [INFO][3848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.623 [INFO][3848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.624 [INFO][3848] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963' Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.639 [INFO][3848] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.677 [INFO][3848] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.706 [INFO][3848] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.722 [INFO][3848] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.761 [INFO][3848] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.764 [INFO][3848] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.779 [INFO][3848] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.820 [INFO][3848] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.878 [INFO][3848] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.47.72/26] block=192.168.47.64/26 handle="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.878 [INFO][3848] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.72/26] handle="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" host="ci-4081-3-6-n-4ca6954963" Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.878 [INFO][3848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:33:43.094349 containerd[1609]: 2026-04-24 23:33:42.878 [INFO][3848] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.72/26] IPv6=[] ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" HandleID="k8s-pod-network.3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Workload="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.098455 containerd[1609]: 2026-04-24 23:33:42.910 [INFO][3804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0709c95b-fcee-45d7-b89b-c07c9daffd70", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"csi-node-driver-zdz8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb968e9a698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:43.098455 containerd[1609]: 2026-04-24 23:33:42.910 [INFO][3804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.72/32] ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.098455 containerd[1609]: 2026-04-24 23:33:42.910 [INFO][3804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb968e9a698 ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.098455 containerd[1609]: 2026-04-24 23:33:43.015 [INFO][3804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.098455 
containerd[1609]: 2026-04-24 23:33:43.027 [INFO][3804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0709c95b-fcee-45d7-b89b-c07c9daffd70", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d", Pod:"csi-node-driver-zdz8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb968e9a698", MAC:"ca:ee:27:79:9b:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:33:43.098455 containerd[1609]: 
2026-04-24 23:33:43.061 [INFO][3804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d" Namespace="calico-system" Pod="csi-node-driver-zdz8r" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-csi--node--driver--zdz8r-eth0" Apr 24 23:33:43.137113 containerd[1609]: time="2026-04-24T23:33:43.136129573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:43.137113 containerd[1609]: time="2026-04-24T23:33:43.136365803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:43.140368 containerd[1609]: time="2026-04-24T23:33:43.139520913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:43.142758 containerd[1609]: time="2026-04-24T23:33:43.140336200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699cff59bd-hvnsr,Uid:851f778d-8cd6-4676-b0f1-43ba69e059c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2\"" Apr 24 23:33:43.143536 containerd[1609]: time="2026-04-24T23:33:43.140201725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:43.302698 containerd[1609]: time="2026-04-24T23:33:43.294753083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:33:43.302698 containerd[1609]: time="2026-04-24T23:33:43.300922589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:33:43.302698 containerd[1609]: time="2026-04-24T23:33:43.300942149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:43.302698 containerd[1609]: time="2026-04-24T23:33:43.301062344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:33:43.390013 systemd-networkd[1246]: cali898661940a0: Gained IPv6LL Apr 24 23:33:43.407837 containerd[1609]: time="2026-04-24T23:33:43.405984745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85db46756d-zpn2f,Uid:8d475154-2c3b-4970-8d6a-d57a93475a21,Namespace:calico-system,Attempt:0,} returns sandbox id \"5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c\"" Apr 24 23:33:43.437420 containerd[1609]: time="2026-04-24T23:33:43.437323695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7564959958-dsnpn,Uid:0c4273ae-37c8-4148-b3e5-6cdb829a1a8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\"" Apr 24 23:33:43.440037 containerd[1609]: time="2026-04-24T23:33:43.439973945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdz8r,Uid:0709c95b-fcee-45d7-b89b-c07c9daffd70,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d\"" Apr 24 23:33:43.582827 systemd-networkd[1246]: cali6dc8a2558db: Gained IPv6LL Apr 24 23:33:43.701700 kernel: calico-node[4232]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:33:43.708950 systemd-networkd[1246]: cali42cc91cad5e: Gained IPv6LL Apr 24 23:33:43.773969 systemd-networkd[1246]: cali4a426af35b3: Gained IPv6LL Apr 24 23:33:43.836981 systemd-networkd[1246]: cali4f6971dc72b: Gained IPv6LL Apr 24 23:33:44.028921 
systemd-networkd[1246]: cali075a3a97654: Gained IPv6LL Apr 24 23:33:44.194655 systemd-networkd[1246]: vxlan.calico: Link UP Apr 24 23:33:44.194668 systemd-networkd[1246]: vxlan.calico: Gained carrier Apr 24 23:33:44.221932 systemd-networkd[1246]: cali0cf98bb5a17: Gained IPv6LL Apr 24 23:33:44.291494 systemd-networkd[1246]: califb968e9a698: Gained IPv6LL Apr 24 23:33:44.925129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564618281.mount: Deactivated successfully. Apr 24 23:33:45.321413 containerd[1609]: time="2026-04-24T23:33:45.321270918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:45.323488 containerd[1609]: time="2026-04-24T23:33:45.323129206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 24 23:33:45.325069 containerd[1609]: time="2026-04-24T23:33:45.324664506Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:45.328119 containerd[1609]: time="2026-04-24T23:33:45.328075374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:45.328931 containerd[1609]: time="2026-04-24T23:33:45.328869303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.795639457s" Apr 24 23:33:45.328931 containerd[1609]: time="2026-04-24T23:33:45.328908941Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 24 23:33:45.334851 containerd[1609]: time="2026-04-24T23:33:45.334797113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:33:45.339808 containerd[1609]: time="2026-04-24T23:33:45.339571207Z" level=info msg="CreateContainer within sandbox \"9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:33:45.368166 containerd[1609]: time="2026-04-24T23:33:45.368009783Z" level=info msg="CreateContainer within sandbox \"9113b21b5e21f4c9afb38ec70d7c6709078b4ed17ebe382b87a428c3bf5d4cf2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7f8db01e9982c9639421002aac68a5b6feaeae7561f3f3c72534aad039c83ea1\"" Apr 24 23:33:45.369471 containerd[1609]: time="2026-04-24T23:33:45.369070942Z" level=info msg="StartContainer for \"7f8db01e9982c9639421002aac68a5b6feaeae7561f3f3c72534aad039c83ea1\"" Apr 24 23:33:45.373814 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Apr 24 23:33:45.459790 containerd[1609]: time="2026-04-24T23:33:45.459736262Z" level=info msg="StartContainer for \"7f8db01e9982c9639421002aac68a5b6feaeae7561f3f3c72534aad039c83ea1\" returns successfully" Apr 24 23:33:45.973701 kubelet[2774]: I0424 23:33:45.968882 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-6nj4m" podStartSLOduration=21.15923817 podStartE2EDuration="23.968862774s" podCreationTimestamp="2026-04-24 23:33:22 +0000 UTC" firstStartedPulling="2026-04-24 23:33:42.52487192 +0000 UTC m=+45.114649054" lastFinishedPulling="2026-04-24 23:33:45.334496484 +0000 UTC m=+47.924273658" observedRunningTime="2026-04-24 23:33:45.968278797 +0000 UTC m=+48.558055931" watchObservedRunningTime="2026-04-24 23:33:45.968862774 +0000 UTC m=+48.558639908" Apr 24 23:33:49.693816 
kubelet[2774]: I0424 23:33:49.693424 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:33:49.865612 systemd[1]: run-containerd-runc-k8s.io-56715b9952f762cddf65b1503590b917e9dd9d12f12e8dddca8fb68c68f48c30-runc.vT0Jwv.mount: Deactivated successfully. Apr 24 23:33:53.762061 containerd[1609]: time="2026-04-24T23:33:53.760748097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:53.763025 containerd[1609]: time="2026-04-24T23:33:53.762977548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 24 23:33:53.764293 containerd[1609]: time="2026-04-24T23:33:53.764238269Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:53.768632 containerd[1609]: time="2026-04-24T23:33:53.767662564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:33:53.768885 containerd[1609]: time="2026-04-24T23:33:53.768855047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 8.434009576s" Apr 24 23:33:53.768982 containerd[1609]: time="2026-04-24T23:33:53.768967043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:33:53.777200 containerd[1609]: 
time="2026-04-24T23:33:53.776911439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:33:53.785832 containerd[1609]: time="2026-04-24T23:33:53.785668489Z" level=info msg="CreateContainer within sandbox \"018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:33:53.804903 containerd[1609]: time="2026-04-24T23:33:53.804861417Z" level=info msg="CreateContainer within sandbox \"018ae72df76c85bd2bd31ed9b30e4020dd754c66290aa8b34c164f467bf738ea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0972b6040343953a196f8e00d39860dd6fc56f18b54f6239c6fde4b6d7fdbac0\"" Apr 24 23:33:53.806049 containerd[1609]: time="2026-04-24T23:33:53.806016901Z" level=info msg="StartContainer for \"0972b6040343953a196f8e00d39860dd6fc56f18b54f6239c6fde4b6d7fdbac0\"" Apr 24 23:33:53.852172 systemd[1]: run-containerd-runc-k8s.io-0972b6040343953a196f8e00d39860dd6fc56f18b54f6239c6fde4b6d7fdbac0-runc.E6hb9S.mount: Deactivated successfully. 
Apr 24 23:33:53.893515 containerd[1609]: time="2026-04-24T23:33:53.893469526Z" level=info msg="StartContainer for \"0972b6040343953a196f8e00d39860dd6fc56f18b54f6239c6fde4b6d7fdbac0\" returns successfully"
Apr 24 23:33:56.384188 kubelet[2774]: I0424 23:33:56.383685 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-85db46756d-tx2k6" podStartSLOduration=24.635392069 podStartE2EDuration="35.383655772s" podCreationTimestamp="2026-04-24 23:33:21 +0000 UTC" firstStartedPulling="2026-04-24 23:33:43.026938348 +0000 UTC m=+45.616715482" lastFinishedPulling="2026-04-24 23:33:53.775202011 +0000 UTC m=+56.364979185" observedRunningTime="2026-04-24 23:33:53.990562133 +0000 UTC m=+56.580339267" watchObservedRunningTime="2026-04-24 23:33:56.383655772 +0000 UTC m=+58.973432906"
Apr 24 23:33:57.987865 containerd[1609]: time="2026-04-24T23:33:57.987789251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:57.990101 containerd[1609]: time="2026-04-24T23:33:57.989796916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955"
Apr 24 23:33:57.991719 containerd[1609]: time="2026-04-24T23:33:57.991582267Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:57.996715 containerd[1609]: time="2026-04-24T23:33:57.996394614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:57.998022 containerd[1609]: time="2026-04-24T23:33:57.997970931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.221004054s"
Apr 24 23:33:57.998022 containerd[1609]: time="2026-04-24T23:33:57.998017729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\""
Apr 24 23:33:58.000913 containerd[1609]: time="2026-04-24T23:33:58.000215869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 24 23:33:58.023952 containerd[1609]: time="2026-04-24T23:33:58.023713240Z" level=info msg="CreateContainer within sandbox \"b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 24 23:33:58.053911 containerd[1609]: time="2026-04-24T23:33:58.053808234Z" level=info msg="CreateContainer within sandbox \"b3c2f0eff3523adc05f238328cd642b49947e27876a41e6fae4195dd3b58dbc2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e4cab9c7c9daa9fceebabd065f31256389e3fd24608a45cb90109834fdc78da\""
Apr 24 23:33:58.054820 containerd[1609]: time="2026-04-24T23:33:58.054775808Z" level=info msg="StartContainer for \"7e4cab9c7c9daa9fceebabd065f31256389e3fd24608a45cb90109834fdc78da\""
Apr 24 23:33:58.163390 containerd[1609]: time="2026-04-24T23:33:58.162330690Z" level=info msg="StartContainer for \"7e4cab9c7c9daa9fceebabd065f31256389e3fd24608a45cb90109834fdc78da\" returns successfully"
Apr 24 23:33:58.416776 containerd[1609]: time="2026-04-24T23:33:58.415738427Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:33:58.417907 containerd[1609]: time="2026-04-24T23:33:58.417863770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 24 23:33:58.425327 containerd[1609]: time="2026-04-24T23:33:58.425175095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 424.901547ms"
Apr 24 23:33:58.425327 containerd[1609]: time="2026-04-24T23:33:58.425295091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Apr 24 23:33:58.432109 containerd[1609]: time="2026-04-24T23:33:58.431180334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 24 23:33:58.440704 containerd[1609]: time="2026-04-24T23:33:58.440644521Z" level=info msg="CreateContainer within sandbox \"5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 24 23:33:58.470036 containerd[1609]: time="2026-04-24T23:33:58.468563493Z" level=info msg="CreateContainer within sandbox \"5506e30a654b0527da0181077b2c848b0fa3e69b0c283a645e425c1c03b36c8c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32ed478e649779d3b1de8aef97e3e0476fd6864ad7e7de6aad11b02b5bdd6233\""
Apr 24 23:33:58.472072 containerd[1609]: time="2026-04-24T23:33:58.471941923Z" level=info msg="StartContainer for \"32ed478e649779d3b1de8aef97e3e0476fd6864ad7e7de6aad11b02b5bdd6233\""
Apr 24 23:33:58.595089 containerd[1609]: time="2026-04-24T23:33:58.595039188Z" level=info msg="StartContainer for \"32ed478e649779d3b1de8aef97e3e0476fd6864ad7e7de6aad11b02b5bdd6233\" returns successfully"
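[Editor's note, not part of the log: the containerd "Pulled image" entries above record the compressed image size and the wall-clock pull time. A minimal Python sketch that parses those two fields out of such a message; the sample message is abbreviated from the kube-controllers pull above, and the helper name `pull_stats` is made up for illustration.]

```python
import re

# Match the trailing `size "<bytes>" in <duration>` of a containerd
# "Pulled image ..." message; containerd prints the duration in s or ms.
PULLED = re.compile(r'size "(\d+)" in ([0-9.]+)(ms|s)')

def pull_stats(msg: str) -> tuple[int, float]:
    """Return (size_in_bytes, duration_in_seconds) for a Pulled-image line."""
    size, num, unit = PULLED.search(msg).groups()
    return int(size), float(num) / (1000.0 if unit == "ms" else 1.0)

# Abbreviated from the kube-controllers pull logged above.
msg = ('Pulled image "ghcr.io/flatcar/calico/kube-controllers:v3.31.4" '
      '... size "50587448" in 4.221004054s')
size, secs = pull_stats(msg)   # 50587448 bytes in ~4.22 s
throughput = size / secs       # roughly 12 MB/s for this pull
```

[The apiserver pull above reads only 77 bytes in ~425 ms: the layers were already present, so only the manifest was fetched.]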
Apr 24 23:33:59.067288 kubelet[2774]: I0424 23:33:59.063204 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699cff59bd-hvnsr" podStartSLOduration=21.221597131 podStartE2EDuration="36.063183584s" podCreationTimestamp="2026-04-24 23:33:23 +0000 UTC" firstStartedPulling="2026-04-24 23:33:43.157964354 +0000 UTC m=+45.747741448" lastFinishedPulling="2026-04-24 23:33:57.999550767 +0000 UTC m=+60.589327901" observedRunningTime="2026-04-24 23:33:59.034031103 +0000 UTC m=+61.623808237" watchObservedRunningTime="2026-04-24 23:33:59.063183584 +0000 UTC m=+61.652960718"
Apr 24 23:33:59.131724 kubelet[2774]: I0424 23:33:59.129783 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-85db46756d-zpn2f" podStartSLOduration=23.110514129 podStartE2EDuration="38.129762531s" podCreationTimestamp="2026-04-24 23:33:21 +0000 UTC" firstStartedPulling="2026-04-24 23:33:43.409707351 +0000 UTC m=+45.999484485" lastFinishedPulling="2026-04-24 23:33:58.428955753 +0000 UTC m=+61.018732887" observedRunningTime="2026-04-24 23:33:59.063407659 +0000 UTC m=+61.653184793" watchObservedRunningTime="2026-04-24 23:33:59.129762531 +0000 UTC m=+61.719539665"
Apr 24 23:34:00.021700 kubelet[2774]: I0424 23:34:00.019442 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:34:00.051719 containerd[1609]: time="2026-04-24T23:34:00.050161971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:00.052339 containerd[1609]: time="2026-04-24T23:34:00.051654494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Apr 24 23:34:00.052548 containerd[1609]: time="2026-04-24T23:34:00.052516072Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:00.055649 containerd[1609]: time="2026-04-24T23:34:00.055594114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:00.057250 containerd[1609]: time="2026-04-24T23:34:00.057202993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.62597002s"
Apr 24 23:34:00.057443 containerd[1609]: time="2026-04-24T23:34:00.057417028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Apr 24 23:34:00.059011 containerd[1609]: time="2026-04-24T23:34:00.058965068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 24 23:34:00.066507 containerd[1609]: time="2026-04-24T23:34:00.066454199Z" level=info msg="CreateContainer within sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 24 23:34:00.086695 containerd[1609]: time="2026-04-24T23:34:00.085585715Z" level=info msg="CreateContainer within sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\""
Apr 24 23:34:00.091586 containerd[1609]: time="2026-04-24T23:34:00.091534484Z" level=info msg="StartContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\""
Apr 24 23:34:00.160080 systemd[1]: run-containerd-runc-k8s.io-a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9-runc.CWaiHn.mount: Deactivated successfully.
Apr 24 23:34:00.224468 containerd[1609]: time="2026-04-24T23:34:00.224049530Z" level=info msg="StartContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" returns successfully"
Apr 24 23:34:01.657918 containerd[1609]: time="2026-04-24T23:34:01.657822932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:01.659607 containerd[1609]: time="2026-04-24T23:34:01.659551729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Apr 24 23:34:01.660701 containerd[1609]: time="2026-04-24T23:34:01.660610503Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:01.663424 containerd[1609]: time="2026-04-24T23:34:01.663342076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:01.664189 containerd[1609]: time="2026-04-24T23:34:01.664148016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.605141708s"
Apr 24 23:34:01.664189 containerd[1609]: time="2026-04-24T23:34:01.664187055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Apr 24 23:34:01.665951 containerd[1609]: time="2026-04-24T23:34:01.665910213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 24 23:34:01.672622 containerd[1609]: time="2026-04-24T23:34:01.672575689Z" level=info msg="CreateContainer within sandbox \"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 24 23:34:01.697709 containerd[1609]: time="2026-04-24T23:34:01.696538819Z" level=info msg="CreateContainer within sandbox \"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"06da12f89312d3ff8786634c4e1f0f3d62d305017a6524cf80148a07cb3bf7cb\""
Apr 24 23:34:01.699711 containerd[1609]: time="2026-04-24T23:34:01.699049197Z" level=info msg="StartContainer for \"06da12f89312d3ff8786634c4e1f0f3d62d305017a6524cf80148a07cb3bf7cb\""
Apr 24 23:34:01.778286 containerd[1609]: time="2026-04-24T23:34:01.778098731Z" level=info msg="StartContainer for \"06da12f89312d3ff8786634c4e1f0f3d62d305017a6524cf80148a07cb3bf7cb\" returns successfully"
Apr 24 23:34:03.445877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92924361.mount: Deactivated successfully.
Apr 24 23:34:03.471994 containerd[1609]: time="2026-04-24T23:34:03.471923401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:03.474538 containerd[1609]: time="2026-04-24T23:34:03.474487021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 24 23:34:03.475985 containerd[1609]: time="2026-04-24T23:34:03.475925348Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:03.480026 containerd[1609]: time="2026-04-24T23:34:03.479951974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:03.481324 containerd[1609]: time="2026-04-24T23:34:03.480965070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.815010898s"
Apr 24 23:34:03.481324 containerd[1609]: time="2026-04-24T23:34:03.481011349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 24 23:34:03.483321 containerd[1609]: time="2026-04-24T23:34:03.483040702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 24 23:34:03.486976 containerd[1609]: time="2026-04-24T23:34:03.486820214Z" level=info msg="CreateContainer within sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 24 23:34:03.502685 containerd[1609]: time="2026-04-24T23:34:03.502622686Z" level=info msg="CreateContainer within sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\""
Apr 24 23:34:03.511367 containerd[1609]: time="2026-04-24T23:34:03.511301443Z" level=info msg="StartContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\""
Apr 24 23:34:03.600607 containerd[1609]: time="2026-04-24T23:34:03.600457166Z" level=info msg="StartContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" returns successfully"
Apr 24 23:34:04.038001 containerd[1609]: time="2026-04-24T23:34:04.037775360Z" level=info msg="StopContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" with timeout 30 (s)"
Apr 24 23:34:04.038001 containerd[1609]: time="2026-04-24T23:34:04.037850478Z" level=info msg="StopContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" with timeout 30 (s)"
Apr 24 23:34:04.038532 containerd[1609]: time="2026-04-24T23:34:04.038416345Z" level=info msg="Stop container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" with signal terminated"
Apr 24 23:34:04.039155 containerd[1609]: time="2026-04-24T23:34:04.039108730Z" level=info msg="Stop container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" with signal terminated"
Apr 24 23:34:04.060842 kubelet[2774]: I0424 23:34:04.060107 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7564959958-dsnpn" podStartSLOduration=18.02076728 podStartE2EDuration="38.060088454s" podCreationTimestamp="2026-04-24 23:33:26 +0000 UTC" firstStartedPulling="2026-04-24 23:33:43.442812709 +0000 UTC m=+46.032589803" lastFinishedPulling="2026-04-24 23:34:03.482133843 +0000 UTC m=+66.071910977" observedRunningTime="2026-04-24 23:34:04.060045975 +0000 UTC m=+66.649823109" watchObservedRunningTime="2026-04-24 23:34:04.060088454 +0000 UTC m=+66.649865588"
Apr 24 23:34:04.108093 containerd[1609]: time="2026-04-24T23:34:04.107557898Z" level=info msg="shim disconnected" id=a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9 namespace=k8s.io
Apr 24 23:34:04.108093 containerd[1609]: time="2026-04-24T23:34:04.107913330Z" level=warning msg="cleaning up after shim disconnected" id=a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9 namespace=k8s.io
Apr 24 23:34:04.108093 containerd[1609]: time="2026-04-24T23:34:04.107934209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:34:04.227793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082-rootfs.mount: Deactivated successfully.
Apr 24 23:34:04.227965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9-rootfs.mount: Deactivated successfully.
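[Editor's note, not part of the log: the kubelet pod_startup_latency_tracker entries above relate their fields as podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A small Python sketch checking that arithmetic against the whisker pod's entry; the helper name `slo_duration` is invented for illustration.]

```python
from datetime import datetime

def _ts(s: str):
    # Log timestamps carry nine fractional digits; strptime's %f only
    # accepts up to six, so split the fraction off and handle it manually.
    head, frac = s.split(".")
    return datetime.strptime(head, "%Y-%m-%d %H:%M:%S"), int(frac) / 10 ** len(frac)

def slo_duration(e2e_seconds: float, first_started_pulling: str,
                 last_finished_pulling: str) -> float:
    """podStartSLOduration = podStartE2EDuration minus the image-pull window."""
    (d1, f1), (d2, f2) = _ts(first_started_pulling), _ts(last_finished_pulling)
    return e2e_seconds - ((d2 - d1).total_seconds() + (f2 - f1))

# Values from the whisker-7564959958-dsnpn entry above; reproduces
# podStartSLOduration=18.02076728 to within rounding.
slo = slo_duration(38.060088454,
                   "2026-04-24 23:33:43.442812709",
                   "2026-04-24 23:34:03.482133843")
```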
Apr 24 23:34:04.241747 containerd[1609]: time="2026-04-24T23:34:04.240942314Z" level=info msg="shim disconnected" id=fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082 namespace=k8s.io
Apr 24 23:34:04.241747 containerd[1609]: time="2026-04-24T23:34:04.241031192Z" level=warning msg="cleaning up after shim disconnected" id=fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082 namespace=k8s.io
Apr 24 23:34:04.241747 containerd[1609]: time="2026-04-24T23:34:04.241055071Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:34:04.253905 containerd[1609]: time="2026-04-24T23:34:04.253026320Z" level=info msg="StopContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" returns successfully"
Apr 24 23:34:04.299775 containerd[1609]: time="2026-04-24T23:34:04.297348835Z" level=info msg="StopContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" returns successfully"
Apr 24 23:34:04.326876 containerd[1609]: time="2026-04-24T23:34:04.326799847Z" level=info msg="StopPodSandbox for \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\""
Apr 24 23:34:04.327034 containerd[1609]: time="2026-04-24T23:34:04.326889885Z" level=info msg="Container to stop \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:34:04.327034 containerd[1609]: time="2026-04-24T23:34:04.326903605Z" level=info msg="Container to stop \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:34:04.334885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333-shm.mount: Deactivated successfully.
Apr 24 23:34:04.374444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333-rootfs.mount: Deactivated successfully.
Apr 24 23:34:04.380269 containerd[1609]: time="2026-04-24T23:34:04.379954042Z" level=info msg="shim disconnected" id=14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333 namespace=k8s.io
Apr 24 23:34:04.380269 containerd[1609]: time="2026-04-24T23:34:04.380024361Z" level=warning msg="cleaning up after shim disconnected" id=14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333 namespace=k8s.io
Apr 24 23:34:04.380269 containerd[1609]: time="2026-04-24T23:34:04.380036361Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:34:04.480025 systemd-networkd[1246]: cali075a3a97654: Link DOWN
Apr 24 23:34:04.480037 systemd-networkd[1246]: cali075a3a97654: Lost carrier
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.475 [INFO][5080] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.476 [INFO][5080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" iface="eth0" netns="/var/run/netns/cni-acaa48f3-cf56-3590-b318-6c6415224b9c"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.478 [INFO][5080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" iface="eth0" netns="/var/run/netns/cni-acaa48f3-cf56-3590-b318-6c6415224b9c"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.491 [INFO][5080] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" after=13.379217ms iface="eth0" netns="/var/run/netns/cni-acaa48f3-cf56-3590-b318-6c6415224b9c"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.494 [INFO][5080] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.495 [INFO][5080] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.531 [INFO][5094] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.531 [INFO][5094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.531 [INFO][5094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.583 [INFO][5094] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.583 [INFO][5094] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.585 [INFO][5094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:34:04.589412 containerd[1609]: 2026-04-24 23:34:04.587 [INFO][5080] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:04.592009 containerd[1609]: time="2026-04-24T23:34:04.591835119Z" level=info msg="TearDown network for sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" successfully"
Apr 24 23:34:04.592009 containerd[1609]: time="2026-04-24T23:34:04.591878318Z" level=info msg="StopPodSandbox for \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" returns successfully"
Apr 24 23:34:04.597411 systemd[1]: run-netns-cni\x2dacaa48f3\x2dcf56\x2d3590\x2db318\x2d6c6415224b9c.mount: Deactivated successfully.
Apr 24 23:34:04.721601 kubelet[2774]: I0424 23:34:04.721254 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwhz2\" (UniqueName: \"kubernetes.io/projected/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-kube-api-access-gwhz2\") pod \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") "
Apr 24 23:34:04.721601 kubelet[2774]: I0424 23:34:04.721347 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-backend-key-pair\") pod \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") "
Apr 24 23:34:04.721601 kubelet[2774]: I0424 23:34:04.721393 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-ca-bundle\") pod \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") "
Apr 24 23:34:04.721601 kubelet[2774]: I0424 23:34:04.721433 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-nginx-config\") pod \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\" (UID: \"0c4273ae-37c8-4148-b3e5-6cdb829a1a8e\") "
Apr 24 23:34:04.739016 systemd[1]: var-lib-kubelet-pods-0c4273ae\x2d37c8\x2d4148\x2db3e5\x2d6cdb829a1a8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgwhz2.mount: Deactivated successfully.
Apr 24 23:34:04.740616 kubelet[2774]: I0424 23:34:04.739107 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-kube-api-access-gwhz2" (OuterVolumeSpecName: "kube-api-access-gwhz2") pod "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e" (UID: "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e"). InnerVolumeSpecName "kube-api-access-gwhz2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 23:34:04.740616 kubelet[2774]: I0424 23:34:04.739121 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e" (UID: "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 23:34:04.740616 kubelet[2774]: I0424 23:34:04.739461 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e" (UID: "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 23:34:04.742807 kubelet[2774]: I0424 23:34:04.742747 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e" (UID: "0c4273ae-37c8-4148-b3e5-6cdb829a1a8e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 23:34:04.822006 kubelet[2774]: I0424 23:34:04.821942 2774 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-nginx-config\") on node \"ci-4081-3-6-n-4ca6954963\" DevicePath \"\""
Apr 24 23:34:04.822006 kubelet[2774]: I0424 23:34:04.821998 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwhz2\" (UniqueName: \"kubernetes.io/projected/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-kube-api-access-gwhz2\") on node \"ci-4081-3-6-n-4ca6954963\" DevicePath \"\""
Apr 24 23:34:04.822006 kubelet[2774]: I0424 23:34:04.822018 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-4ca6954963\" DevicePath \"\""
Apr 24 23:34:04.822287 kubelet[2774]: I0424 23:34:04.822035 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e-whisker-ca-bundle\") on node \"ci-4081-3-6-n-4ca6954963\" DevicePath \"\""
Apr 24 23:34:05.048775 kubelet[2774]: I0424 23:34:05.048604 2774 scope.go:117] "RemoveContainer" containerID="fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"
Apr 24 23:34:05.054822 containerd[1609]: time="2026-04-24T23:34:05.054372386Z" level=info msg="RemoveContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\""
Apr 24 23:34:05.086253 containerd[1609]: time="2026-04-24T23:34:05.086178404Z" level=info msg="RemoveContainer for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" returns successfully"
Apr 24 23:34:05.087176 kubelet[2774]: I0424 23:34:05.087122 2774 scope.go:117] "RemoveContainer" containerID="a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"
Apr 24 23:34:05.094133 containerd[1609]: time="2026-04-24T23:34:05.092496064Z" level=info msg="RemoveContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\""
Apr 24 23:34:05.098626 containerd[1609]: time="2026-04-24T23:34:05.097766468Z" level=info msg="RemoveContainer for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" returns successfully"
Apr 24 23:34:05.101726 kubelet[2774]: I0424 23:34:05.099986 2774 scope.go:117] "RemoveContainer" containerID="fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"
Apr 24 23:34:05.102474 containerd[1609]: time="2026-04-24T23:34:05.102306448Z" level=error msg="ContainerStatus for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": not found"
Apr 24 23:34:05.102948 kubelet[2774]: E0424 23:34:05.102926 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": not found" containerID="fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"
Apr 24 23:34:05.103157 kubelet[2774]: I0424 23:34:05.103087 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"} err="failed to get container status \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": not found"
Apr 24 23:34:05.103250 kubelet[2774]: I0424 23:34:05.103239 2774 scope.go:117] "RemoveContainer" containerID="a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"
Apr 24 23:34:05.103797 containerd[1609]: time="2026-04-24T23:34:05.103757776Z" level=error msg="ContainerStatus for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": not found"
Apr 24 23:34:05.104078 kubelet[2774]: E0424 23:34:05.103954 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": not found" containerID="a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"
Apr 24 23:34:05.104078 kubelet[2774]: I0424 23:34:05.103987 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"} err="failed to get container status \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": not found"
Apr 24 23:34:05.104078 kubelet[2774]: I0424 23:34:05.104005 2774 scope.go:117] "RemoveContainer" containerID="fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"
Apr 24 23:34:05.104597 containerd[1609]: time="2026-04-24T23:34:05.104304764Z" level=error msg="ContainerStatus for \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": not found"
Apr 24 23:34:05.104748 kubelet[2774]: I0424 23:34:05.104426 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082"} err="failed to get container status \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa77ca10ed7882a62e95079fc33fb7f690b4b0c9c707eab83bcf0413d15c0082\": not found"
Apr 24 23:34:05.104748 kubelet[2774]: I0424 23:34:05.104444 2774 scope.go:117] "RemoveContainer" containerID="a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"
Apr 24 23:34:05.105764 containerd[1609]: time="2026-04-24T23:34:05.104983549Z" level=error msg="ContainerStatus for \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": not found"
Apr 24 23:34:05.106322 kubelet[2774]: I0424 23:34:05.105880 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9"} err="failed to get container status \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2d8c261eb16ee24fdc1fc46aa5431fc137f9557589bd378fc6d5bfe174ba9a9\": not found"
Apr 24 23:34:05.226817 kubelet[2774]: I0424 23:34:05.223505 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47a93b50-1bfc-4bc7-803c-03933f5685dd-nginx-config\") pod \"whisker-86ddc8df4c-kvqlr\" (UID: \"47a93b50-1bfc-4bc7-803c-03933f5685dd\") " pod="calico-system/whisker-86ddc8df4c-kvqlr"
Apr 24 23:34:05.226817 kubelet[2774]: I0424 23:34:05.223583 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwk2t\" (UniqueName: \"kubernetes.io/projected/47a93b50-1bfc-4bc7-803c-03933f5685dd-kube-api-access-jwk2t\") pod \"whisker-86ddc8df4c-kvqlr\" (UID: \"47a93b50-1bfc-4bc7-803c-03933f5685dd\") " pod="calico-system/whisker-86ddc8df4c-kvqlr"
Apr 24 23:34:05.226817 kubelet[2774]: I0424 23:34:05.223612 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47a93b50-1bfc-4bc7-803c-03933f5685dd-whisker-backend-key-pair\") pod \"whisker-86ddc8df4c-kvqlr\" (UID: \"47a93b50-1bfc-4bc7-803c-03933f5685dd\") " pod="calico-system/whisker-86ddc8df4c-kvqlr"
Apr 24 23:34:05.226817 kubelet[2774]: I0424 23:34:05.223635 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47a93b50-1bfc-4bc7-803c-03933f5685dd-whisker-ca-bundle\") pod \"whisker-86ddc8df4c-kvqlr\" (UID: \"47a93b50-1bfc-4bc7-803c-03933f5685dd\") " pod="calico-system/whisker-86ddc8df4c-kvqlr"
Apr 24 23:34:05.231651 systemd[1]: var-lib-kubelet-pods-0c4273ae\x2d37c8\x2d4148\x2db3e5\x2d6cdb829a1a8e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 24 23:34:05.451776 containerd[1609]: time="2026-04-24T23:34:05.451668981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86ddc8df4c-kvqlr,Uid:47a93b50-1bfc-4bc7-803c-03933f5685dd,Namespace:calico-system,Attempt:0,}"
Apr 24 23:34:05.561991 containerd[1609]: time="2026-04-24T23:34:05.561770632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:05.563568 containerd[1609]: time="2026-04-24T23:34:05.563339037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Apr 24 23:34:05.565258 containerd[1609]: time="2026-04-24T23:34:05.565000081Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:05.567172 kubelet[2774]: I0424 23:34:05.567112 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4273ae-37c8-4148-b3e5-6cdb829a1a8e" path="/var/lib/kubelet/pods/0c4273ae-37c8-4148-b3e5-6cdb829a1a8e/volumes"
Apr 24 23:34:05.575228 containerd[1609]: time="2026-04-24T23:34:05.575155577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:34:05.577463 containerd[1609]: time="2026-04-24T23:34:05.577402447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.094315746s"
Apr 24 23:34:05.577693 containerd[1609]: time="2026-04-24T23:34:05.577656001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Apr 24 23:34:05.598022 containerd[1609]: time="2026-04-24T23:34:05.597961913Z" level=info msg="CreateContainer within sandbox \"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 24 23:34:05.621789 containerd[1609]: time="2026-04-24T23:34:05.621668230Z" level=info msg="CreateContainer within sandbox \"3a9a2c10f28f4ed6f3a9d158dbedd87768489e0c951838f2379bf4627d5a857d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fa4547fbe61cb7905319697823c405b1e4ed9228ac51dcb9c9338ea13a6c9982\""
Apr 24 23:34:05.624532 containerd[1609]: time="2026-04-24T23:34:05.624195375Z" level=info msg="StartContainer for \"fa4547fbe61cb7905319697823c405b1e4ed9228ac51dcb9c9338ea13a6c9982\""
Apr 24 23:34:05.692017 systemd-networkd[1246]: cali62cf69b0f67: Link UP
Apr 24 23:34:05.693951 systemd-networkd[1246]: cali62cf69b0f67: Gained carrier
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.533 [INFO][5137] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0 whisker-86ddc8df4c- calico-system 47a93b50-1bfc-4bc7-803c-03933f5685dd 1074 0 2026-04-24 23:34:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86ddc8df4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-4ca6954963 whisker-86ddc8df4c-kvqlr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali62cf69b0f67 [] [] }} ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.533 [INFO][5137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.587 [INFO][5149] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" HandleID="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.604 [INFO][5149] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" HandleID="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4ca6954963", "pod":"whisker-86ddc8df4c-kvqlr", "timestamp":"2026-04-24 23:34:05.587363067 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4ca6954963", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a8580)}
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.604 [INFO][5149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.604 [INFO][5149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.604 [INFO][5149] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4ca6954963'
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.608 [INFO][5149] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.617 [INFO][5149] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.633 [INFO][5149] ipam/ipam.go 526: Trying affinity for 192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.638 [INFO][5149] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.645 [INFO][5149] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.645 [INFO][5149] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.651 [INFO][5149] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.661 [INFO][5149] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.677 [INFO][5149] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.73/26] block=192.168.47.64/26 handle="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.678 [INFO][5149] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.73/26] handle="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" host="ci-4081-3-6-n-4ca6954963"
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.678 [INFO][5149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:34:05.720397 containerd[1609]: 2026-04-24 23:34:05.678 [INFO][5149] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.73/26] IPv6=[] ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" HandleID="k8s-pod-network.e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.684 [INFO][5137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0", GenerateName:"whisker-86ddc8df4c-", Namespace:"calico-system", SelfLink:"", UID:"47a93b50-1bfc-4bc7-803c-03933f5685dd", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86ddc8df4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"", Pod:"whisker-86ddc8df4c-kvqlr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62cf69b0f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.684 [INFO][5137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.73/32] ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.684 [INFO][5137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62cf69b0f67 ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.695 [INFO][5137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.696 [INFO][5137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0", GenerateName:"whisker-86ddc8df4c-", Namespace:"calico-system", SelfLink:"", UID:"47a93b50-1bfc-4bc7-803c-03933f5685dd", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86ddc8df4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4ca6954963", ContainerID:"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439", Pod:"whisker-86ddc8df4c-kvqlr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62cf69b0f67", MAC:"62:a3:c8:8e:d2:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:34:05.721664 containerd[1609]: 2026-04-24 23:34:05.712 [INFO][5137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439" Namespace="calico-system" Pod="whisker-86ddc8df4c-kvqlr" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--86ddc8df4c--kvqlr-eth0"
Apr 24 23:34:05.758342 containerd[1609]: time="2026-04-24T23:34:05.757275439Z" level=info msg="StartContainer for \"fa4547fbe61cb7905319697823c405b1e4ed9228ac51dcb9c9338ea13a6c9982\" returns successfully"
Apr 24 23:34:05.767707 containerd[1609]: time="2026-04-24T23:34:05.767421895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:34:05.767707 containerd[1609]: time="2026-04-24T23:34:05.767488373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:34:05.767707 containerd[1609]: time="2026-04-24T23:34:05.767499693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:34:05.770484 containerd[1609]: time="2026-04-24T23:34:05.767614851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:34:05.842198 containerd[1609]: time="2026-04-24T23:34:05.842146486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86ddc8df4c-kvqlr,Uid:47a93b50-1bfc-4bc7-803c-03933f5685dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439\""
Apr 24 23:34:05.854665 containerd[1609]: time="2026-04-24T23:34:05.854515454Z" level=info msg="CreateContainer within sandbox \"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 24 23:34:05.878666 containerd[1609]: time="2026-04-24T23:34:05.878596802Z" level=info msg="CreateContainer within sandbox \"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"40324d51e6459c3c4c252aaa46b9cbc2c613b4123975658664493d9b5d2492af\""
Apr 24 23:34:05.883214 containerd[1609]: time="2026-04-24T23:34:05.882069606Z" level=info msg="StartContainer for \"40324d51e6459c3c4c252aaa46b9cbc2c613b4123975658664493d9b5d2492af\""
Apr 24 23:34:05.968823 containerd[1609]: time="2026-04-24T23:34:05.968412221Z" level=info msg="StartContainer for \"40324d51e6459c3c4c252aaa46b9cbc2c613b4123975658664493d9b5d2492af\" returns successfully"
Apr 24 23:34:05.980231 containerd[1609]: time="2026-04-24T23:34:05.980099003Z" level=info msg="CreateContainer within sandbox \"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 24 23:34:05.994450 containerd[1609]: time="2026-04-24T23:34:05.994393968Z" level=info msg="CreateContainer within sandbox \"e1eff6d62cc05710f6f7f7433a0a25e5ae1c4862e71a4564ed95c5798ee75439\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"608b14a627a8d8fb39fe54f5c803a85511df344a5088d0419cd7120bde4c60cf\""
Apr 24 23:34:05.995179 containerd[1609]: time="2026-04-24T23:34:05.995141511Z" level=info msg="StartContainer for \"608b14a627a8d8fb39fe54f5c803a85511df344a5088d0419cd7120bde4c60cf\""
Apr 24 23:34:06.076159 kubelet[2774]: I0424 23:34:06.076018 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zdz8r" podStartSLOduration=20.938272777 podStartE2EDuration="43.075993532s" podCreationTimestamp="2026-04-24 23:33:23 +0000 UTC" firstStartedPulling="2026-04-24 23:33:43.443110896 +0000 UTC m=+46.032888030" lastFinishedPulling="2026-04-24 23:34:05.580831691 +0000 UTC m=+68.170608785" observedRunningTime="2026-04-24 23:34:06.07097284 +0000 UTC m=+68.660749974" watchObservedRunningTime="2026-04-24 23:34:06.075993532 +0000 UTC m=+68.665770666"
Apr 24 23:34:06.104704 containerd[1609]: time="2026-04-24T23:34:06.104567599Z" level=info msg="StartContainer for \"608b14a627a8d8fb39fe54f5c803a85511df344a5088d0419cd7120bde4c60cf\" returns successfully"
Apr 24 23:34:06.671695 kubelet[2774]: I0424 23:34:06.671579 2774 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 24 23:34:06.672451 kubelet[2774]: I0424 23:34:06.671728 2774 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 24 23:34:07.777501 systemd-networkd[1246]: cali62cf69b0f67: Gained IPv6LL
Apr 24 23:34:28.397288 kubelet[2774]: I0424 23:34:28.397182 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:34:28.432222 kubelet[2774]: I0424 23:34:28.432135 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86ddc8df4c-kvqlr" podStartSLOduration=23.432063889 podStartE2EDuration="23.432063889s" podCreationTimestamp="2026-04-24 23:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:34:07.102269036 +0000 UTC m=+69.692046170" watchObservedRunningTime="2026-04-24 23:34:28.432063889 +0000 UTC m=+91.021841023"
Apr 24 23:34:57.569017 containerd[1609]: time="2026-04-24T23:34:57.568833486Z" level=info msg="StopPodSandbox for \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\""
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.625 [WARNING][5508] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.625 [INFO][5508] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.625 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" iface="eth0" netns=""
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.625 [INFO][5508] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.625 [INFO][5508] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.659 [INFO][5515] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.659 [INFO][5515] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.659 [INFO][5515] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.674 [WARNING][5515] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.675 [INFO][5515] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.677 [INFO][5515] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:34:57.682205 containerd[1609]: 2026-04-24 23:34:57.680 [INFO][5508] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.683364 containerd[1609]: time="2026-04-24T23:34:57.682254357Z" level=info msg="TearDown network for sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" successfully"
Apr 24 23:34:57.683364 containerd[1609]: time="2026-04-24T23:34:57.682286476Z" level=info msg="StopPodSandbox for \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" returns successfully"
Apr 24 23:34:57.684223 containerd[1609]: time="2026-04-24T23:34:57.683736187Z" level=info msg="RemovePodSandbox for \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\""
Apr 24 23:34:57.684223 containerd[1609]: time="2026-04-24T23:34:57.683788706Z" level=info msg="Forcibly stopping sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\""
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.734 [WARNING][5529] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" WorkloadEndpoint="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.735 [INFO][5529] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.735 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" iface="eth0" netns=""
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.735 [INFO][5529] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.735 [INFO][5529] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.759 [INFO][5536] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.759 [INFO][5536] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.759 [INFO][5536] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.771 [WARNING][5536] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.772 [INFO][5536] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" HandleID="k8s-pod-network.14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333" Workload="ci--4081--3--6--n--4ca6954963-k8s-whisker--7564959958--dsnpn-eth0"
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.774 [INFO][5536] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:34:57.780707 containerd[1609]: 2026-04-24 23:34:57.776 [INFO][5529] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333"
Apr 24 23:34:57.780707 containerd[1609]: time="2026-04-24T23:34:57.779123739Z" level=info msg="TearDown network for sandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" successfully"
Apr 24 23:34:57.787382 containerd[1609]: time="2026-04-24T23:34:57.787330204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:34:57.787730 containerd[1609]: time="2026-04-24T23:34:57.787706561Z" level=info msg="RemovePodSandbox \"14d6ddf014ddd8fa4a1562ed6282f3f13427aeccd28a0a0d5fa942ae4c4a9333\" returns successfully"
Apr 24 23:35:05.807845 systemd[1]: Started sshd@8-91.99.220.32:22-2.57.122.177:60238.service - OpenSSH per-connection server daemon (2.57.122.177:60238).
Apr 24 23:35:05.950640 sshd[5592]: Invalid user solana from 2.57.122.177 port 60238
Apr 24 23:35:05.975641 sshd[5592]: Connection closed by invalid user solana 2.57.122.177 port 60238 [preauth]
Apr 24 23:35:05.980732 systemd[1]: sshd@8-91.99.220.32:22-2.57.122.177:60238.service: Deactivated successfully.
Apr 24 23:35:25.575162 systemd[1]: run-containerd-runc-k8s.io-7f8db01e9982c9639421002aac68a5b6feaeae7561f3f3c72534aad039c83ea1-runc.z1bxf1.mount: Deactivated successfully.
Apr 24 23:35:28.850143 systemd[1]: Started sshd@9-91.99.220.32:22-50.85.169.122:41142.service - OpenSSH per-connection server daemon (50.85.169.122:41142).
Apr 24 23:35:28.976424 sshd[5704]: Accepted publickey for core from 50.85.169.122 port 41142 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M
Apr 24 23:35:28.980108 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:35:28.986778 systemd-logind[1569]: New session 8 of user core.
Apr 24 23:35:28.990420 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 24 23:35:29.190245 sshd[5704]: pam_unix(sshd:session): session closed for user core
Apr 24 23:35:29.199921 systemd[1]: sshd@9-91.99.220.32:22-50.85.169.122:41142.service: Deactivated successfully.
Apr 24 23:35:29.204953 systemd[1]: session-8.scope: Deactivated successfully.
Apr 24 23:35:29.206049 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit.
Apr 24 23:35:29.207408 systemd-logind[1569]: Removed session 8.
Apr 24 23:35:34.215578 systemd[1]: Started sshd@10-91.99.220.32:22-50.85.169.122:50522.service - OpenSSH per-connection server daemon (50.85.169.122:50522).
Apr 24 23:35:34.338334 sshd[5737]: Accepted publickey for core from 50.85.169.122 port 50522 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M
Apr 24 23:35:34.340427 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:35:34.350988 systemd-logind[1569]: New session 9 of user core.
Apr 24 23:35:34.357294 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 24 23:35:34.545258 sshd[5737]: pam_unix(sshd:session): session closed for user core
Apr 24 23:35:34.551395 systemd[1]: sshd@10-91.99.220.32:22-50.85.169.122:50522.service: Deactivated successfully.
Apr 24 23:35:34.555861 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 23:35:34.555938 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit.
Apr 24 23:35:34.557893 systemd-logind[1569]: Removed session 9.
Apr 24 23:35:39.571071 systemd[1]: Started sshd@11-91.99.220.32:22-50.85.169.122:46014.service - OpenSSH per-connection server daemon (50.85.169.122:46014).
Apr 24 23:35:39.694260 sshd[5754]: Accepted publickey for core from 50.85.169.122 port 46014 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M
Apr 24 23:35:39.696995 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:35:39.703767 systemd-logind[1569]: New session 10 of user core.
Apr 24 23:35:39.711018 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 23:35:39.888167 sshd[5754]: pam_unix(sshd:session): session closed for user core
Apr 24 23:35:39.896102 systemd[1]: sshd@11-91.99.220.32:22-50.85.169.122:46014.service: Deactivated successfully.
Apr 24 23:35:39.901953 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 23:35:39.903151 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit.
Apr 24 23:35:39.904156 systemd-logind[1569]: Removed session 10.
Apr 24 23:35:44.909410 systemd[1]: Started sshd@12-91.99.220.32:22-50.85.169.122:46020.service - OpenSSH per-connection server daemon (50.85.169.122:46020).
Apr 24 23:35:45.051175 sshd[5776]: Accepted publickey for core from 50.85.169.122 port 46020 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:45.053420 sshd[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:45.060713 systemd-logind[1569]: New session 11 of user core. Apr 24 23:35:45.066465 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:35:45.269658 sshd[5776]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:45.275544 systemd[1]: sshd@12-91.99.220.32:22-50.85.169.122:46020.service: Deactivated successfully. Apr 24 23:35:45.280440 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:35:45.281134 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:35:45.283440 systemd-logind[1569]: Removed session 11. Apr 24 23:35:45.296088 systemd[1]: Started sshd@13-91.99.220.32:22-50.85.169.122:46030.service - OpenSSH per-connection server daemon (50.85.169.122:46030). Apr 24 23:35:45.424035 sshd[5800]: Accepted publickey for core from 50.85.169.122 port 46030 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:45.425803 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:45.432081 systemd-logind[1569]: New session 12 of user core. Apr 24 23:35:45.440136 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:35:45.684077 sshd[5800]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:45.692711 systemd[1]: sshd@13-91.99.220.32:22-50.85.169.122:46030.service: Deactivated successfully. Apr 24 23:35:45.700850 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:35:45.704787 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:35:45.714059 systemd[1]: Started sshd@14-91.99.220.32:22-50.85.169.122:46046.service - OpenSSH per-connection server daemon (50.85.169.122:46046). 
Apr 24 23:35:45.717144 systemd-logind[1569]: Removed session 12. Apr 24 23:35:45.829780 sshd[5812]: Accepted publickey for core from 50.85.169.122 port 46046 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:45.832598 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:45.838320 systemd-logind[1569]: New session 13 of user core. Apr 24 23:35:45.843140 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:35:46.049100 sshd[5812]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:46.056560 systemd[1]: sshd@14-91.99.220.32:22-50.85.169.122:46046.service: Deactivated successfully. Apr 24 23:35:46.062961 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:35:46.065343 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Apr 24 23:35:46.067383 systemd-logind[1569]: Removed session 13. Apr 24 23:35:51.080846 systemd[1]: Started sshd@15-91.99.220.32:22-50.85.169.122:38552.service - OpenSSH per-connection server daemon (50.85.169.122:38552). Apr 24 23:35:51.220734 sshd[5866]: Accepted publickey for core from 50.85.169.122 port 38552 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:51.223536 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:51.229984 systemd-logind[1569]: New session 14 of user core. Apr 24 23:35:51.237781 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:35:51.430428 sshd[5866]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:51.438839 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:35:51.439308 systemd[1]: sshd@15-91.99.220.32:22-50.85.169.122:38552.service: Deactivated successfully. Apr 24 23:35:51.443494 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:35:51.445944 systemd-logind[1569]: Removed session 14. 
Apr 24 23:35:56.460351 systemd[1]: Started sshd@16-91.99.220.32:22-50.85.169.122:38556.service - OpenSSH per-connection server daemon (50.85.169.122:38556). Apr 24 23:35:56.582210 sshd[5880]: Accepted publickey for core from 50.85.169.122 port 38556 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:56.584938 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:56.590913 systemd-logind[1569]: New session 15 of user core. Apr 24 23:35:56.597394 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:35:56.775086 sshd[5880]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:56.783068 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:35:56.784083 systemd[1]: sshd@16-91.99.220.32:22-50.85.169.122:38556.service: Deactivated successfully. Apr 24 23:35:56.789368 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:35:56.793816 systemd-logind[1569]: Removed session 15. Apr 24 23:35:56.801205 systemd[1]: Started sshd@17-91.99.220.32:22-50.85.169.122:38564.service - OpenSSH per-connection server daemon (50.85.169.122:38564). Apr 24 23:35:56.921095 sshd[5894]: Accepted publickey for core from 50.85.169.122 port 38564 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:56.923751 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:56.930594 systemd-logind[1569]: New session 16 of user core. Apr 24 23:35:56.935306 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:35:57.298436 sshd[5894]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:57.305626 systemd[1]: sshd@17-91.99.220.32:22-50.85.169.122:38564.service: Deactivated successfully. Apr 24 23:35:57.310173 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:35:57.312651 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. 
Apr 24 23:35:57.319028 systemd[1]: Started sshd@18-91.99.220.32:22-50.85.169.122:38566.service - OpenSSH per-connection server daemon (50.85.169.122:38566). Apr 24 23:35:57.321128 systemd-logind[1569]: Removed session 16. Apr 24 23:35:57.450145 sshd[5906]: Accepted publickey for core from 50.85.169.122 port 38566 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:57.452661 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:57.462786 systemd-logind[1569]: New session 17 of user core. Apr 24 23:35:57.467280 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:35:58.158013 sshd[5906]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:58.170554 systemd[1]: sshd@18-91.99.220.32:22-50.85.169.122:38566.service: Deactivated successfully. Apr 24 23:35:58.176330 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:35:58.178577 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:35:58.195525 systemd[1]: Started sshd@19-91.99.220.32:22-50.85.169.122:38574.service - OpenSSH per-connection server daemon (50.85.169.122:38574). Apr 24 23:35:58.200576 systemd-logind[1569]: Removed session 17. Apr 24 23:35:58.330104 sshd[5935]: Accepted publickey for core from 50.85.169.122 port 38574 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:58.333118 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:58.338800 systemd-logind[1569]: New session 18 of user core. Apr 24 23:35:58.345586 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:35:58.674915 sshd[5935]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:58.682640 systemd[1]: sshd@19-91.99.220.32:22-50.85.169.122:38574.service: Deactivated successfully. Apr 24 23:35:58.691908 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 24 23:35:58.700439 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:35:58.708135 systemd[1]: Started sshd@20-91.99.220.32:22-50.85.169.122:38584.service - OpenSSH per-connection server daemon (50.85.169.122:38584). Apr 24 23:35:58.710241 systemd-logind[1569]: Removed session 18. Apr 24 23:35:58.825893 sshd[5949]: Accepted publickey for core from 50.85.169.122 port 38584 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:58.830450 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:58.836400 systemd-logind[1569]: New session 19 of user core. Apr 24 23:35:58.842178 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:35:59.008760 sshd[5949]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:59.015160 systemd[1]: sshd@20-91.99.220.32:22-50.85.169.122:38584.service: Deactivated successfully. Apr 24 23:35:59.030051 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:35:59.031790 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:35:59.035736 systemd-logind[1569]: Removed session 19. Apr 24 23:36:03.417640 systemd[1]: run-containerd-runc-k8s.io-7e4cab9c7c9daa9fceebabd065f31256389e3fd24608a45cb90109834fdc78da-runc.p5gLO8.mount: Deactivated successfully. Apr 24 23:36:04.036176 systemd[1]: Started sshd@21-91.99.220.32:22-50.85.169.122:39282.service - OpenSSH per-connection server daemon (50.85.169.122:39282). Apr 24 23:36:04.168233 sshd[6002]: Accepted publickey for core from 50.85.169.122 port 39282 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:36:04.169935 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:04.175247 systemd-logind[1569]: New session 20 of user core. Apr 24 23:36:04.180143 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 24 23:36:04.366951 sshd[6002]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:04.375254 systemd[1]: sshd@21-91.99.220.32:22-50.85.169.122:39282.service: Deactivated successfully. Apr 24 23:36:04.380371 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:36:04.381989 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:36:04.383363 systemd-logind[1569]: Removed session 20. Apr 24 23:36:09.390260 systemd[1]: Started sshd@22-91.99.220.32:22-50.85.169.122:39298.service - OpenSSH per-connection server daemon (50.85.169.122:39298). Apr 24 23:36:09.501375 sshd[6019]: Accepted publickey for core from 50.85.169.122 port 39298 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:36:09.504496 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:09.510666 systemd-logind[1569]: New session 21 of user core. Apr 24 23:36:09.517236 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:36:09.692029 sshd[6019]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:09.698956 systemd[1]: sshd@22-91.99.220.32:22-50.85.169.122:39298.service: Deactivated successfully. Apr 24 23:36:09.703864 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:36:09.704778 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Apr 24 23:36:09.706636 systemd-logind[1569]: Removed session 21. Apr 24 23:36:14.717115 systemd[1]: Started sshd@23-91.99.220.32:22-50.85.169.122:46294.service - OpenSSH per-connection server daemon (50.85.169.122:46294). Apr 24 23:36:14.845053 sshd[6033]: Accepted publickey for core from 50.85.169.122 port 46294 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:36:14.846748 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:36:14.852201 systemd-logind[1569]: New session 22 of user core. 
Apr 24 23:36:14.861370 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 23:36:15.039832 sshd[6033]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:15.045935 systemd[1]: sshd@23-91.99.220.32:22-50.85.169.122:46294.service: Deactivated successfully.
Apr 24 23:36:15.052446 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 23:36:15.055219 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit.
Apr 24 23:36:15.058169 systemd-logind[1569]: Removed session 22.
Apr 24 23:36:29.601929 kubelet[2774]: E0424 23:36:29.601098 2774 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51328->10.0.0.2:2379: read: connection timed out"
Apr 24 23:36:29.639737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5-rootfs.mount: Deactivated successfully.
Apr 24 23:36:29.640355 containerd[1609]: time="2026-04-24T23:36:29.640275404Z" level=info msg="shim disconnected" id=13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5 namespace=k8s.io
Apr 24 23:36:29.642827 containerd[1609]: time="2026-04-24T23:36:29.640367448Z" level=warning msg="cleaning up after shim disconnected" id=13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5 namespace=k8s.io
Apr 24 23:36:29.642827 containerd[1609]: time="2026-04-24T23:36:29.640381209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:36:30.533335 kubelet[2774]: I0424 23:36:30.532975 2774 scope.go:117] "RemoveContainer" containerID="13a1c38c55d44f62d05f24d47c8e62c1132ebcaeda82ab16973cf521d480bbf5"
Apr 24 23:36:30.538589 containerd[1609]: time="2026-04-24T23:36:30.538393970Z" level=info msg="CreateContainer within sandbox \"b35e7a7636c6a3a3485121273f056d8d1a6ccac24fe23cc71ab11e69d9a6c019\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 24 23:36:30.563147 containerd[1609]: time="2026-04-24T23:36:30.562989905Z" level=info msg="CreateContainer within sandbox \"b35e7a7636c6a3a3485121273f056d8d1a6ccac24fe23cc71ab11e69d9a6c019\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"edc7945c4c1f92c9214a82a20073c8c59ca925b123a9b5c3ae38cfbc28b8ae70\""
Apr 24 23:36:30.564753 containerd[1609]: time="2026-04-24T23:36:30.563731857Z" level=info msg="StartContainer for \"edc7945c4c1f92c9214a82a20073c8c59ca925b123a9b5c3ae38cfbc28b8ae70\""
Apr 24 23:36:30.650015 containerd[1609]: time="2026-04-24T23:36:30.649919833Z" level=info msg="StartContainer for \"edc7945c4c1f92c9214a82a20073c8c59ca925b123a9b5c3ae38cfbc28b8ae70\" returns successfully"
Apr 24 23:36:30.995198 containerd[1609]: time="2026-04-24T23:36:30.995099955Z" level=info msg="shim disconnected" id=8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea namespace=k8s.io
Apr 24 23:36:30.995198 containerd[1609]: time="2026-04-24T23:36:30.995164318Z" level=warning msg="cleaning up after shim disconnected" id=8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea namespace=k8s.io
Apr 24 23:36:30.995564 containerd[1609]: time="2026-04-24T23:36:30.995173439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:36:31.042594 containerd[1609]: time="2026-04-24T23:36:31.042398200Z" level=info msg="shim disconnected" id=8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07 namespace=k8s.io
Apr 24 23:36:31.042594 containerd[1609]: time="2026-04-24T23:36:31.042471603Z" level=warning msg="cleaning up after shim disconnected" id=8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07 namespace=k8s.io
Apr 24 23:36:31.042594 containerd[1609]: time="2026-04-24T23:36:31.042482844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:36:31.539990 kubelet[2774]: I0424 23:36:31.539717 2774 scope.go:117] "RemoveContainer" containerID="8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07"
Apr 24 23:36:31.547453 containerd[1609]: time="2026-04-24T23:36:31.545178752Z" level=info msg="CreateContainer within sandbox \"82d2830db54b5157a2f5befbab5c980142c1d2138f518424212a3c6d6c639927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 24 23:36:31.549847 kubelet[2774]: I0424 23:36:31.546048 2774 scope.go:117] "RemoveContainer" containerID="8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea"
Apr 24 23:36:31.559289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a107a4cf8cdf96fe915688f22ec05a6f63f11c95f8b46192acda77cc14be7ea-rootfs.mount: Deactivated successfully.
Apr 24 23:36:31.560756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a8347d8e593f80885fa603cd883a3e71aaec64d9b1222c0b53c386b375f3d07-rootfs.mount: Deactivated successfully.
Apr 24 23:36:31.564652 containerd[1609]: time="2026-04-24T23:36:31.559667525Z" level=info msg="CreateContainer within sandbox \"c918c8c146730c4ed311c83927db4faeca26137b42c545557c41d84a737c2f41\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 24 23:36:31.586385 containerd[1609]: time="2026-04-24T23:36:31.585954757Z" level=info msg="CreateContainer within sandbox \"82d2830db54b5157a2f5befbab5c980142c1d2138f518424212a3c6d6c639927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0936442c359a2ed156c4c851d070cbd55909fffaaddf326e71650d2b08350580\""
Apr 24 23:36:31.592257 containerd[1609]: time="2026-04-24T23:36:31.592196901Z" level=info msg="StartContainer for \"0936442c359a2ed156c4c851d070cbd55909fffaaddf326e71650d2b08350580\""
Apr 24 23:36:31.613305 containerd[1609]: time="2026-04-24T23:36:31.613110306Z" level=info msg="CreateContainer within sandbox \"c918c8c146730c4ed311c83927db4faeca26137b42c545557c41d84a737c2f41\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6d06277313314fb6e7b85247468adacf7ef94c99ebe1cde5bad9a46576d296d9\""
Apr 24 23:36:31.614412 containerd[1609]: time="2026-04-24T23:36:31.614215793Z" level=info msg="StartContainer for \"6d06277313314fb6e7b85247468adacf7ef94c99ebe1cde5bad9a46576d296d9\""
Apr 24 23:36:31.732448 containerd[1609]: time="2026-04-24T23:36:31.732241826Z" level=info msg="StartContainer for \"6d06277313314fb6e7b85247468adacf7ef94c99ebe1cde5bad9a46576d296d9\" returns successfully"
Apr 24 23:36:31.761914 containerd[1609]: time="2026-04-24T23:36:31.760188729Z" level=info msg="StartContainer for \"0936442c359a2ed156c4c851d070cbd55909fffaaddf326e71650d2b08350580\" returns successfully"
Apr 24 23:36:33.633296 kubelet[2774]: E0424 23:36:33.631311 2774 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51146->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-4ca6954963.18a96f30c314b7d6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-4ca6954963,UID:dfdd80473ac93efbba343464200b2552,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-4ca6954963,},FirstTimestamp:2026-04-24 23:36:23.196399574 +0000 UTC m=+205.786176708,LastTimestamp:2026-04-24 23:36:23.196399574 +0000 UTC m=+205.786176708,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-4ca6954963,}"