Apr 16 23:28:56.055351 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Apr 16 23:28:56.055367 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 16 23:28:56.055374 kernel: KASLR enabled
Apr 16 23:28:56.055378 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 16 23:28:56.055382 kernel: printk: legacy bootconsole [pl11] enabled
Apr 16 23:28:56.055386 kernel: efi: EFI v2.7 by EDK II
Apr 16 23:28:56.055392 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f8018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Apr 16 23:28:56.055396 kernel: random: crng init done
Apr 16 23:28:56.055399 kernel: secureboot: Secure boot disabled
Apr 16 23:28:56.055403 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:28:56.055407 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Apr 16 23:28:56.055411 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055415 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055419 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 16 23:28:56.055424 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055429 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055433 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055437 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055441 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055446 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055451 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 16 23:28:56.055455 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 16 23:28:56.055459 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 16 23:28:56.055463 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 23:28:56.055467 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 16 23:28:56.055471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Apr 16 23:28:56.055476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Apr 16 23:28:56.055480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 16 23:28:56.055484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 16 23:28:56.055488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 16 23:28:56.055493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 16 23:28:56.055497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 16 23:28:56.055501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 16 23:28:56.055506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 16 23:28:56.055510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 16 23:28:56.055514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 16 23:28:56.055518 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Apr 16 23:28:56.055522 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Apr 16 23:28:56.055526 kernel: Zone ranges:
Apr 16 23:28:56.055531 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 16 23:28:56.055537 kernel: DMA32 empty
Apr 16 23:28:56.055542 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 16 23:28:56.055546 kernel: Device empty
Apr 16 23:28:56.055550 kernel: Movable zone start for each node
Apr 16 23:28:56.055555 kernel: Early memory node ranges
Apr 16 23:28:56.055559 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 16 23:28:56.055564 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Apr 16 23:28:56.055568 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Apr 16 23:28:56.055573 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Apr 16 23:28:56.055577 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Apr 16 23:28:56.055581 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Apr 16 23:28:56.055586 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 16 23:28:56.055590 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 16 23:28:56.055594 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 16 23:28:56.055599 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Apr 16 23:28:56.055603 kernel: psci: probing for conduit method from ACPI.
Apr 16 23:28:56.055607 kernel: psci: PSCIv1.3 detected in firmware.
Apr 16 23:28:56.055612 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 23:28:56.055617 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 16 23:28:56.055621 kernel: psci: SMC Calling Convention v1.4
Apr 16 23:28:56.055625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 16 23:28:56.055630 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 16 23:28:56.055634 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 23:28:56.055638 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 23:28:56.055643 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 23:28:56.055647 kernel: Detected PIPT I-cache on CPU0
Apr 16 23:28:56.055651 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Apr 16 23:28:56.055656 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 23:28:56.055660 kernel: CPU features: detected: Spectre-v4
Apr 16 23:28:56.055664 kernel: CPU features: detected: Spectre-BHB
Apr 16 23:28:56.055669 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 16 23:28:56.055674 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 16 23:28:56.055678 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Apr 16 23:28:56.055683 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 16 23:28:56.055687 kernel: alternatives: applying boot alternatives
Apr 16 23:28:56.055692 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:28:56.055697 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:28:56.055701 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:28:56.055705 kernel: Fallback order for Node 0: 0
Apr 16 23:28:56.055710 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Apr 16 23:28:56.055715 kernel: Policy zone: Normal
Apr 16 23:28:56.055719 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:28:56.055723 kernel: software IO TLB: area num 2.
Apr 16 23:28:56.055728 kernel: software IO TLB: mapped [mem 0x00000000358f0000-0x00000000398f0000] (64MB)
Apr 16 23:28:56.055732 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:28:56.055737 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:28:56.055741 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:28:56.055746 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:28:56.055750 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:28:56.055755 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:28:56.055759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:28:56.055763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:28:56.055769 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:28:56.055773 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:28:56.055777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 23:28:56.055782 kernel: GICv3: 960 SPIs implemented
Apr 16 23:28:56.055786 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 23:28:56.055790 kernel: Root IRQ handler: gic_handle_irq
Apr 16 23:28:56.055795 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 16 23:28:56.055799 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Apr 16 23:28:56.055803 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 16 23:28:56.055808 kernel: ITS: No ITS available, not enabling LPIs
Apr 16 23:28:56.055812 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:28:56.055817 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Apr 16 23:28:56.055822 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 23:28:56.055826 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Apr 16 23:28:56.055831 kernel: Console: colour dummy device 80x25
Apr 16 23:28:56.055835 kernel: printk: legacy console [tty1] enabled
Apr 16 23:28:56.055840 kernel: ACPI: Core revision 20240827
Apr 16 23:28:56.055845 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Apr 16 23:28:56.055849 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:28:56.055854 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:28:56.055858 kernel: landlock: Up and running.
Apr 16 23:28:56.055863 kernel: SELinux: Initializing.
Apr 16 23:28:56.055868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:28:56.055872 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:28:56.055877 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Apr 16 23:28:56.055882 kernel: Hyper-V: Host Build 10.0.26102.1283-1-0
Apr 16 23:28:56.055889 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 16 23:28:56.055895 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:28:56.055899 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:28:56.055904 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:28:56.055909 kernel: Remapping and enabling EFI services.
Apr 16 23:28:56.055913 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:28:56.055918 kernel: Detected PIPT I-cache on CPU1
Apr 16 23:28:56.055924 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 16 23:28:56.055928 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Apr 16 23:28:56.055933 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:28:56.055938 kernel: SMP: Total of 2 processors activated.
Apr 16 23:28:56.055942 kernel: CPU: All CPU(s) started at EL1
Apr 16 23:28:56.055948 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 23:28:56.055953 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 16 23:28:56.055957 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 16 23:28:56.055962 kernel: CPU features: detected: Common not Private translations
Apr 16 23:28:56.055967 kernel: CPU features: detected: CRC32 instructions
Apr 16 23:28:56.055972 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Apr 16 23:28:56.055976 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 16 23:28:56.055981 kernel: CPU features: detected: LSE atomic instructions
Apr 16 23:28:56.055986 kernel: CPU features: detected: Privileged Access Never
Apr 16 23:28:56.055991 kernel: CPU features: detected: Speculation barrier (SB)
Apr 16 23:28:56.055996 kernel: CPU features: detected: TLB range maintenance instructions
Apr 16 23:28:56.056001 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 16 23:28:56.056005 kernel: CPU features: detected: Scalable Vector Extension
Apr 16 23:28:56.056010 kernel: alternatives: applying system-wide alternatives
Apr 16 23:28:56.056015 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 16 23:28:56.056019 kernel: SVE: maximum available vector length 16 bytes per vector
Apr 16 23:28:56.056024 kernel: SVE: default vector length 16 bytes per vector
Apr 16 23:28:56.056029 kernel: Memory: 3952756K/4194160K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 220208K reserved, 16384K cma-reserved)
Apr 16 23:28:56.056035 kernel: devtmpfs: initialized
Apr 16 23:28:56.056039 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:28:56.056044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:28:56.056049 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 16 23:28:56.056054 kernel: 0 pages in range for non-PLT usage
Apr 16 23:28:56.056058 kernel: 508384 pages in range for PLT usage
Apr 16 23:28:56.056063 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:28:56.056068 kernel: SMBIOS 3.1.0 present.
Apr 16 23:28:56.056073 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 16 23:28:56.056087 kernel: DMI: Memory slots populated: 2/2
Apr 16 23:28:56.056093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:28:56.056097 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 23:28:56.056102 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 23:28:56.056107 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 23:28:56.056112 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:28:56.056116 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Apr 16 23:28:56.056121 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:28:56.056127 kernel: cpuidle: using governor menu
Apr 16 23:28:56.056131 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 23:28:56.056136 kernel: ASID allocator initialised with 32768 entries
Apr 16 23:28:56.056141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:28:56.056146 kernel: Serial: AMBA PL011 UART driver
Apr 16 23:28:56.056150 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:28:56.056155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:28:56.056160 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 23:28:56.056165 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 23:28:56.056170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:28:56.056175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:28:56.056179 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 23:28:56.056184 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 23:28:56.056189 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:28:56.056193 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:28:56.056198 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:28:56.056203 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:28:56.056207 kernel: ACPI: Interpreter enabled
Apr 16 23:28:56.056213 kernel: ACPI: Using GIC for interrupt routing
Apr 16 23:28:56.056218 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 16 23:28:56.056222 kernel: printk: legacy console [ttyAMA0] enabled
Apr 16 23:28:56.056227 kernel: printk: legacy bootconsole [pl11] disabled
Apr 16 23:28:56.056232 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 16 23:28:56.056236 kernel: ACPI: CPU0 has been hot-added
Apr 16 23:28:56.056241 kernel: ACPI: CPU1 has been hot-added
Apr 16 23:28:56.056246 kernel: iommu: Default domain type: Translated
Apr 16 23:28:56.056251 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 16 23:28:56.056256 kernel: efivars: Registered efivars operations
Apr 16 23:28:56.056261 kernel: vgaarb: loaded
Apr 16 23:28:56.056265 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 16 23:28:56.056270 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:28:56.056274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:28:56.056279 kernel: pnp: PnP ACPI init
Apr 16 23:28:56.056284 kernel: pnp: PnP ACPI: found 0 devices
Apr 16 23:28:56.056288 kernel: NET: Registered PF_INET protocol family
Apr 16 23:28:56.056293 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:28:56.056298 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:28:56.056304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:28:56.056308 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:28:56.056313 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:28:56.056318 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:28:56.056323 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:28:56.056327 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:28:56.056332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:28:56.056337 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:28:56.056341 kernel: kvm [1]: HYP mode not available
Apr 16 23:28:56.056347 kernel: Initialise system trusted keyrings
Apr 16 23:28:56.056351 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 23:28:56.056356 kernel: Key type asymmetric registered
Apr 16 23:28:56.056361 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:28:56.056365 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 16 23:28:56.056370 kernel: io scheduler mq-deadline registered
Apr 16 23:28:56.056375 kernel: io scheduler kyber registered
Apr 16 23:28:56.056380 kernel: io scheduler bfq registered
Apr 16 23:28:56.056385 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:28:56.056390 kernel: thunder_xcv, ver 1.0
Apr 16 23:28:56.056395 kernel: thunder_bgx, ver 1.0
Apr 16 23:28:56.056399 kernel: nicpf, ver 1.0
Apr 16 23:28:56.056404 kernel: nicvf, ver 1.0
Apr 16 23:28:56.056501 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 16 23:28:56.056551 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T23:28:55 UTC (1776382135)
Apr 16 23:28:56.056557 kernel: efifb: probing for efifb
Apr 16 23:28:56.056563 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 16 23:28:56.056568 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 16 23:28:56.056573 kernel: efifb: scrolling: redraw
Apr 16 23:28:56.056578 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 16 23:28:56.056582 kernel: Console: switching to colour frame buffer device 128x48
Apr 16 23:28:56.056587 kernel: fb0: EFI VGA frame buffer device
Apr 16 23:28:56.056592 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 16 23:28:56.056596 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:28:56.056601 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Apr 16 23:28:56.056607 kernel: watchdog: NMI not fully supported
Apr 16 23:28:56.056612 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:28:56.056617 kernel: watchdog: Hard watchdog permanently disabled
Apr 16 23:28:56.056621 kernel: Segment Routing with IPv6
Apr 16 23:28:56.056626 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:28:56.056631 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:28:56.056635 kernel: Key type dns_resolver registered
Apr 16 23:28:56.056640 kernel: registered taskstats version 1
Apr 16 23:28:56.056645 kernel: Loading compiled-in X.509 certificates
Apr 16 23:28:56.056650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8'
Apr 16 23:28:56.056655 kernel: Demotion targets for Node 0: null
Apr 16 23:28:56.056660 kernel: Key type .fscrypt registered
Apr 16 23:28:56.056664 kernel: Key type fscrypt-provisioning registered
Apr 16 23:28:56.056669 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:28:56.056674 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:28:56.056678 kernel: ima: No architecture policies found
Apr 16 23:28:56.056683 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 16 23:28:56.056688 kernel: clk: Disabling unused clocks
Apr 16 23:28:56.056693 kernel: PM: genpd: Disabling unused power domains
Apr 16 23:28:56.056698 kernel: Warning: unable to open an initial console.
Apr 16 23:28:56.056703 kernel: Freeing unused kernel memory: 39552K
Apr 16 23:28:56.056708 kernel: Run /init as init process
Apr 16 23:28:56.056712 kernel: with arguments:
Apr 16 23:28:56.056717 kernel: /init
Apr 16 23:28:56.056722 kernel: with environment:
Apr 16 23:28:56.056726 kernel: HOME=/
Apr 16 23:28:56.056731 kernel: TERM=linux
Apr 16 23:28:56.056736 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:28:56.056744 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:28:56.056750 systemd[1]: Detected virtualization microsoft.
Apr 16 23:28:56.056755 systemd[1]: Detected architecture arm64.
Apr 16 23:28:56.056760 systemd[1]: Running in initrd.
Apr 16 23:28:56.056765 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:28:56.056770 systemd[1]: Hostname set to .
Apr 16 23:28:56.056775 systemd[1]: Initializing machine ID from random generator.
Apr 16 23:28:56.056781 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:28:56.056787 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:28:56.056792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:28:56.056797 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:28:56.056803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:28:56.056808 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:28:56.056813 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:28:56.056820 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:28:56.056826 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:28:56.056831 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:28:56.056836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:28:56.056841 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:28:56.056846 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:28:56.056851 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:28:56.056857 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:28:56.056863 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:28:56.056868 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:28:56.056873 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:28:56.056878 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:28:56.056884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:28:56.056889 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:28:56.056894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:28:56.056899 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:28:56.056905 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:28:56.056910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:28:56.056915 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:28:56.056921 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:28:56.056926 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:28:56.056931 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:28:56.056936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:28:56.056950 systemd-journald[225]: Collecting audit messages is disabled.
Apr 16 23:28:56.056965 systemd-journald[225]: Journal started
Apr 16 23:28:56.056979 systemd-journald[225]: Runtime Journal (/run/log/journal/0d7b5d6d70a0434aa273c7da078fd56d) is 8M, max 78.3M, 70.3M free.
Apr 16 23:28:56.065107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:56.069704 systemd-modules-load[227]: Inserted module 'overlay'
Apr 16 23:28:56.089091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:28:56.089116 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:28:56.095246 kernel: Bridge firewalling registered
Apr 16 23:28:56.095304 systemd-modules-load[227]: Inserted module 'br_netfilter'
Apr 16 23:28:56.099933 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:28:56.112381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:28:56.120265 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:28:56.127677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:28:56.135925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:56.144706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:28:56.155815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:28:56.173178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:28:56.190896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:28:56.203854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:28:56.210096 systemd-tmpfiles[257]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:28:56.212318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:28:56.226054 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:28:56.234997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:28:56.246810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:28:56.261710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:28:56.271227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:28:56.286263 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:28:56.292381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:28:56.336575 systemd-resolved[264]: Positive Trust Anchors:
Apr 16 23:28:56.336587 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:28:56.336606 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:28:56.339229 systemd-resolved[264]: Defaulting to hostname 'linux'.
Apr 16 23:28:56.339848 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:28:56.345237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:28:56.436097 kernel: SCSI subsystem initialized
Apr 16 23:28:56.442088 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:28:56.449095 kernel: iscsi: registered transport (tcp)
Apr 16 23:28:56.461372 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:28:56.461383 kernel: QLogic iSCSI HBA Driver
Apr 16 23:28:56.474357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:28:56.489182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:28:56.495339 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:28:56.541823 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:28:56.549197 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:28:56.623085 kernel: raid6: neonx8 gen() 18539 MB/s
Apr 16 23:28:56.629093 kernel: raid6: neonx4 gen() 18568 MB/s
Apr 16 23:28:56.646089 kernel: raid6: neonx2 gen() 17071 MB/s
Apr 16 23:28:56.665088 kernel: raid6: neonx1 gen() 15114 MB/s
Apr 16 23:28:56.685162 kernel: raid6: int64x8 gen() 10555 MB/s
Apr 16 23:28:56.704087 kernel: raid6: int64x4 gen() 10611 MB/s
Apr 16 23:28:56.725094 kernel: raid6: int64x2 gen() 8991 MB/s
Apr 16 23:28:56.745176 kernel: raid6: int64x1 gen() 7032 MB/s
Apr 16 23:28:56.745188 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Apr 16 23:28:56.767321 kernel: raid6: .... xor() 15142 MB/s, rmw enabled
Apr 16 23:28:56.767327 kernel: raid6: using neon recovery algorithm
Apr 16 23:28:56.775210 kernel: xor: measuring software checksum speed
Apr 16 23:28:56.775218 kernel: 8regs : 28659 MB/sec
Apr 16 23:28:56.777727 kernel: 32regs : 28783 MB/sec
Apr 16 23:28:56.780170 kernel: arm64_neon : 37753 MB/sec
Apr 16 23:28:56.783103 kernel: xor: using function: arm64_neon (37753 MB/sec)
Apr 16 23:28:56.821103 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:28:56.825533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:28:56.835202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:28:56.857852 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Apr 16 23:28:56.861708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:28:56.873361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:28:56.895849 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Apr 16 23:28:56.915135 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:28:56.922708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:28:56.965154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:28:56.976863 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:28:57.031402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:28:57.041157 kernel: hv_vmbus: Vmbus version:5.3
Apr 16 23:28:57.031506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:57.048898 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:57.072296 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 16 23:28:57.072311 kernel: hv_vmbus: registering driver hid_hyperv
Apr 16 23:28:57.072318 kernel: hv_vmbus: registering driver hv_netvsc
Apr 16 23:28:57.072324 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 16 23:28:57.072332 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 16 23:28:57.061265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:57.089406 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 16 23:28:57.096797 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 16 23:28:57.091607 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:28:57.108166 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 16 23:28:57.115099 kernel: PTP clock support registered
Apr 16 23:28:57.126178 kernel: hv_utils: Registering HyperV Utility Driver
Apr 16 23:28:57.126207 kernel: hv_vmbus: registering driver hv_utils
Apr 16 23:28:57.129168 kernel: hv_vmbus: registering driver hv_storvsc
Apr 16 23:28:57.129196 kernel: hv_utils: Heartbeat IC version 3.0
Apr 16 23:28:57.134201 kernel: hv_utils: Shutdown IC version 3.2
Apr 16 23:28:57.137386 kernel: hv_utils: TimeSync IC version 4.0
Apr 16 23:28:57.137525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:57.317036 kernel: scsi host1: storvsc_host_t
Apr 16 23:28:57.317160 kernel: scsi host0: storvsc_host_t
Apr 16 23:28:57.317229 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 16 23:28:57.302686 systemd-resolved[264]: Clock change detected. Flushing caches.
Apr 16 23:28:57.330016 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 16 23:28:57.330038 kernel: hv_netvsc 7ced8dd0-bb9d-7ced-8dd0-bb9d7ced8dd0 eth0: VF slot 1 added
Apr 16 23:28:57.349216 kernel: hv_vmbus: registering driver hv_pci
Apr 16 23:28:57.349245 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 16 23:28:57.357032 kernel: hv_pci 9b1ba6da-51f6-4ebe-b36b-9e5a17911e3a: PCI VMBus probing: Using version 0x10004
Apr 16 23:28:57.357113 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 16 23:28:57.357177 kernel: hv_pci 9b1ba6da-51f6-4ebe-b36b-9e5a17911e3a: PCI host bridge to bus 51f6:00
Apr 16 23:28:57.363480 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 16 23:28:57.363577 kernel: pci_bus 51f6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 16 23:28:57.363652 kernel: pci_bus 51f6:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 16 23:28:57.372121 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 16 23:28:57.372199 kernel: pci 51f6:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:28:57.381691 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 16 23:28:57.387528 kernel: pci 51f6:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 16 23:28:57.393510 kernel: pci 51f6:00:02.0: enabling Extended Tags
Apr 16 23:28:57.407553 kernel: pci 51f6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51f6:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Apr 16 23:28:57.416272 kernel: pci_bus 51f6:00: busn_res: [bus 00-ff] end is updated to 00
Apr 16 23:28:57.416379 kernel: pci 51f6:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Apr 16 23:28:57.428643 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:28:57.428668 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 16 23:28:57.431239 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 16 23:28:57.435663 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 23:28:57.438504 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 16 23:28:57.455492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 16 23:28:57.478494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 16 23:28:57.505658 kernel: mlx5_core 51f6:00:02.0: enabling device (0000 -> 0002)
Apr 16 23:28:57.513579 kernel: mlx5_core 51f6:00:02.0: PTM is not supported by PCIe
Apr 16 23:28:57.513704 kernel: mlx5_core 51f6:00:02.0: firmware version: 16.30.5026
Apr 16 23:28:57.682254 kernel: hv_netvsc 7ced8dd0-bb9d-7ced-8dd0-bb9d7ced8dd0 eth0: VF registering: eth1
Apr 16 23:28:57.682445 kernel: mlx5_core 51f6:00:02.0 eth1: joined to eth0
Apr 16 23:28:57.687650 kernel: mlx5_core 51f6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 16 23:28:57.696526 kernel: mlx5_core 51f6:00:02.0 enP20982s1: renamed from eth1
Apr 16 23:28:57.866588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 16 23:28:57.944717 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 16 23:28:57.956248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 16 23:28:57.997235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 16 23:28:58.002378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 16 23:28:58.009497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:28:58.019073 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:28:58.028619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:28:58.037077 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:28:58.046912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:28:58.067063 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:28:58.094637 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:28:58.097978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:28:59.117376 disk-uuid[656]: The operation has completed successfully.
Apr 16 23:28:59.122225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:28:59.187910 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:28:59.191557 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:28:59.215370 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:28:59.236800 sh[821]: Success
Apr 16 23:28:59.267524 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:28:59.267560 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:28:59.272373 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:28:59.282499 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Apr 16 23:28:59.530832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:28:59.536533 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:28:59.549947 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:28:59.572498 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (839)
Apr 16 23:28:59.582627 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868
Apr 16 23:28:59.582661 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:28:59.881118 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:28:59.881199 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:28:59.931456 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:28:59.935178 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:28:59.942795 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:28:59.943374 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:28:59.963007 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:28:59.993526 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (872)
Apr 16 23:28:59.993556 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:29:00.003321 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:29:00.031084 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:29:00.031117 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:29:00.040518 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:29:00.040739 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:29:00.045471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:29:00.081577 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:29:00.092249 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:29:00.126217 systemd-networkd[1008]: lo: Link UP
Apr 16 23:29:00.126228 systemd-networkd[1008]: lo: Gained carrier
Apr 16 23:29:00.126927 systemd-networkd[1008]: Enumeration completed
Apr 16 23:29:00.129538 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:29:00.129541 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:29:00.129659 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:29:00.136598 systemd[1]: Reached target network.target - Network.
Apr 16 23:29:00.204502 kernel: mlx5_core 51f6:00:02.0 enP20982s1: Link up
Apr 16 23:29:00.237491 kernel: hv_netvsc 7ced8dd0-bb9d-7ced-8dd0-bb9d7ced8dd0 eth0: Data path switched to VF: enP20982s1
Apr 16 23:29:00.237942 systemd-networkd[1008]: enP20982s1: Link UP
Apr 16 23:29:00.237998 systemd-networkd[1008]: eth0: Link UP
Apr 16 23:29:00.238070 systemd-networkd[1008]: eth0: Gained carrier
Apr 16 23:29:00.238079 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:29:00.242600 systemd-networkd[1008]: enP20982s1: Gained carrier
Apr 16 23:29:00.256517 systemd-networkd[1008]: eth0: DHCPv4 address 10.0.0.6/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 16 23:29:01.236433 ignition[963]: Ignition 2.22.0
Apr 16 23:29:01.236450 ignition[963]: Stage: fetch-offline
Apr 16 23:29:01.240774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:29:01.236569 ignition[963]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:01.248785 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:29:01.236576 ignition[963]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:01.236650 ignition[963]: parsed url from cmdline: ""
Apr 16 23:29:01.236652 ignition[963]: no config URL provided
Apr 16 23:29:01.236655 ignition[963]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:29:01.236660 ignition[963]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:29:01.236666 ignition[963]: failed to fetch config: resource requires networking
Apr 16 23:29:01.236936 ignition[963]: Ignition finished successfully
Apr 16 23:29:01.286965 ignition[1020]: Ignition 2.22.0
Apr 16 23:29:01.286980 ignition[1020]: Stage: fetch
Apr 16 23:29:01.287167 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:01.287175 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:01.287244 ignition[1020]: parsed url from cmdline: ""
Apr 16 23:29:01.287246 ignition[1020]: no config URL provided
Apr 16 23:29:01.287250 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:29:01.287255 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:29:01.287270 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 16 23:29:01.419946 ignition[1020]: GET result: OK
Apr 16 23:29:01.420067 ignition[1020]: config has been read from IMDS userdata
Apr 16 23:29:01.422552 unknown[1020]: fetched base config from "system"
Apr 16 23:29:01.420090 ignition[1020]: parsing config with SHA512: 273babb3b51c4c138771d1d71c1854c300cb60b27afc0ec4ed7578f3ba7dcceea7b9e5590b6efed03e5a65d2884444feb774b358a7d196f6fd8ed1cdd5099978
Apr 16 23:29:01.422557 unknown[1020]: fetched base config from "system"
Apr 16 23:29:01.422987 ignition[1020]: fetch: fetch complete
Apr 16 23:29:01.422560 unknown[1020]: fetched user config from "azure"
Apr 16 23:29:01.424361 ignition[1020]: fetch: fetch passed
Apr 16 23:29:01.426271 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:29:01.424414 ignition[1020]: Ignition finished successfully
Apr 16 23:29:01.435505 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:29:01.472546 ignition[1026]: Ignition 2.22.0
Apr 16 23:29:01.472555 ignition[1026]: Stage: kargs
Apr 16 23:29:01.476546 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:29:01.472724 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:01.483292 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:29:01.472731 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:01.473274 ignition[1026]: kargs: kargs passed
Apr 16 23:29:01.473316 ignition[1026]: Ignition finished successfully
Apr 16 23:29:01.513559 ignition[1033]: Ignition 2.22.0
Apr 16 23:29:01.513570 ignition[1033]: Stage: disks
Apr 16 23:29:01.517402 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:29:01.513738 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:01.523282 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:29:01.513745 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:01.531971 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:29:01.514320 ignition[1033]: disks: disks passed
Apr 16 23:29:01.540179 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:29:01.514363 ignition[1033]: Ignition finished successfully
Apr 16 23:29:01.548682 systemd-networkd[1008]: eth0: Gained IPv6LL
Apr 16 23:29:01.548726 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:29:01.552953 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:29:01.561926 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:29:01.652552 systemd-fsck[1041]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Apr 16 23:29:01.659912 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:29:01.666250 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:29:01.900508 kernel: EXT4-fs (sda9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none.
Apr 16 23:29:01.901756 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:29:01.908184 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:29:01.927482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:29:01.934396 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:29:01.945353 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 23:29:01.955628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:29:01.982666 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1055)
Apr 16 23:29:01.982686 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:29:01.982694 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:29:01.955666 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:29:01.975153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:29:02.005546 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:29:02.005566 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:29:01.993140 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:29:02.011592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:29:02.582105 coreos-metadata[1057]: Apr 16 23:29:02.582 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 16 23:29:02.588563 coreos-metadata[1057]: Apr 16 23:29:02.588 INFO Fetch successful
Apr 16 23:29:02.588563 coreos-metadata[1057]: Apr 16 23:29:02.588 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 16 23:29:02.600501 coreos-metadata[1057]: Apr 16 23:29:02.600 INFO Fetch successful
Apr 16 23:29:02.615397 coreos-metadata[1057]: Apr 16 23:29:02.615 INFO wrote hostname ci-4459.2.4-n-b3358a4beb to /sysroot/etc/hostname
Apr 16 23:29:02.622688 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:29:02.815966 initrd-setup-root[1085]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:29:02.866343 initrd-setup-root[1092]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:29:02.874281 initrd-setup-root[1099]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:29:02.881188 initrd-setup-root[1106]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:29:04.054855 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:29:04.061339 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:29:04.079955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:29:04.091408 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:29:04.101499 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:29:04.123226 ignition[1174]: INFO : Ignition 2.22.0
Apr 16 23:29:04.123226 ignition[1174]: INFO : Stage: mount
Apr 16 23:29:04.123226 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:04.123226 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:04.123226 ignition[1174]: INFO : mount: mount passed
Apr 16 23:29:04.123226 ignition[1174]: INFO : Ignition finished successfully
Apr 16 23:29:04.126740 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:29:04.131545 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:29:04.140162 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:29:04.172593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:29:04.193502 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186)
Apr 16 23:29:04.204226 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:29:04.204247 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:29:04.213643 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:29:04.213656 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:29:04.215641 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:29:04.245780 ignition[1204]: INFO : Ignition 2.22.0
Apr 16 23:29:04.245780 ignition[1204]: INFO : Stage: files
Apr 16 23:29:04.252331 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:04.252331 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:04.252331 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:29:04.252331 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:29:04.252331 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:29:04.309236 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:29:04.316881 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:29:04.316881 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:29:04.309642 unknown[1204]: wrote ssh authorized keys file for user: core
Apr 16 23:29:04.357560 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:29:04.365793 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 23:29:04.389451 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:29:04.533524 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 23:29:04.541745 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 16 23:29:05.024435 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:29:06.087423 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 23:29:06.087423 ignition[1204]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:29:06.121472 ignition[1204]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:29:06.130733 ignition[1204]: INFO : files: files passed
Apr 16 23:29:06.130733 ignition[1204]: INFO : Ignition finished successfully
Apr 16 23:29:06.131086 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:29:06.144184 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:29:06.175916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:29:06.222428 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:29:06.222428 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:29:06.199462 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:29:06.253748 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:29:06.199548 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:29:06.206909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:29:06.217900 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:29:06.227870 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:29:06.285045 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:29:06.285134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:29:06.295664 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:29:06.306692 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:29:06.315198 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:29:06.315726 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:29:06.347410 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:29:06.353777 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:29:06.378315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:29:06.383240 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:29:06.393142 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:29:06.402143 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:29:06.402222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:29:06.414351 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:29:06.419347 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:29:06.428653 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:29:06.437971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:29:06.446355 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:29:06.456513 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:29:06.466296 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:29:06.474754 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:29:06.484334 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:29:06.492317 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:29:06.501331 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:29:06.508482 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:29:06.508578 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:29:06.519897 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:29:06.524554 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:29:06.533704 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:29:06.537620 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:29:06.543263 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:29:06.543341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:29:06.558053 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:29:06.558131 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:29:06.564168 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:29:06.564239 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:29:06.572060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 23:29:06.642610 ignition[1258]: INFO : Ignition 2.22.0
Apr 16 23:29:06.642610 ignition[1258]: INFO : Stage: umount
Apr 16 23:29:06.642610 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:29:06.642610 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 16 23:29:06.642610 ignition[1258]: INFO : umount: umount passed
Apr 16 23:29:06.642610 ignition[1258]: INFO : Ignition finished successfully
Apr 16 23:29:06.572123 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:29:06.583810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:29:06.597144 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:29:06.597253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:29:06.614640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:29:06.626068 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:29:06.626171 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:29:06.640582 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:29:06.640659 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:29:06.650197 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:29:06.650260 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:29:06.659164 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:29:06.659233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:29:06.673479 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:29:06.673812 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:29:06.673839 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:29:06.678776 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:29:06.678812 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:29:06.686450 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:29:06.686482 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:29:06.690601 systemd[1]: Stopped target network.target - Network.
Apr 16 23:29:06.694037 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:29:06.694070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:29:06.703049 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:29:06.717330 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:29:06.720501 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:29:06.726174 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:29:06.735823 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:29:06.743036 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:29:06.743067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:29:06.751116 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:29:06.751138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:29:06.759518 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:29:06.759559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:29:06.767775 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:29:06.767799 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:29:06.780309 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:29:06.791545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:29:06.814197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:29:06.814317 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:29:06.830593 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:29:07.054024 kernel: hv_netvsc 7ced8dd0-bb9d-7ced-8dd0-bb9d7ced8dd0 eth0: Data path switched from VF: enP20982s1
Apr 16 23:29:06.830748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:29:06.830833 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:29:06.844088 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:29:06.844731 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:29:06.852656 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:29:06.852690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:29:06.864297 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:29:06.879206 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:29:06.879260 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:29:06.891695 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:29:06.891743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:29:06.905178 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:29:06.905212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:29:06.910085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:29:06.910117 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:29:06.927595 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:29:06.936831 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:29:06.936876 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:29:06.951979 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:29:06.959582 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:29:06.970230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:29:06.970260 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:29:06.978504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:29:06.978528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:29:06.987056 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:29:06.987089 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:29:07.000132 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:29:07.000165 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:29:07.010209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:29:07.010239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:29:07.021414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:29:07.029461 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:29:07.029520 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:29:07.053888 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:29:07.053927 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:29:07.065233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:29:07.065275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:29:07.076010 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:29:07.076049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:29:07.076075 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:29:07.076318 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:29:07.076394 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:29:07.085510 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:29:07.085571 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:29:07.096215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:29:07.096296 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:29:07.114969 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:29:07.115064 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:29:07.123680 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:29:07.134889 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:29:07.158895 systemd[1]: Switching root.
Apr 16 23:29:07.382415 systemd-journald[225]: Journal stopped
Apr 16 23:29:12.589782 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:29:12.589801 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:29:12.589809 kernel: SELinux: policy capability open_perms=1
Apr 16 23:29:12.589814 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:29:12.589821 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:29:12.589826 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:29:12.589832 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:29:12.589837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:29:12.589843 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:29:12.589848 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:29:12.589853 kernel: audit: type=1403 audit(1776382148.323:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:29:12.589860 systemd[1]: Successfully loaded SELinux policy in 194.534ms.
Apr 16 23:29:12.589867 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.364ms.
Apr 16 23:29:12.589874 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:29:12.589882 systemd[1]: Detected virtualization microsoft.
Apr 16 23:29:12.589889 systemd[1]: Detected architecture arm64.
Apr 16 23:29:12.589894 systemd[1]: Detected first boot.
Apr 16 23:29:12.589901 systemd[1]: Hostname set to .
Apr 16 23:29:12.589906 systemd[1]: Initializing machine ID from random generator.
Apr 16 23:29:12.589912 zram_generator::config[1300]: No configuration found.
Apr 16 23:29:12.589919 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:29:12.589924 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:29:12.589930 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:29:12.589937 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:29:12.589943 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:29:12.589948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:29:12.589954 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:29:12.589961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:29:12.589967 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:29:12.589973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:29:12.589980 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:29:12.589986 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:29:12.589992 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:29:12.589998 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:29:12.590004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:29:12.590011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:29:12.590017 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:29:12.590023 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:29:12.590029 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:29:12.590036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:29:12.590043 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 16 23:29:12.590050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:29:12.590056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:29:12.590062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:29:12.590068 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:29:12.590074 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:29:12.590081 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:29:12.590087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:29:12.590093 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:29:12.590099 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:29:12.590105 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:29:12.590112 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:29:12.590118 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:29:12.590125 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:29:12.590131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:29:12.590138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:29:12.590144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:29:12.590150 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:29:12.590156 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:29:12.590163 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:29:12.590169 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:29:12.590176 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:29:12.590182 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:29:12.590188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:29:12.590194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:29:12.590201 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:29:12.590207 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:29:12.590214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:29:12.590220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:29:12.590226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:29:12.590233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:29:12.590239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:29:12.590245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:29:12.590251 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:29:12.590257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:29:12.590264 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:29:12.590270 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:29:12.590277 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:29:12.590283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:29:12.590289 kernel: fuse: init (API version 7.41)
Apr 16 23:29:12.590294 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:29:12.590308 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:29:12.590314 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:29:12.590321 kernel: loop: module loaded
Apr 16 23:29:12.590326 kernel: ACPI: bus type drm_connector registered
Apr 16 23:29:12.590332 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:29:12.590339 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:29:12.590345 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:29:12.590364 systemd-journald[1404]: Collecting audit messages is disabled.
Apr 16 23:29:12.590378 systemd-journald[1404]: Journal started
Apr 16 23:29:12.590392 systemd-journald[1404]: Runtime Journal (/run/log/journal/6a5ac950a4364da89dc21a31835af689) is 8M, max 78.3M, 70.3M free.
Apr 16 23:29:11.768369 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:29:11.778961 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 16 23:29:11.779320 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:29:11.779591 systemd[1]: systemd-journald.service: Consumed 2.470s CPU time.
Apr 16 23:29:12.608644 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:29:12.620625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:29:12.630742 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:29:12.630798 systemd[1]: Stopped verity-setup.service.
Apr 16 23:29:12.645023 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:29:12.645653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:29:12.650249 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:29:12.655173 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:29:12.659540 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:29:12.664630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:29:12.669525 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:29:12.674205 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:29:12.680028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:29:12.685820 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:29:12.685947 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:29:12.692102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:29:12.692218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:29:12.697326 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:29:12.697448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:29:12.702073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:29:12.702185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:29:12.707922 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:29:12.708040 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:29:12.712965 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:29:12.713074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:29:12.718049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:29:12.723421 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:29:12.728942 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:29:12.734742 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:29:12.740931 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:29:12.754331 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:29:12.760433 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:29:12.775560 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:29:12.780493 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:29:12.780573 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:29:12.785715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:29:12.791990 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:29:12.796447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:29:12.822409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:29:12.837102 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:29:12.842861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:29:12.843558 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:29:12.848962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:29:12.849607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:29:12.856595 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:29:12.863061 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:29:12.871932 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:29:12.879093 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:29:12.888972 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:29:12.889532 kernel: loop0: detected capacity change from 0 to 200864
Apr 16 23:29:12.896056 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:29:12.899024 systemd-journald[1404]: Time spent on flushing to /var/log/journal/6a5ac950a4364da89dc21a31835af689 is 10.057ms for 930 entries.
Apr 16 23:29:12.899024 systemd-journald[1404]: System Journal (/var/log/journal/6a5ac950a4364da89dc21a31835af689) is 8M, max 2.6G, 2.6G free.
Apr 16 23:29:12.946548 systemd-journald[1404]: Received client request to flush runtime journal.
Apr 16 23:29:12.946628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:29:12.911022 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:29:12.937521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:29:12.951226 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:29:12.970268 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:29:12.977639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:29:12.990962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:29:12.999644 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:29:13.005501 kernel: loop1: detected capacity change from 0 to 100632
Apr 16 23:29:13.119494 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Apr 16 23:29:13.119506 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Apr 16 23:29:13.122148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:29:13.366572 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:29:13.375397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:29:13.403198 systemd-udevd[1461]: Using default interface naming scheme 'v255'.
Apr 16 23:29:13.478512 kernel: loop2: detected capacity change from 0 to 27936
Apr 16 23:29:13.635660 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:29:13.646931 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:29:13.696641 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:29:13.732970 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 16 23:29:13.765615 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:29:13.795506 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 23:29:13.819504 kernel: hv_vmbus: registering driver hv_balloon
Apr 16 23:29:13.831709 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 16 23:29:13.831763 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 16 23:29:13.851509 kernel: hv_vmbus: registering driver hyperv_fb
Apr 16 23:29:13.851576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#208 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 16 23:29:13.895562 systemd-networkd[1480]: lo: Link UP
Apr 16 23:29:13.895891 systemd-networkd[1480]: lo: Gained carrier
Apr 16 23:29:13.897421 systemd-networkd[1480]: Enumeration completed
Apr 16 23:29:13.897585 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:29:13.897940 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:29:13.898001 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:29:13.900760 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 16 23:29:13.900834 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 16 23:29:13.910248 kernel: Console: switching to colour dummy device 80x25
Apr 16 23:29:13.914706 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:29:13.919154 kernel: Console: switching to colour frame buffer device 128x48
Apr 16 23:29:13.933602 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:29:13.949580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:29:13.962399 kernel: loop3: detected capacity change from 0 to 119840
Apr 16 23:29:13.973515 kernel: mlx5_core 51f6:00:02.0 enP20982s1: Link up
Apr 16 23:29:13.976172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:29:13.976406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:29:13.982446 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:29:13.984283 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:29:14.007502 kernel: hv_netvsc 7ced8dd0-bb9d-7ced-8dd0-bb9d7ced8dd0 eth0: Data path switched to VF: enP20982s1
Apr 16 23:29:14.008598 systemd-networkd[1480]: enP20982s1: Link UP
Apr 16 23:29:14.008754 systemd-networkd[1480]: eth0: Link UP
Apr 16 23:29:14.008757 systemd-networkd[1480]: eth0: Gained carrier
Apr 16 23:29:14.008775 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:29:14.009308 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:29:14.020911 systemd-networkd[1480]: enP20982s1: Gained carrier
Apr 16 23:29:14.028527 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.6/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 16 23:29:14.048507 kernel: MACsec IEEE 802.1AE
Apr 16 23:29:14.194410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 16 23:29:14.200468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:29:14.255548 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:29:14.311506 kernel: loop4: detected capacity change from 0 to 200864
Apr 16 23:29:14.330522 kernel: loop5: detected capacity change from 0 to 100632
Apr 16 23:29:14.347502 kernel: loop6: detected capacity change from 0 to 27936
Apr 16 23:29:14.367505 kernel: loop7: detected capacity change from 0 to 119840
Apr 16 23:29:14.376053 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 16 23:29:14.376402 (sd-merge)[1607]: Merged extensions into '/usr'.
Apr 16 23:29:14.379457 systemd[1]: Reload requested from client PID 1439 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:29:14.379557 systemd[1]: Reloading...
Apr 16 23:29:14.426524 zram_generator::config[1638]: No configuration found.
Apr 16 23:29:14.604524 systemd[1]: Reloading finished in 224 ms.
Apr 16 23:29:14.617328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:29:14.624015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:29:14.641317 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:29:14.646610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:29:14.668588 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:29:14.668598 systemd[1]: Reloading...
Apr 16 23:29:14.668762 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:29:14.668973 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:29:14.669210 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:29:14.669424 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:29:14.669921 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:29:14.670196 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Apr 16 23:29:14.670306 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Apr 16 23:29:14.712418 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:29:14.712693 systemd-tmpfiles[1696]: Skipping /boot
Apr 16 23:29:14.718508 zram_generator::config[1723]: No configuration found.
Apr 16 23:29:14.720241 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:29:14.720254 systemd-tmpfiles[1696]: Skipping /boot
Apr 16 23:29:14.871789 systemd[1]: Reloading finished in 202 ms.
Apr 16 23:29:14.891335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:29:14.900875 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:29:14.919040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:29:14.932659 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:29:14.938382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:29:14.945733 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:29:14.954075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:29:14.957547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:29:14.964721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:29:14.972651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:29:14.976894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:29:14.976982 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:29:14.979128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:29:14.979325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:29:14.986848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:29:14.986969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:29:14.993181 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:29:14.993299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:29:15.003000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:29:15.004056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:29:15.011734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:29:15.018493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:29:15.023813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:29:15.023935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:29:15.028628 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 16 23:29:15.036273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:29:15.036384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:29:15.041945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:29:15.042052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:29:15.048351 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:29:15.048458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:29:15.053381 systemd-networkd[1480]: eth0: Gained IPv6LL Apr 16 23:29:15.057864 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 23:29:15.067360 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 23:29:15.076135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:29:15.077608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:29:15.084268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 23:29:15.093639 systemd-resolved[1787]: Positive Trust Anchors: Apr 16 23:29:15.093649 systemd-resolved[1787]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 23:29:15.093669 systemd-resolved[1787]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 23:29:15.095724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:29:15.104778 systemd-resolved[1787]: Using system hostname 'ci-4459.2.4-n-b3358a4beb'. Apr 16 23:29:15.106661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:29:15.111377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:29:15.111482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:29:15.111621 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 23:29:15.117613 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 23:29:15.122960 systemd[1]: Finished ensure-sysext.service. Apr 16 23:29:15.126814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:29:15.127030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:29:15.132394 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:29:15.132717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 16 23:29:15.138066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:29:15.138256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:29:15.143481 augenrules[1828]: No rules Apr 16 23:29:15.144065 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:29:15.144271 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:29:15.148931 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:29:15.149118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:29:15.158419 systemd[1]: Reached target network.target - Network. Apr 16 23:29:15.162290 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 23:29:15.166800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:29:15.171606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:29:15.171658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:29:15.502848 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 23:29:15.508400 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 23:29:18.270043 ldconfig[1434]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 23:29:18.285431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 23:29:18.293077 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 23:29:18.309518 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 16 23:29:18.314323 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 23:29:18.318982 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 23:29:18.323825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 23:29:18.328761 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 23:29:18.333131 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 23:29:18.338161 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 23:29:18.343015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 23:29:18.343035 systemd[1]: Reached target paths.target - Path Units. Apr 16 23:29:18.346732 systemd[1]: Reached target timers.target - Timer Units. Apr 16 23:29:18.368996 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 23:29:18.374410 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 23:29:18.379429 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 23:29:18.384631 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 23:29:18.389672 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 23:29:18.395239 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 23:29:18.399969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 23:29:18.405087 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 23:29:18.409402 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 23:29:18.413317 systemd[1]: Reached target basic.target - Basic System. 
Apr 16 23:29:18.417205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:29:18.417225 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:29:18.418957 systemd[1]: Starting chronyd.service - NTP client/server... Apr 16 23:29:18.429563 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 23:29:18.434618 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 16 23:29:18.440611 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 23:29:18.448601 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 23:29:18.461311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 23:29:18.466330 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 23:29:18.469501 jq[1853]: false Apr 16 23:29:18.470445 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 23:29:18.471784 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 16 23:29:18.477788 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 16 23:29:18.478568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:18.484342 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 23:29:18.489694 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 16 23:29:18.491192 KVP[1855]: KVP starting; pid is:1855 Apr 16 23:29:18.497195 chronyd[1845]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Apr 16 23:29:18.499888 kernel: hv_utils: KVP IC version 4.0 Apr 16 23:29:18.497621 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 23:29:18.499362 KVP[1855]: KVP LIC Version: 3.1 Apr 16 23:29:18.508358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 23:29:18.516266 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 23:29:18.520316 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring Apr 16 23:29:18.520425 chronyd[1845]: Loaded seccomp filter (level 2) Apr 16 23:29:18.523593 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 23:29:18.530832 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 23:29:18.531227 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 23:29:18.533795 extend-filesystems[1854]: Found /dev/sda6 Apr 16 23:29:18.536725 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 23:29:18.544664 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 23:29:18.550918 systemd[1]: Started chronyd.service - NTP client/server. Apr 16 23:29:18.555183 extend-filesystems[1854]: Found /dev/sda9 Apr 16 23:29:18.567126 extend-filesystems[1854]: Checking size of /dev/sda9 Apr 16 23:29:18.560131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 23:29:18.571042 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 16 23:29:18.571169 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 23:29:18.578808 jq[1879]: true Apr 16 23:29:18.571937 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 23:29:18.572058 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 23:29:18.581965 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 23:29:18.582106 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 23:29:18.590113 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:29:18.605939 extend-filesystems[1854]: Old size kept for /dev/sda9 Apr 16 23:29:18.609998 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 23:29:18.615648 jq[1890]: true Apr 16 23:29:18.611116 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 23:29:18.622499 update_engine[1873]: I20260416 23:29:18.620069 1873 main.cc:92] Flatcar Update Engine starting Apr 16 23:29:18.620954 (ntainerd)[1891]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 23:29:18.642061 systemd-logind[1868]: New seat seat0. Apr 16 23:29:18.643824 systemd-logind[1868]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 23:29:18.645573 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 23:29:18.668823 tar[1885]: linux-arm64/LICENSE Apr 16 23:29:18.668823 tar[1885]: linux-arm64/helm Apr 16 23:29:18.719522 bash[1930]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:29:18.722512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 23:29:18.729168 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 16 23:29:18.758354 dbus-daemon[1848]: [system] SELinux support is enabled Apr 16 23:29:18.758686 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 23:29:18.767681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 23:29:18.767707 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 23:29:18.775663 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 23:29:18.775680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 23:29:18.788207 update_engine[1873]: I20260416 23:29:18.781251 1873 update_check_scheduler.cc:74] Next update check in 2m22s Apr 16 23:29:18.783921 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 23:29:18.783883 systemd[1]: Started update-engine.service - Update Engine. Apr 16 23:29:18.798856 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 23:29:18.838038 sshd_keygen[1883]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 23:29:18.892253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 23:29:18.898394 coreos-metadata[1847]: Apr 16 23:29:18.898 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 16 23:29:18.902450 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 16 23:29:18.907537 coreos-metadata[1847]: Apr 16 23:29:18.907 INFO Fetch successful Apr 16 23:29:18.907537 coreos-metadata[1847]: Apr 16 23:29:18.907 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 16 23:29:18.912022 coreos-metadata[1847]: Apr 16 23:29:18.911 INFO Fetch successful Apr 16 23:29:18.912129 coreos-metadata[1847]: Apr 16 23:29:18.912 INFO Fetching http://168.63.129.16/machine/18e51685-a96e-4778-8ed2-18f4c8513ab5/ae4e7f23%2D85cb%2D4112%2Db3c7%2D352e05bfd065.%5Fci%2D4459.2.4%2Dn%2Db3358a4beb?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 16 23:29:18.916702 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 16 23:29:18.924140 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:29:18.925788 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:29:18.946406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 23:29:18.946972 coreos-metadata[1847]: Apr 16 23:29:18.946 INFO Fetch successful Apr 16 23:29:18.947061 coreos-metadata[1847]: Apr 16 23:29:18.947 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 16 23:29:18.961207 coreos-metadata[1847]: Apr 16 23:29:18.960 INFO Fetch successful Apr 16 23:29:18.977464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 23:29:18.991237 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 23:29:19.002670 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 16 23:29:19.009972 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 23:29:19.015305 locksmithd[1957]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 23:29:19.018337 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 16 23:29:19.033369 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Apr 16 23:29:19.039031 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 23:29:19.171585 tar[1885]: linux-arm64/README.md Apr 16 23:29:19.181356 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 23:29:19.340820 containerd[1891]: time="2026-04-16T23:29:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 23:29:19.341438 containerd[1891]: time="2026-04-16T23:29:19.341408660Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348514524Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.72µs" Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348537788Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348551892Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348665084Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348676300Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348690908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348722444Z" level=info msg="skip loading plugin" error="no scratch file generator: 
skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348748 containerd[1891]: time="2026-04-16T23:29:19.348729172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348890 containerd[1891]: time="2026-04-16T23:29:19.348874556Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348890 containerd[1891]: time="2026-04-16T23:29:19.348884180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348913 containerd[1891]: time="2026-04-16T23:29:19.348890916Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:29:19.348913 containerd[1891]: time="2026-04-16T23:29:19.348896532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 23:29:19.349345 containerd[1891]: time="2026-04-16T23:29:19.348943276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 23:29:19.349345 containerd[1891]: time="2026-04-16T23:29:19.349280108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:29:19.349345 containerd[1891]: time="2026-04-16T23:29:19.349306076Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:29:19.349345 containerd[1891]: time="2026-04-16T23:29:19.349312756Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 23:29:19.349345 containerd[1891]: time="2026-04-16T23:29:19.349337908Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 23:29:19.349520 containerd[1891]: time="2026-04-16T23:29:19.349505068Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 23:29:19.349572 containerd[1891]: time="2026-04-16T23:29:19.349560900Z" level=info msg="metadata content store policy set" policy=shared Apr 16 23:29:19.368440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:19.380573 containerd[1891]: time="2026-04-16T23:29:19.380546764Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 23:29:19.380633 containerd[1891]: time="2026-04-16T23:29:19.380593244Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 23:29:19.380633 containerd[1891]: time="2026-04-16T23:29:19.380603652Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 23:29:19.380633 containerd[1891]: time="2026-04-16T23:29:19.380611844Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 23:29:19.380633 containerd[1891]: time="2026-04-16T23:29:19.380619572Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 23:29:19.380633 containerd[1891]: time="2026-04-16T23:29:19.380627180Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 23:29:19.380694 containerd[1891]: time="2026-04-16T23:29:19.380636772Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 23:29:19.380694 containerd[1891]: 
time="2026-04-16T23:29:19.380644540Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 23:29:19.380694 containerd[1891]: time="2026-04-16T23:29:19.380652524Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 23:29:19.380694 containerd[1891]: time="2026-04-16T23:29:19.380662460Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 23:29:19.380694 containerd[1891]: time="2026-04-16T23:29:19.380668108Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 23:29:19.380694 containerd[1891]: time="2026-04-16T23:29:19.380676884Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 23:29:19.380780 containerd[1891]: time="2026-04-16T23:29:19.380766004Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 23:29:19.380804 containerd[1891]: time="2026-04-16T23:29:19.380786492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 23:29:19.380804 containerd[1891]: time="2026-04-16T23:29:19.380796260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 23:29:19.380804 containerd[1891]: time="2026-04-16T23:29:19.380803748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380810644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380817996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380824948Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380831452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380841628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380848356Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 23:29:19.380854 containerd[1891]: time="2026-04-16T23:29:19.380854884Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 23:29:19.380958 containerd[1891]: time="2026-04-16T23:29:19.380892884Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 23:29:19.380958 containerd[1891]: time="2026-04-16T23:29:19.380902220Z" level=info msg="Start snapshots syncer" Apr 16 23:29:19.380958 containerd[1891]: time="2026-04-16T23:29:19.380917836Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 23:29:19.381121 containerd[1891]: time="2026-04-16T23:29:19.381093796Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 23:29:19.381219 containerd[1891]: time="2026-04-16T23:29:19.381133756Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 23:29:19.381219 containerd[1891]: time="2026-04-16T23:29:19.381166916Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 23:29:19.381257 containerd[1891]: time="2026-04-16T23:29:19.381242692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 23:29:19.381282 containerd[1891]: time="2026-04-16T23:29:19.381261156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 23:29:19.381282 containerd[1891]: time="2026-04-16T23:29:19.381268420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 23:29:19.381282 containerd[1891]: time="2026-04-16T23:29:19.381275572Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381282884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381289876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381296460Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381312156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381321572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 23:29:19.381336 containerd[1891]: time="2026-04-16T23:29:19.381332068Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381350292Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381359316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381364660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381370388Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381375260Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 23:29:19.381430 containerd[1891]: time="2026-04-16T23:29:19.381380828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:29:19.381674 containerd[1891]: time="2026-04-16T23:29:19.381655524Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:29:19.382481 containerd[1891]: time="2026-04-16T23:29:19.382416796Z" level=info msg="runtime interface created" Apr 16 23:29:19.382481 containerd[1891]: time="2026-04-16T23:29:19.382440684Z" level=info msg="created NRI interface" Apr 16 23:29:19.382647 containerd[1891]: time="2026-04-16T23:29:19.382553092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:29:19.382775 containerd[1891]: time="2026-04-16T23:29:19.382709860Z" level=info msg="Connect containerd service" Apr 16 23:29:19.382775 containerd[1891]: time="2026-04-16T23:29:19.382737900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:29:19.383402 
containerd[1891]: time="2026-04-16T23:29:19.383385172Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:29:19.408226 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:19.704582 kubelet[2038]: E0416 23:29:19.704466 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:19.706580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:19.706681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:19.706955 systemd[1]: kubelet.service: Consumed 487ms CPU time, 248.3M memory peak. Apr 16 23:29:19.913522 containerd[1891]: time="2026-04-16T23:29:19.913395764Z" level=info msg="Start subscribing containerd event" Apr 16 23:29:19.913522 containerd[1891]: time="2026-04-16T23:29:19.913463852Z" level=info msg="Start recovering state" Apr 16 23:29:19.913619 containerd[1891]: time="2026-04-16T23:29:19.913560268Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:29:19.913619 containerd[1891]: time="2026-04-16T23:29:19.913602220Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 16 23:29:19.913719 containerd[1891]: time="2026-04-16T23:29:19.913705700Z" level=info msg="Start event monitor" Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913806092Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913820652Z" level=info msg="Start streaming server" Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913831508Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913837884Z" level=info msg="runtime interface starting up..." Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913841844Z" level=info msg="starting plugins..." Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913855884Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:29:19.914068 containerd[1891]: time="2026-04-16T23:29:19.913960060Z" level=info msg="containerd successfully booted in 0.573645s" Apr 16 23:29:19.914072 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 23:29:19.921388 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:29:19.930665 systemd[1]: Startup finished in 1.623s (kernel) + 12.352s (initrd) + 11.800s (userspace) = 25.776s. Apr 16 23:29:20.213880 login[2014]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Apr 16 23:29:20.215464 login[2015]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:20.223323 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:29:20.224096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:29:20.226614 systemd-logind[1868]: New session 1 of user core. 
Apr 16 23:29:20.258023 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:29:20.259093 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:29:20.272017 (systemd)[2064]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:29:20.273858 systemd-logind[1868]: New session c1 of user core. Apr 16 23:29:20.372826 systemd[2064]: Queued start job for default target default.target. Apr 16 23:29:20.379153 systemd[2064]: Created slice app.slice - User Application Slice. Apr 16 23:29:20.379264 systemd[2064]: Reached target paths.target - Paths. Apr 16 23:29:20.379306 systemd[2064]: Reached target timers.target - Timers. Apr 16 23:29:20.380285 systemd[2064]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:29:20.387416 systemd[2064]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:29:20.387562 systemd[2064]: Reached target sockets.target - Sockets. Apr 16 23:29:20.387656 systemd[2064]: Reached target basic.target - Basic System. Apr 16 23:29:20.387741 systemd[2064]: Reached target default.target - Main User Target. Apr 16 23:29:20.387775 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:29:20.387949 systemd[2064]: Startup finished in 109ms. Apr 16 23:29:20.392601 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 16 23:29:20.756361 waagent[2018]: 2026-04-16T23:29:20.756297Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Apr 16 23:29:20.761498 waagent[2018]: 2026-04-16T23:29:20.760734Z INFO Daemon Daemon OS: flatcar 4459.2.4 Apr 16 23:29:20.763977 waagent[2018]: 2026-04-16T23:29:20.763949Z INFO Daemon Daemon Python: 3.11.13 Apr 16 23:29:20.767207 waagent[2018]: 2026-04-16T23:29:20.767173Z INFO Daemon Daemon Run daemon Apr 16 23:29:20.770231 waagent[2018]: 2026-04-16T23:29:20.770202Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Apr 16 23:29:20.776735 waagent[2018]: 2026-04-16T23:29:20.776581Z INFO Daemon Daemon Using waagent for provisioning Apr 16 23:29:20.780424 waagent[2018]: 2026-04-16T23:29:20.780392Z INFO Daemon Daemon Activate resource disk Apr 16 23:29:20.783902 waagent[2018]: 2026-04-16T23:29:20.783875Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 16 23:29:20.791888 waagent[2018]: 2026-04-16T23:29:20.791854Z INFO Daemon Daemon Found device: None Apr 16 23:29:20.795113 waagent[2018]: 2026-04-16T23:29:20.795086Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 16 23:29:20.801209 waagent[2018]: 2026-04-16T23:29:20.801186Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 16 23:29:20.809585 waagent[2018]: 2026-04-16T23:29:20.809538Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 16 23:29:20.813698 waagent[2018]: 2026-04-16T23:29:20.813672Z INFO Daemon Daemon Running default provisioning handler Apr 16 23:29:20.822527 waagent[2018]: 2026-04-16T23:29:20.822493Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 16 23:29:20.832630 waagent[2018]: 2026-04-16T23:29:20.832595Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 16 23:29:20.839560 waagent[2018]: 2026-04-16T23:29:20.839534Z INFO Daemon Daemon cloud-init is enabled: False Apr 16 23:29:20.843072 waagent[2018]: 2026-04-16T23:29:20.843052Z INFO Daemon Daemon Copying ovf-env.xml Apr 16 23:29:20.903621 waagent[2018]: 2026-04-16T23:29:20.903496Z INFO Daemon Daemon Successfully mounted dvd Apr 16 23:29:20.928677 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 16 23:29:20.930723 waagent[2018]: 2026-04-16T23:29:20.930683Z INFO Daemon Daemon Detect protocol endpoint Apr 16 23:29:20.934302 waagent[2018]: 2026-04-16T23:29:20.934271Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 16 23:29:20.938423 waagent[2018]: 2026-04-16T23:29:20.938399Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 16 23:29:20.943474 waagent[2018]: 2026-04-16T23:29:20.943450Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 16 23:29:20.947325 waagent[2018]: 2026-04-16T23:29:20.947298Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 16 23:29:20.951012 waagent[2018]: 2026-04-16T23:29:20.950989Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 16 23:29:21.002203 waagent[2018]: 2026-04-16T23:29:21.002166Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 16 23:29:21.007175 waagent[2018]: 2026-04-16T23:29:21.007126Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 16 23:29:21.011060 waagent[2018]: 2026-04-16T23:29:21.011038Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 16 23:29:21.107471 waagent[2018]: 2026-04-16T23:29:21.107414Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 16 23:29:21.112192 waagent[2018]: 2026-04-16T23:29:21.112164Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 16 23:29:21.119027 waagent[2018]: 2026-04-16T23:29:21.118995Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 16 23:29:21.137448 waagent[2018]: 2026-04-16T23:29:21.137416Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Apr 16 23:29:21.142037 waagent[2018]: 2026-04-16T23:29:21.142006Z INFO Daemon Apr 16 23:29:21.144593 waagent[2018]: 2026-04-16T23:29:21.144566Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 48ce09cd-0f57-4249-ad8f-bdb93d2edc06 eTag: 16355803719780943705 source: Fabric] Apr 16 23:29:21.153228 waagent[2018]: 2026-04-16T23:29:21.153198Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 16 23:29:21.158059 waagent[2018]: 2026-04-16T23:29:21.158031Z INFO Daemon Apr 16 23:29:21.160204 waagent[2018]: 2026-04-16T23:29:21.160178Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 16 23:29:21.168326 waagent[2018]: 2026-04-16T23:29:21.168300Z INFO Daemon Daemon Downloading artifacts profile blob Apr 16 23:29:21.214206 login[2014]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:21.217738 systemd-logind[1868]: New session 2 of user core. Apr 16 23:29:21.227585 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:29:21.284075 waagent[2018]: 2026-04-16T23:29:21.283998Z INFO Daemon Downloaded certificate {'thumbprint': 'DF3D2576F1B4F1EF9AE2E9B14517C987D563308A', 'hasPrivateKey': True} Apr 16 23:29:21.291401 waagent[2018]: 2026-04-16T23:29:21.291368Z INFO Daemon Fetch goal state completed Apr 16 23:29:21.329648 waagent[2018]: 2026-04-16T23:29:21.329616Z INFO Daemon Daemon Starting provisioning Apr 16 23:29:21.333827 waagent[2018]: 2026-04-16T23:29:21.333796Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 16 23:29:21.337637 waagent[2018]: 2026-04-16T23:29:21.337613Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-b3358a4beb] Apr 16 23:29:21.343094 waagent[2018]: 2026-04-16T23:29:21.343054Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-b3358a4beb] Apr 16 23:29:21.347508 waagent[2018]: 2026-04-16T23:29:21.347464Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 16 23:29:21.352017 waagent[2018]: 2026-04-16T23:29:21.351988Z INFO Daemon Daemon Primary interface is [eth0] Apr 16 23:29:21.361047 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:29:21.361053 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:29:21.361076 systemd-networkd[1480]: eth0: DHCP lease lost Apr 16 23:29:21.362094 waagent[2018]: 2026-04-16T23:29:21.362031Z INFO Daemon Daemon Create user account if not exists Apr 16 23:29:21.366553 waagent[2018]: 2026-04-16T23:29:21.366511Z INFO Daemon Daemon User core already exists, skip useradd Apr 16 23:29:21.371121 waagent[2018]: 2026-04-16T23:29:21.371094Z INFO Daemon Daemon Configure sudoer Apr 16 23:29:21.374461 waagent[2018]: 2026-04-16T23:29:21.374428Z INFO Daemon Daemon Configure sshd Apr 16 23:29:21.377771 waagent[2018]: 2026-04-16T23:29:21.377732Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 16 23:29:21.387434 waagent[2018]: 2026-04-16T23:29:21.387396Z INFO Daemon Daemon Deploy ssh public key. 
Apr 16 23:29:21.402524 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.6/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 16 23:29:22.512095 waagent[2018]: 2026-04-16T23:29:22.512050Z INFO Daemon Daemon Provisioning complete Apr 16 23:29:22.525160 waagent[2018]: 2026-04-16T23:29:22.525126Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 16 23:29:22.529999 waagent[2018]: 2026-04-16T23:29:22.529969Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 16 23:29:22.537560 waagent[2018]: 2026-04-16T23:29:22.537533Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Apr 16 23:29:22.633524 waagent[2114]: 2026-04-16T23:29:22.632664Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Apr 16 23:29:22.633524 waagent[2114]: 2026-04-16T23:29:22.632766Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Apr 16 23:29:22.633524 waagent[2114]: 2026-04-16T23:29:22.632804Z INFO ExtHandler ExtHandler Python: 3.11.13 Apr 16 23:29:22.633524 waagent[2114]: 2026-04-16T23:29:22.632838Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Apr 16 23:29:22.666155 waagent[2114]: 2026-04-16T23:29:22.666118Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Apr 16 23:29:22.666413 waagent[2114]: 2026-04-16T23:29:22.666385Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 16 23:29:22.666548 waagent[2114]: 2026-04-16T23:29:22.666520Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 16 23:29:22.671714 waagent[2114]: 2026-04-16T23:29:22.671674Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 16 23:29:22.675980 waagent[2114]: 2026-04-16T23:29:22.675951Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Apr 16 
23:29:22.676373 waagent[2114]: 2026-04-16T23:29:22.676346Z INFO ExtHandler Apr 16 23:29:22.676533 waagent[2114]: 2026-04-16T23:29:22.676506Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: dc89d6aa-5c9a-4a97-89ed-e36a243570fd eTag: 16355803719780943705 source: Fabric] Apr 16 23:29:22.676902 waagent[2114]: 2026-04-16T23:29:22.676874Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 16 23:29:22.677375 waagent[2114]: 2026-04-16T23:29:22.677347Z INFO ExtHandler Apr 16 23:29:22.677522 waagent[2114]: 2026-04-16T23:29:22.677466Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 16 23:29:22.681840 waagent[2114]: 2026-04-16T23:29:22.681813Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 16 23:29:22.734525 waagent[2114]: 2026-04-16T23:29:22.734122Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DF3D2576F1B4F1EF9AE2E9B14517C987D563308A', 'hasPrivateKey': True} Apr 16 23:29:22.734616 waagent[2114]: 2026-04-16T23:29:22.734575Z INFO ExtHandler Fetch goal state completed Apr 16 23:29:22.745237 waagent[2114]: 2026-04-16T23:29:22.745196Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Apr 16 23:29:22.748248 waagent[2114]: 2026-04-16T23:29:22.748207Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2114 Apr 16 23:29:22.748337 waagent[2114]: 2026-04-16T23:29:22.748311Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 16 23:29:22.748588 waagent[2114]: 2026-04-16T23:29:22.748561Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Apr 16 23:29:22.749595 waagent[2114]: 2026-04-16T23:29:22.749565Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Apr 16 23:29:22.749892 waagent[2114]: 
2026-04-16T23:29:22.749865Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Apr 16 23:29:22.749991 waagent[2114]: 2026-04-16T23:29:22.749970Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Apr 16 23:29:22.750377 waagent[2114]: 2026-04-16T23:29:22.750349Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 16 23:29:22.832036 waagent[2114]: 2026-04-16T23:29:22.832008Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 16 23:29:22.832163 waagent[2114]: 2026-04-16T23:29:22.832137Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 16 23:29:22.836432 waagent[2114]: 2026-04-16T23:29:22.836397Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 16 23:29:22.840578 systemd[1]: Reload requested from client PID 2129 ('systemctl') (unit waagent.service)... Apr 16 23:29:22.840591 systemd[1]: Reloading... Apr 16 23:29:22.902540 zram_generator::config[2171]: No configuration found. Apr 16 23:29:23.047829 systemd[1]: Reloading finished in 207 ms. Apr 16 23:29:23.058506 waagent[2114]: 2026-04-16T23:29:23.057205Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 16 23:29:23.058506 waagent[2114]: 2026-04-16T23:29:23.057331Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 16 23:29:23.536226 waagent[2114]: 2026-04-16T23:29:23.535481Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 16 23:29:23.536226 waagent[2114]: 2026-04-16T23:29:23.535795Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Apr 16 23:29:23.536479 waagent[2114]: 2026-04-16T23:29:23.536440Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 16 23:29:23.536542 waagent[2114]: 2026-04-16T23:29:23.536504Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 16 23:29:23.536631 waagent[2114]: 2026-04-16T23:29:23.536604Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 16 23:29:23.536850 waagent[2114]: 2026-04-16T23:29:23.536818Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 16 23:29:23.537106 waagent[2114]: 2026-04-16T23:29:23.537073Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 16 23:29:23.537225 waagent[2114]: 2026-04-16T23:29:23.537195Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 16 23:29:23.537225 waagent[2114]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 16 23:29:23.537225 waagent[2114]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 16 23:29:23.537225 waagent[2114]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 16 23:29:23.537225 waagent[2114]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 16 23:29:23.537225 waagent[2114]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 16 23:29:23.537225 waagent[2114]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 16 23:29:23.537709 waagent[2114]: 2026-04-16T23:29:23.537670Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 16 23:29:23.537813 waagent[2114]: 2026-04-16T23:29:23.537775Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Apr 16 23:29:23.537910 waagent[2114]: 2026-04-16T23:29:23.537885Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 16 23:29:23.537953 waagent[2114]: 2026-04-16T23:29:23.537936Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 16 23:29:23.538064 waagent[2114]: 2026-04-16T23:29:23.538039Z INFO EnvHandler ExtHandler Configure routes Apr 16 23:29:23.538460 waagent[2114]: 2026-04-16T23:29:23.538429Z INFO EnvHandler ExtHandler Gateway:None Apr 16 23:29:23.538518 waagent[2114]: 2026-04-16T23:29:23.538494Z INFO EnvHandler ExtHandler Routes:None Apr 16 23:29:23.538691 waagent[2114]: 2026-04-16T23:29:23.538658Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 16 23:29:23.538895 waagent[2114]: 2026-04-16T23:29:23.538862Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 16 23:29:23.539504 waagent[2114]: 2026-04-16T23:29:23.539376Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 16 23:29:23.545055 waagent[2114]: 2026-04-16T23:29:23.545019Z INFO ExtHandler ExtHandler Apr 16 23:29:23.545097 waagent[2114]: 2026-04-16T23:29:23.545084Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 35a37fec-27f7-46de-8485-e038d2b4c9eb correlation c20748e5-76b1-44c3-ac38-df71d6a0694e created: 2026-04-16T23:28:24.434776Z] Apr 16 23:29:23.545349 waagent[2114]: 2026-04-16T23:29:23.545320Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 16 23:29:23.545739 waagent[2114]: 2026-04-16T23:29:23.545715Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Apr 16 23:29:23.568361 waagent[2114]: 2026-04-16T23:29:23.568127Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Apr 16 23:29:23.568361 waagent[2114]: Try `iptables -h' or 'iptables --help' for more information.) Apr 16 23:29:23.568448 waagent[2114]: 2026-04-16T23:29:23.568395Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C3A6E5E4-3E5A-4BDF-B705-723BD4C3557B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Apr 16 23:29:23.600144 waagent[2114]: 2026-04-16T23:29:23.600111Z INFO MonitorHandler ExtHandler Network interfaces: Apr 16 23:29:23.600144 waagent[2114]: Executing ['ip', '-a', '-o', 'link']: Apr 16 23:29:23.600144 waagent[2114]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 16 23:29:23.600144 waagent[2114]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:bb:9d brd ff:ff:ff:ff:ff:ff Apr 16 23:29:23.600144 waagent[2114]: 3: enP20982s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:bb:9d brd ff:ff:ff:ff:ff:ff\ altname enP20982p0s2 Apr 16 23:29:23.600144 waagent[2114]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 16 23:29:23.600144 waagent[2114]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 16 23:29:23.600144 waagent[2114]: 2: eth0 inet 10.0.0.6/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 16 23:29:23.600144 waagent[2114]: Executing ['ip', '-6', '-a', '-o', 
'address']: Apr 16 23:29:23.600144 waagent[2114]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 16 23:29:23.600144 waagent[2114]: 2: eth0 inet6 fe80::7eed:8dff:fed0:bb9d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 16 23:29:23.667318 waagent[2114]: 2026-04-16T23:29:23.667274Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Apr 16 23:29:23.667318 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.667318 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.667318 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.667318 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.667318 waagent[2114]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.667318 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.667318 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 16 23:29:23.667318 waagent[2114]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 16 23:29:23.667318 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 16 23:29:23.670170 waagent[2114]: 2026-04-16T23:29:23.670127Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 16 23:29:23.670170 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.670170 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.670170 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.670170 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.670170 waagent[2114]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 16 23:29:23.670170 waagent[2114]: pkts bytes target prot opt in out source destination Apr 16 23:29:23.670170 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 
Apr 16 23:29:23.670170 waagent[2114]: 14 1463 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 16 23:29:23.670170 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 16 23:29:23.670320 waagent[2114]: 2026-04-16T23:29:23.670308Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 16 23:29:29.808522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:29:29.810194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:29.917142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:29.919847 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:30.040696 kubelet[2263]: E0416 23:29:30.040657 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:30.043097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:30.043200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:30.044564 systemd[1]: kubelet.service: Consumed 104ms CPU time, 105.7M memory peak. Apr 16 23:29:40.058546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 23:29:40.059747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:40.149308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:29:40.161671 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:40.268448 kubelet[2278]: E0416 23:29:40.268388 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:40.270386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:40.270596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:40.272554 systemd[1]: kubelet.service: Consumed 99ms CPU time, 107.2M memory peak. Apr 16 23:29:42.306239 chronyd[1845]: Selected source PHC0 Apr 16 23:29:43.180566 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 23:29:43.181778 systemd[1]: Started sshd@0-10.0.0.6:22-20.229.252.112:38212.service - OpenSSH per-connection server daemon (20.229.252.112:38212). Apr 16 23:29:44.110521 sshd[2286]: Accepted publickey for core from 20.229.252.112 port 38212 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ Apr 16 23:29:44.111281 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:44.114551 systemd-logind[1868]: New session 3 of user core. Apr 16 23:29:44.121770 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:29:44.711253 systemd[1]: Started sshd@1-10.0.0.6:22-20.229.252.112:38222.service - OpenSSH per-connection server daemon (20.229.252.112:38222). 
Apr 16 23:29:45.481234 sshd[2292]: Accepted publickey for core from 20.229.252.112 port 38222 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ Apr 16 23:29:45.484279 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:45.487808 systemd-logind[1868]: New session 4 of user core. Apr 16 23:29:45.494600 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 23:29:45.928921 sshd[2295]: Connection closed by 20.229.252.112 port 38222 Apr 16 23:29:45.928044 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Apr 16 23:29:45.930658 systemd[1]: sshd@1-10.0.0.6:22-20.229.252.112:38222.service: Deactivated successfully. Apr 16 23:29:45.931906 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:29:45.932644 systemd-logind[1868]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:29:45.934247 systemd-logind[1868]: Removed session 4. Apr 16 23:29:46.087671 systemd[1]: Started sshd@2-10.0.0.6:22-20.229.252.112:43690.service - OpenSSH per-connection server daemon (20.229.252.112:43690). Apr 16 23:29:46.855387 sshd[2301]: Accepted publickey for core from 20.229.252.112 port 43690 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ Apr 16 23:29:46.856081 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:46.859309 systemd-logind[1868]: New session 5 of user core. Apr 16 23:29:46.869757 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 23:29:47.297439 sshd[2304]: Connection closed by 20.229.252.112 port 43690 Apr 16 23:29:47.297287 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Apr 16 23:29:47.300201 systemd-logind[1868]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:29:47.300334 systemd[1]: sshd@2-10.0.0.6:22-20.229.252.112:43690.service: Deactivated successfully. 
Apr 16 23:29:47.301648 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:29:47.303249 systemd-logind[1868]: Removed session 5. Apr 16 23:29:47.453974 systemd[1]: Started sshd@3-10.0.0.6:22-20.229.252.112:43700.service - OpenSSH per-connection server daemon (20.229.252.112:43700). Apr 16 23:29:48.226736 sshd[2310]: Accepted publickey for core from 20.229.252.112 port 43700 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ Apr 16 23:29:48.227794 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:48.231397 systemd-logind[1868]: New session 6 of user core. Apr 16 23:29:48.235587 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 23:29:48.672849 sshd[2313]: Connection closed by 20.229.252.112 port 43700 Apr 16 23:29:48.673404 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Apr 16 23:29:48.676330 systemd[1]: sshd@3-10.0.0.6:22-20.229.252.112:43700.service: Deactivated successfully. Apr 16 23:29:48.677647 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 23:29:48.678876 systemd-logind[1868]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:29:48.679998 systemd-logind[1868]: Removed session 6. Apr 16 23:29:48.829924 systemd[1]: Started sshd@4-10.0.0.6:22-20.229.252.112:43710.service - OpenSSH per-connection server daemon (20.229.252.112:43710). Apr 16 23:29:49.601795 sshd[2319]: Accepted publickey for core from 20.229.252.112 port 43710 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ Apr 16 23:29:49.602811 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:49.606328 systemd-logind[1868]: New session 7 of user core. Apr 16 23:29:49.615596 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 16 23:29:50.009105 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 23:29:50.009329 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:50.033123 sudo[2323]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:50.180947 sshd[2322]: Connection closed by 20.229.252.112 port 43710
Apr 16 23:29:50.181685 sshd-session[2319]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:50.184740 systemd[1]: sshd@4-10.0.0.6:22-20.229.252.112:43710.service: Deactivated successfully.
Apr 16 23:29:50.186213 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 23:29:50.186829 systemd-logind[1868]: Session 7 logged out. Waiting for processes to exit.
Apr 16 23:29:50.187824 systemd-logind[1868]: Removed session 7.
Apr 16 23:29:50.308722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 16 23:29:50.310142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:29:50.335666 systemd[1]: Started sshd@5-10.0.0.6:22-20.229.252.112:43724.service - OpenSSH per-connection server daemon (20.229.252.112:43724).
Apr 16 23:29:50.659765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:29:50.662721 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:29:50.687633 kubelet[2340]: E0416 23:29:50.687590 2340 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:29:50.689373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:29:50.689477 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:29:50.691588 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.7M memory peak.
Apr 16 23:29:51.083912 sshd[2332]: Accepted publickey for core from 20.229.252.112 port 43724 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ
Apr 16 23:29:51.084348 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:51.087874 systemd-logind[1868]: New session 8 of user core.
Apr 16 23:29:51.095775 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 16 23:29:51.372758 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 23:29:51.372962 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:51.375797 sudo[2349]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:51.379090 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 16 23:29:51.379275 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:51.385609 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:29:51.413525 augenrules[2371]: No rules
Apr 16 23:29:51.414603 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:29:51.414768 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:29:51.415659 sudo[2348]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:51.559463 sshd[2347]: Connection closed by 20.229.252.112 port 43724
Apr 16 23:29:51.559374 sshd-session[2332]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:51.563780 systemd-logind[1868]: Session 8 logged out. Waiting for processes to exit.
Apr 16 23:29:51.564346 systemd[1]: sshd@5-10.0.0.6:22-20.229.252.112:43724.service: Deactivated successfully.
Apr 16 23:29:51.565650 systemd[1]: session-8.scope: Deactivated successfully.
Apr 16 23:29:51.567521 systemd-logind[1868]: Removed session 8.
Apr 16 23:29:51.714687 systemd[1]: Started sshd@6-10.0.0.6:22-20.229.252.112:43732.service - OpenSSH per-connection server daemon (20.229.252.112:43732).
Apr 16 23:29:52.468167 sshd[2380]: Accepted publickey for core from 20.229.252.112 port 43732 ssh2: RSA SHA256:fvHlpmSRr9xlBaioEm1WY33AfqcjA/cHUSgMzJZoCbQ
Apr 16 23:29:52.469210 sshd-session[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:52.473020 systemd-logind[1868]: New session 9 of user core.
Apr 16 23:29:52.474601 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 16 23:29:52.759175 sudo[2384]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 23:29:52.759384 sudo[2384]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:54.574606 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 23:29:54.585718 (dockerd)[2402]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 23:29:55.534139 dockerd[2402]: time="2026-04-16T23:29:55.533893301Z" level=info msg="Starting up"
Apr 16 23:29:55.539295 dockerd[2402]: time="2026-04-16T23:29:55.539273251Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 16 23:29:55.546454 dockerd[2402]: time="2026-04-16T23:29:55.546420314Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 16 23:29:55.578532 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3844546377-merged.mount: Deactivated successfully.
Apr 16 23:29:55.670627 dockerd[2402]: time="2026-04-16T23:29:55.670598976Z" level=info msg="Loading containers: start."
Apr 16 23:29:55.683516 kernel: Initializing XFRM netlink socket
Apr 16 23:29:56.005925 systemd-networkd[1480]: docker0: Link UP
Apr 16 23:29:56.029043 dockerd[2402]: time="2026-04-16T23:29:56.028614926Z" level=info msg="Loading containers: done."
Apr 16 23:29:56.051947 dockerd[2402]: time="2026-04-16T23:29:56.051915656Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 23:29:56.052135 dockerd[2402]: time="2026-04-16T23:29:56.052118981Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 23:29:56.052308 dockerd[2402]: time="2026-04-16T23:29:56.052258544Z" level=info msg="Initializing buildkit"
Apr 16 23:29:56.102460 dockerd[2402]: time="2026-04-16T23:29:56.102430696Z" level=info msg="Completed buildkit initialization"
Apr 16 23:29:56.106254 dockerd[2402]: time="2026-04-16T23:29:56.106228289Z" level=info msg="Daemon has completed initialization"
Apr 16 23:29:56.106415 dockerd[2402]: time="2026-04-16T23:29:56.106366380Z" level=info msg="API listen on /run/docker.sock"
Apr 16 23:29:56.108818 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 23:29:56.470859 containerd[1891]: time="2026-04-16T23:29:56.470819144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 16 23:29:56.575589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1548236813-merged.mount: Deactivated successfully.
Apr 16 23:29:57.327510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323953151.mount: Deactivated successfully.
Apr 16 23:29:58.463250 containerd[1891]: time="2026-04-16T23:29:58.463193279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:58.470333 containerd[1891]: time="2026-04-16T23:29:58.470294978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193768"
Apr 16 23:29:58.473675 containerd[1891]: time="2026-04-16T23:29:58.473649171Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:58.478865 containerd[1891]: time="2026-04-16T23:29:58.478837801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:58.479416 containerd[1891]: time="2026-04-16T23:29:58.479395294Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 2.008540069s"
Apr 16 23:29:58.479434 containerd[1891]: time="2026-04-16T23:29:58.479425943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\""
Apr 16 23:29:58.480067 containerd[1891]: time="2026-04-16T23:29:58.480036102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 16 23:29:59.564194 containerd[1891]: time="2026-04-16T23:29:59.564136671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:59.568259 containerd[1891]: time="2026-04-16T23:29:59.568231090Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901444"
Apr 16 23:29:59.571854 containerd[1891]: time="2026-04-16T23:29:59.571831481Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:59.578416 containerd[1891]: time="2026-04-16T23:29:59.578380343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:59.578853 containerd[1891]: time="2026-04-16T23:29:59.578742728Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 1.098686458s"
Apr 16 23:29:59.578853 containerd[1891]: time="2026-04-16T23:29:59.578765760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\""
Apr 16 23:29:59.579248 containerd[1891]: time="2026-04-16T23:29:59.579204691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 16 23:30:00.808604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 23:30:00.810023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:30:00.909557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:30:00.918711 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:30:01.102438 kubelet[2678]: E0416 23:30:01.102295 2678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:30:01.104442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:30:01.104674 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:30:01.106567 systemd[1]: kubelet.service: Consumed 108ms CPU time, 104.9M memory peak.
Apr 16 23:30:01.598711 containerd[1891]: time="2026-04-16T23:30:01.598656504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:01.602507 containerd[1891]: time="2026-04-16T23:30:01.602317745Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047945"
Apr 16 23:30:01.606083 containerd[1891]: time="2026-04-16T23:30:01.606058851Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:01.610917 containerd[1891]: time="2026-04-16T23:30:01.610884295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:01.611782 containerd[1891]: time="2026-04-16T23:30:01.611577528Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 2.032350629s"
Apr 16 23:30:01.611782 containerd[1891]: time="2026-04-16T23:30:01.611602353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\""
Apr 16 23:30:01.612005 containerd[1891]: time="2026-04-16T23:30:01.611986338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 16 23:30:01.952207 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 16 23:30:03.100145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644615051.mount: Deactivated successfully.
Apr 16 23:30:03.316689 containerd[1891]: time="2026-04-16T23:30:03.316223569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:03.319600 containerd[1891]: time="2026-04-16T23:30:03.319576282Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606286"
Apr 16 23:30:03.324118 containerd[1891]: time="2026-04-16T23:30:03.324091919Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:03.327974 containerd[1891]: time="2026-04-16T23:30:03.327944204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:03.328372 containerd[1891]: time="2026-04-16T23:30:03.328347246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.716338836s"
Apr 16 23:30:03.328448 containerd[1891]: time="2026-04-16T23:30:03.328436432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\""
Apr 16 23:30:03.328861 containerd[1891]: time="2026-04-16T23:30:03.328835226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 16 23:30:04.015951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132881409.mount: Deactivated successfully.
Apr 16 23:30:04.157817 update_engine[1873]: I20260416 23:30:04.157752 1873 update_attempter.cc:509] Updating boot flags...
Apr 16 23:30:05.054346 containerd[1891]: time="2026-04-16T23:30:05.054301056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.057198 containerd[1891]: time="2026-04-16T23:30:05.057168550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Apr 16 23:30:05.061053 containerd[1891]: time="2026-04-16T23:30:05.061013538Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.065800 containerd[1891]: time="2026-04-16T23:30:05.065762333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.066405 containerd[1891]: time="2026-04-16T23:30:05.066243161Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.737383126s"
Apr 16 23:30:05.066405 containerd[1891]: time="2026-04-16T23:30:05.066271513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Apr 16 23:30:05.067015 containerd[1891]: time="2026-04-16T23:30:05.066992243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 23:30:05.662447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873026546.mount: Deactivated successfully.
Apr 16 23:30:05.686095 containerd[1891]: time="2026-04-16T23:30:05.686031389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.689499 containerd[1891]: time="2026-04-16T23:30:05.689446152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Apr 16 23:30:05.693676 containerd[1891]: time="2026-04-16T23:30:05.693624997Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.698143 containerd[1891]: time="2026-04-16T23:30:05.698090832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:05.698773 containerd[1891]: time="2026-04-16T23:30:05.698421584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 631.380861ms"
Apr 16 23:30:05.698773 containerd[1891]: time="2026-04-16T23:30:05.698451617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 16 23:30:05.698986 containerd[1891]: time="2026-04-16T23:30:05.698856603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 16 23:30:06.449235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323560309.mount: Deactivated successfully.
Apr 16 23:30:07.931649 containerd[1891]: time="2026-04-16T23:30:07.930993357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:07.935360 containerd[1891]: time="2026-04-16T23:30:07.935332299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139658"
Apr 16 23:30:07.941528 containerd[1891]: time="2026-04-16T23:30:07.941503389Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:07.951382 containerd[1891]: time="2026-04-16T23:30:07.951341991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:07.952103 containerd[1891]: time="2026-04-16T23:30:07.951903321Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.253019942s"
Apr 16 23:30:07.952103 containerd[1891]: time="2026-04-16T23:30:07.952016941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Apr 16 23:30:09.947058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:30:09.947177 systemd[1]: kubelet.service: Consumed 108ms CPU time, 104.9M memory peak.
Apr 16 23:30:09.948838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:30:09.974528 systemd[1]: Reload requested from client PID 2907 ('systemctl') (unit session-9.scope)...
Apr 16 23:30:09.974539 systemd[1]: Reloading...
Apr 16 23:30:10.057515 zram_generator::config[2960]: No configuration found.
Apr 16 23:30:10.218627 systemd[1]: Reloading finished in 243 ms.
Apr 16 23:30:10.258882 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:30:10.258941 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:30:10.259133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:30:10.259172 systemd[1]: kubelet.service: Consumed 77ms CPU time, 94.9M memory peak.
Apr 16 23:30:10.260324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:30:10.508088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:30:10.519797 (kubelet)[3021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:30:10.546400 kubelet[3021]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:30:10.546400 kubelet[3021]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:30:10.622839 kubelet[3021]: I0416 23:30:10.622762 3021 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:30:10.995640 kubelet[3021]: I0416 23:30:10.995602 3021 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 23:30:10.995774 kubelet[3021]: I0416 23:30:10.995766 3021 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:30:10.995853 kubelet[3021]: I0416 23:30:10.995847 3021 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 23:30:10.995888 kubelet[3021]: I0416 23:30:10.995881 3021 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:30:10.996118 kubelet[3021]: I0416 23:30:10.996105 3021 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:30:11.028416 kubelet[3021]: I0416 23:30:11.028370 3021 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:30:11.028634 kubelet[3021]: E0416 23:30:11.028612 3021 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:30:11.032210 kubelet[3021]: I0416 23:30:11.032167 3021 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:30:11.034662 kubelet[3021]: I0416 23:30:11.034644 3021 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:30:11.034833 kubelet[3021]: I0416 23:30:11.034812 3021 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:30:11.034941 kubelet[3021]: I0416 23:30:11.034832 3021 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-b3358a4beb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:30:11.034941 kubelet[3021]: I0416 23:30:11.034940 3021 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:30:11.035036 kubelet[3021]: I0416 23:30:11.034948 3021 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 23:30:11.035053 kubelet[3021]: I0416 23:30:11.035046 3021 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:30:11.041131 kubelet[3021]: I0416 23:30:11.041109 3021 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:30:11.042246 kubelet[3021]: I0416 23:30:11.042228 3021 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 23:30:11.042290 kubelet[3021]: I0416 23:30:11.042258 3021 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:30:11.042858 kubelet[3021]: E0416 23:30:11.042819 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-b3358a4beb&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 23:30:11.042936 kubelet[3021]: I0416 23:30:11.042876 3021 kubelet.go:387] "Adding apiserver pod source"
Apr 16 23:30:11.042936 kubelet[3021]: I0416 23:30:11.042893 3021 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:30:11.044395 kubelet[3021]: E0416 23:30:11.043552 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 23:30:11.044395 kubelet[3021]: I0416 23:30:11.043898 3021 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:30:11.044865 kubelet[3021]: I0416 23:30:11.044835 3021 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:30:11.044865 kubelet[3021]: I0416 23:30:11.044867 3021 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:30:11.044934 kubelet[3021]: W0416 23:30:11.044913 3021 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:30:11.049984 kubelet[3021]: I0416 23:30:11.049966 3021 server.go:1262] "Started kubelet"
Apr 16 23:30:11.050933 kubelet[3021]: I0416 23:30:11.050886 3021 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:30:11.051657 kubelet[3021]: I0416 23:30:11.051639 3021 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 23:30:11.051916 kubelet[3021]: I0416 23:30:11.051890 3021 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:30:11.054085 kubelet[3021]: I0416 23:30:11.054038 3021 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:30:11.054274 kubelet[3021]: I0416 23:30:11.054259 3021 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 23:30:11.054483 kubelet[3021]: I0416 23:30:11.054469 3021 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:30:11.057309 kubelet[3021]: I0416 23:30:11.057269 3021 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:30:11.058112 kubelet[3021]: I0416 23:30:11.058088 3021 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:30:11.060298 kubelet[3021]: I0416 23:30:11.060275 3021 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 23:30:11.060502 kubelet[3021]: E0416 23:30:11.060439 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found"
Apr 16 23:30:11.060638 kubelet[3021]: E0416 23:30:11.059331 3021 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-b3358a4beb.18a6fa3592f4529c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-b3358a4beb,UID:ci-4459.2.4-n-b3358a4beb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-b3358a4beb,},FirstTimestamp:2026-04-16 23:30:11.049935516 +0000 UTC m=+0.527315899,LastTimestamp:2026-04-16 23:30:11.049935516 +0000 UTC m=+0.527315899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-b3358a4beb,}"
Apr 16 23:30:11.060863 kubelet[3021]: E0416 23:30:11.060843 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-b3358a4beb?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms"
Apr 16 23:30:11.062807 kubelet[3021]: E0416 23:30:11.062792 3021 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:30:11.063047 kubelet[3021]: I0416 23:30:11.063031 3021 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:30:11.063111 kubelet[3021]: I0416 23:30:11.063104 3021 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:30:11.063211 kubelet[3021]: I0416 23:30:11.063199 3021 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:30:11.067230 kubelet[3021]: I0416 23:30:11.067203 3021 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 23:30:11.067289 kubelet[3021]: I0416 23:30:11.067263 3021 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 23:30:11.078425 kubelet[3021]: E0416 23:30:11.077357 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 23:30:11.081606 kubelet[3021]: I0416 23:30:11.081583 3021 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:30:11.081954 kubelet[3021]: I0416 23:30:11.081933 3021 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:30:11.082104 kubelet[3021]: I0416 23:30:11.082094 3021 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:30:11.088151 kubelet[3021]: I0416 23:30:11.088132 3021 policy_none.go:49] "None policy: Start"
Apr 16 23:30:11.088231 kubelet[3021]: I0416 23:30:11.088224 3021 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 23:30:11.088291 kubelet[3021]: I0416 23:30:11.088275 3021 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 23:30:11.089491 kubelet[3021]: I0416 23:30:11.089448 3021 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:30:11.089491 kubelet[3021]: I0416 23:30:11.089478 3021 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 23:30:11.089660 kubelet[3021]: I0416 23:30:11.089508 3021 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 23:30:11.089660 kubelet[3021]: E0416 23:30:11.089545 3021 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:30:11.090073 kubelet[3021]: E0416 23:30:11.090024 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 23:30:11.094834 kubelet[3021]: I0416 23:30:11.094814 3021 policy_none.go:47] "Start"
Apr 16 23:30:11.098925 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 23:30:11.107576 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 23:30:11.110887 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 23:30:11.121508 kubelet[3021]: E0416 23:30:11.121220 3021 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:30:11.121508 kubelet[3021]: I0416 23:30:11.121439 3021 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:30:11.121508 kubelet[3021]: I0416 23:30:11.121450 3021 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:30:11.122133 kubelet[3021]: I0416 23:30:11.122109 3021 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:30:11.124290 kubelet[3021]: E0416 23:30:11.124267 3021 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:30:11.124406 kubelet[3021]: E0416 23:30:11.124305 3021 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:11.202719 systemd[1]: Created slice kubepods-burstable-podf9c728cb20acad00e7b3626434a83c7d.slice - libcontainer container kubepods-burstable-podf9c728cb20acad00e7b3626434a83c7d.slice. Apr 16 23:30:11.214009 kubelet[3021]: E0416 23:30:11.213963 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.218068 systemd[1]: Created slice kubepods-burstable-pod459c7999ddda1b5b37ff1d0a71ab29ca.slice - libcontainer container kubepods-burstable-pod459c7999ddda1b5b37ff1d0a71ab29ca.slice. 
Apr 16 23:30:11.220304 kubelet[3021]: E0416 23:30:11.220203 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.224405 systemd[1]: Created slice kubepods-burstable-pod6e1ab963f719d594f7d00dbfb5e3704d.slice - libcontainer container kubepods-burstable-pod6e1ab963f719d594f7d00dbfb5e3704d.slice. Apr 16 23:30:11.226165 kubelet[3021]: I0416 23:30:11.226123 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.226917 kubelet[3021]: E0416 23:30:11.226890 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.226992 kubelet[3021]: E0416 23:30:11.226965 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.261541 kubelet[3021]: E0416 23:30:11.261396 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-b3358a4beb?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 16 23:30:11.268417 kubelet[3021]: I0416 23:30:11.268384 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268417 kubelet[3021]: I0416 23:30:11.268417 3021 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268546 kubelet[3021]: I0416 23:30:11.268434 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e1ab963f719d594f7d00dbfb5e3704d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-b3358a4beb\" (UID: \"6e1ab963f719d594f7d00dbfb5e3704d\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268546 kubelet[3021]: I0416 23:30:11.268448 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268546 kubelet[3021]: I0416 23:30:11.268458 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268546 kubelet[3021]: I0416 23:30:11.268468 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268546 kubelet[3021]: I0416 23:30:11.268478 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268628 kubelet[3021]: I0416 23:30:11.268497 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.268628 kubelet[3021]: I0416 23:30:11.268506 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.428685 kubelet[3021]: I0416 23:30:11.428644 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.428970 kubelet[3021]: E0416 23:30:11.428947 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.522262 containerd[1891]: time="2026-04-16T23:30:11.522157206Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-b3358a4beb,Uid:f9c728cb20acad00e7b3626434a83c7d,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:11.532339 containerd[1891]: time="2026-04-16T23:30:11.532135284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-b3358a4beb,Uid:459c7999ddda1b5b37ff1d0a71ab29ca,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:11.537831 containerd[1891]: time="2026-04-16T23:30:11.537802997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-b3358a4beb,Uid:6e1ab963f719d594f7d00dbfb5e3704d,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:11.662591 kubelet[3021]: E0416 23:30:11.662541 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-b3358a4beb?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 16 23:30:11.830833 kubelet[3021]: I0416 23:30:11.830801 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:11.831559 kubelet[3021]: E0416 23:30:11.831535 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:12.004756 kubelet[3021]: E0416 23:30:12.004710 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-b3358a4beb&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:30:12.463887 kubelet[3021]: E0416 23:30:12.463839 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-b3358a4beb?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 16 23:30:12.480699 kubelet[3021]: E0416 23:30:12.480643 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:30:12.598093 kubelet[3021]: E0416 23:30:12.598049 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 23:30:12.628690 kubelet[3021]: E0416 23:30:12.628644 3021 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 23:30:12.632891 kubelet[3021]: I0416 23:30:12.632867 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:12.633126 kubelet[3021]: E0416 23:30:12.633099 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:12.941183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029449070.mount: Deactivated successfully. 
Apr 16 23:30:12.972563 containerd[1891]: time="2026-04-16T23:30:12.972510389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:30:12.980475 containerd[1891]: time="2026-04-16T23:30:12.980425471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 16 23:30:12.993198 containerd[1891]: time="2026-04-16T23:30:12.992710868Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:30:12.997872 containerd[1891]: time="2026-04-16T23:30:12.997833890Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:30:13.001173 containerd[1891]: time="2026-04-16T23:30:13.001140339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:30:13.011598 containerd[1891]: time="2026-04-16T23:30:13.011546586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:30:13.012156 containerd[1891]: time="2026-04-16T23:30:13.012123777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.480802168s" Apr 16 23:30:13.015511 containerd[1891]: 
time="2026-04-16T23:30:13.015145443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:30:13.018530 containerd[1891]: time="2026-04-16T23:30:13.018504661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:30:13.026190 containerd[1891]: time="2026-04-16T23:30:13.026147185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.482536199s" Apr 16 23:30:13.047300 containerd[1891]: time="2026-04-16T23:30:13.047234830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.507509875s" Apr 16 23:30:13.100933 containerd[1891]: time="2026-04-16T23:30:13.100851458Z" level=info msg="connecting to shim feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17" address="unix:///run/containerd/s/a599153719b85bb84421c4205023063d2d9615b4907c32c4843e14e208e485a6" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:13.114286 containerd[1891]: time="2026-04-16T23:30:13.113898122Z" level=info msg="connecting to shim afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e" address="unix:///run/containerd/s/78e4c718f7d1a9d56145c69e14ab6aa2e8592be701298455113828163f8e386c" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:13.127659 systemd[1]: Started 
cri-containerd-feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17.scope - libcontainer container feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17. Apr 16 23:30:13.129199 containerd[1891]: time="2026-04-16T23:30:13.129169281Z" level=info msg="connecting to shim cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4" address="unix:///run/containerd/s/bf2dbc0d30a2a4c9eda9a2974978bc96e65257e0e3886172249c777438d4ba3a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:13.143681 systemd[1]: Started cri-containerd-afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e.scope - libcontainer container afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e. Apr 16 23:30:13.152114 kubelet[3021]: E0416 23:30:13.152086 3021 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 23:30:13.153985 systemd[1]: Started cri-containerd-cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4.scope - libcontainer container cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4. 
Apr 16 23:30:13.197909 containerd[1891]: time="2026-04-16T23:30:13.197721931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-b3358a4beb,Uid:f9c728cb20acad00e7b3626434a83c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17\"" Apr 16 23:30:13.202056 containerd[1891]: time="2026-04-16T23:30:13.201964587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-b3358a4beb,Uid:6e1ab963f719d594f7d00dbfb5e3704d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e\"" Apr 16 23:30:13.211001 containerd[1891]: time="2026-04-16T23:30:13.210945871Z" level=info msg="CreateContainer within sandbox \"feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:30:13.215777 containerd[1891]: time="2026-04-16T23:30:13.215740653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-b3358a4beb,Uid:459c7999ddda1b5b37ff1d0a71ab29ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4\"" Apr 16 23:30:13.216863 containerd[1891]: time="2026-04-16T23:30:13.216658115Z" level=info msg="CreateContainer within sandbox \"afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:30:13.226962 containerd[1891]: time="2026-04-16T23:30:13.226925279Z" level=info msg="CreateContainer within sandbox \"cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:30:13.244322 containerd[1891]: time="2026-04-16T23:30:13.244273497Z" level=info msg="Container 238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5: CDI devices 
from CRI Config.CDIDevices: []" Apr 16 23:30:13.269380 containerd[1891]: time="2026-04-16T23:30:13.269324568Z" level=info msg="Container e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:13.286659 containerd[1891]: time="2026-04-16T23:30:13.286560119Z" level=info msg="Container 4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:13.296783 containerd[1891]: time="2026-04-16T23:30:13.296741072Z" level=info msg="CreateContainer within sandbox \"feaae885a1acd0df59df1a0d2dc7940eef19a40cdc9d18f229f7957c65366b17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5\"" Apr 16 23:30:13.298170 containerd[1891]: time="2026-04-16T23:30:13.298127602Z" level=info msg="StartContainer for \"238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5\"" Apr 16 23:30:13.299192 containerd[1891]: time="2026-04-16T23:30:13.299132227Z" level=info msg="connecting to shim 238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5" address="unix:///run/containerd/s/a599153719b85bb84421c4205023063d2d9615b4907c32c4843e14e208e485a6" protocol=ttrpc version=3 Apr 16 23:30:13.312082 containerd[1891]: time="2026-04-16T23:30:13.312041016Z" level=info msg="CreateContainer within sandbox \"afabf8a8a9fcfcd44238f09d9b1e6c174c334aef6853847930d4a1637e3b710e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9\"" Apr 16 23:30:13.312757 containerd[1891]: time="2026-04-16T23:30:13.312728241Z" level=info msg="StartContainer for \"e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9\"" Apr 16 23:30:13.313787 containerd[1891]: time="2026-04-16T23:30:13.313761562Z" level=info msg="connecting to shim e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9" 
address="unix:///run/containerd/s/78e4c718f7d1a9d56145c69e14ab6aa2e8592be701298455113828163f8e386c" protocol=ttrpc version=3 Apr 16 23:30:13.314693 systemd[1]: Started cri-containerd-238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5.scope - libcontainer container 238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5. Apr 16 23:30:13.323887 containerd[1891]: time="2026-04-16T23:30:13.323844113Z" level=info msg="CreateContainer within sandbox \"cbb8a3610b94779934c7a356f04df9692d3ee9b422f7be5e04a2a2af874d30e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b\"" Apr 16 23:30:13.326257 containerd[1891]: time="2026-04-16T23:30:13.326160066Z" level=info msg="StartContainer for \"4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b\"" Apr 16 23:30:13.328175 containerd[1891]: time="2026-04-16T23:30:13.328079993Z" level=info msg="connecting to shim 4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b" address="unix:///run/containerd/s/bf2dbc0d30a2a4c9eda9a2974978bc96e65257e0e3886172249c777438d4ba3a" protocol=ttrpc version=3 Apr 16 23:30:13.334738 systemd[1]: Started cri-containerd-e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9.scope - libcontainer container e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9. Apr 16 23:30:13.358721 systemd[1]: Started cri-containerd-4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b.scope - libcontainer container 4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b. 
Apr 16 23:30:13.370946 containerd[1891]: time="2026-04-16T23:30:13.370897420Z" level=info msg="StartContainer for \"238af7c8361e9d581c2ed1e060b29dc8e490d32d568070f15bd3aa94dfc011c5\" returns successfully" Apr 16 23:30:13.404644 containerd[1891]: time="2026-04-16T23:30:13.404596031Z" level=info msg="StartContainer for \"e3e612cc796d4b3f65d2681601c0875fba4833f283d1d17fc8468f64a61d0da9\" returns successfully" Apr 16 23:30:13.417259 containerd[1891]: time="2026-04-16T23:30:13.417202732Z" level=info msg="StartContainer for \"4cecf2363a5f46fd9cd4e0a08c08e1a2f27550fc862ca9f9195423bc48edc17b\" returns successfully" Apr 16 23:30:14.109411 kubelet[3021]: E0416 23:30:14.109372 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.112713 kubelet[3021]: E0416 23:30:14.112682 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.115167 kubelet[3021]: E0416 23:30:14.115132 3021 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.235811 kubelet[3021]: I0416 23:30:14.235779 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.341433 kubelet[3021]: E0416 23:30:14.341394 3021 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-b3358a4beb\" not found" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.530515 kubelet[3021]: I0416 23:30:14.530194 3021 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:14.530515 kubelet[3021]: E0416 23:30:14.530232 3021 kubelet_node_status.go:486] "Error 
updating node status, will retry" err="error getting node \"ci-4459.2.4-n-b3358a4beb\": node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:14.623553 kubelet[3021]: E0416 23:30:14.623516 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:14.724067 kubelet[3021]: E0416 23:30:14.724020 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:14.824593 kubelet[3021]: E0416 23:30:14.824551 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:14.925209 kubelet[3021]: E0416 23:30:14.925166 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:15.025690 kubelet[3021]: E0416 23:30:15.025650 3021 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-b3358a4beb\" not found" Apr 16 23:30:15.116481 kubelet[3021]: I0416 23:30:15.116371 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.116481 kubelet[3021]: I0416 23:30:15.116416 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.121460 kubelet[3021]: E0416 23:30:15.121420 3021 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.121866 kubelet[3021]: E0416 23:30:15.121843 3021 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-b3358a4beb\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.161603 kubelet[3021]: I0416 23:30:15.161457 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.164402 kubelet[3021]: E0416 23:30:15.164358 3021 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-b3358a4beb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.164402 kubelet[3021]: I0416 23:30:15.164398 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.165638 kubelet[3021]: E0416 23:30:15.165601 3021 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.165638 kubelet[3021]: I0416 23:30:15.165621 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:15.166939 kubelet[3021]: E0416 23:30:15.166918 3021 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:16.046744 kubelet[3021]: I0416 23:30:16.046693 3021 apiserver.go:52] "Watching apiserver" Apr 16 23:30:16.068147 kubelet[3021]: I0416 23:30:16.068078 3021 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:30:16.117921 kubelet[3021]: I0416 23:30:16.117866 3021 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:16.127480 kubelet[3021]: 
I0416 23:30:16.127435 3021 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 23:30:16.564615 systemd[1]: Reload requested from client PID 3302 ('systemctl') (unit session-9.scope)... Apr 16 23:30:16.564631 systemd[1]: Reloading... Apr 16 23:30:16.646539 zram_generator::config[3352]: No configuration found. Apr 16 23:30:16.815430 systemd[1]: Reloading finished in 250 ms. Apr 16 23:30:16.834122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:30:16.844906 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:30:16.845270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:30:16.845380 systemd[1]: kubelet.service: Consumed 685ms CPU time, 121.2M memory peak. Apr 16 23:30:16.847879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:30:17.004381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:30:17.012806 (kubelet)[3413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:30:17.045779 kubelet[3413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:30:17.045779 kubelet[3413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 23:30:17.045779 kubelet[3413]: I0416 23:30:17.045686 3413 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:30:17.051786 kubelet[3413]: I0416 23:30:17.051747 3413 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 23:30:17.051786 kubelet[3413]: I0416 23:30:17.051776 3413 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:30:17.051786 kubelet[3413]: I0416 23:30:17.051801 3413 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 23:30:17.051957 kubelet[3413]: I0416 23:30:17.051806 3413 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:30:17.052010 kubelet[3413]: I0416 23:30:17.051991 3413 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:30:17.053367 kubelet[3413]: I0416 23:30:17.053340 3413 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 23:30:17.055990 kubelet[3413]: I0416 23:30:17.055859 3413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:30:17.060443 kubelet[3413]: I0416 23:30:17.060423 3413 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:30:17.063312 kubelet[3413]: I0416 23:30:17.063268 3413 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:30:17.063546 kubelet[3413]: I0416 23:30:17.063441 3413 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:30:17.063649 kubelet[3413]: I0416 23:30:17.063466 3413 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-b3358a4beb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:30:17.063727 kubelet[3413]: I0416 23:30:17.063649 3413 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:30:17.063727 kubelet[3413]: I0416 23:30:17.063657 3413 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 23:30:17.063727 kubelet[3413]: I0416 23:30:17.063679 3413 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:30:17.064057 kubelet[3413]: I0416 23:30:17.064032 3413 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:30:17.064184 kubelet[3413]: I0416 23:30:17.064170 3413 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 23:30:17.064184 kubelet[3413]: I0416 23:30:17.064183 3413 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:30:17.064237 kubelet[3413]: I0416 23:30:17.064214 3413 kubelet.go:387] "Adding apiserver pod source"
Apr 16 23:30:17.064237 kubelet[3413]: I0416 23:30:17.064228 3413 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:30:17.068129 kubelet[3413]: I0416 23:30:17.067950 3413 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:30:17.073253 kubelet[3413]: I0416 23:30:17.068729 3413 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:30:17.073253 kubelet[3413]: I0416 23:30:17.068752 3413 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:30:17.075151 kubelet[3413]: I0416 23:30:17.075121 3413 server.go:1262] "Started kubelet"
Apr 16 23:30:17.076955 kubelet[3413]: I0416 23:30:17.076928 3413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:30:17.089972 kubelet[3413]: I0416 23:30:17.089642 3413 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:30:17.092182 kubelet[3413]: I0416 23:30:17.090300 3413 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 23:30:17.093541 kubelet[3413]: I0416 23:30:17.093246 3413 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:30:17.093541 kubelet[3413]: I0416 23:30:17.093307 3413 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 23:30:17.098082 kubelet[3413]: E0416 23:30:17.098061 3413 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:30:17.099697 kubelet[3413]: I0416 23:30:17.099673 3413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:30:17.101306 kubelet[3413]: I0416 23:30:17.101282 3413 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 23:30:17.101818 kubelet[3413]: I0416 23:30:17.101738 3413 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 23:30:17.114167 kubelet[3413]: I0416 23:30:17.114029 3413 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:30:17.116366 kubelet[3413]: I0416 23:30:17.115972 3413 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:30:17.117734 kubelet[3413]: I0416 23:30:17.117687 3413 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:30:17.119701 kubelet[3413]: I0416 23:30:17.119683 3413 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:30:17.122179 kubelet[3413]: I0416 23:30:17.122127 3413 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:30:17.122344 kubelet[3413]: I0416 23:30:17.122187 3413 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 23:30:17.122344 kubelet[3413]: I0416 23:30:17.122212 3413 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 23:30:17.122344 kubelet[3413]: E0416 23:30:17.122248 3413 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:30:17.152814 kubelet[3413]: I0416 23:30:17.152786 3413 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:30:17.152814 kubelet[3413]: I0416 23:30:17.152802 3413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:30:17.152991 kubelet[3413]: I0416 23:30:17.152843 3413 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153189 3413 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153311 3413 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153357 3413 policy_none.go:49] "None policy: Start"
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153385 3413 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153401 3413 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153555 3413 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 16 23:30:17.153649 kubelet[3413]: I0416 23:30:17.153565 3413 policy_none.go:47] "Start"
Apr 16 23:30:17.158616 kubelet[3413]: E0416 23:30:17.158464 3413 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:30:17.223302 kubelet[3413]: E0416 23:30:17.223255 3413 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 16 23:30:17.326575 kubelet[3413]: I0416 23:30:17.326527 3413 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:30:17.326575 kubelet[3413]: I0416 23:30:17.326556 3413 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:30:17.327776 kubelet[3413]: I0416 23:30:17.327280 3413 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:30:17.327859 kubelet[3413]: I0416 23:30:17.327822 3413 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 23:30:17.328997 kubelet[3413]: E0416 23:30:17.328849 3413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:30:17.329599 kubelet[3413]: I0416 23:30:17.329530 3413 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:30:17.424035 kubelet[3413]: I0416 23:30:17.423919 3413 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.424389 kubelet[3413]: I0416 23:30:17.424284 3413 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.425395 kubelet[3413]: I0416 23:30:17.425341 3413 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.434293 kubelet[3413]: I0416 23:30:17.434013 3413 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.457139 kubelet[3413]: I0416 23:30:17.457103 3413 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 23:30:17.457559 kubelet[3413]: E0416 23:30:17.457164 3413 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-b3358a4beb\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.457748 kubelet[3413]: I0416 23:30:17.457670 3413 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 23:30:17.457957 kubelet[3413]: I0416 23:30:17.457716 3413 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 23:30:17.472504 kubelet[3413]: I0416 23:30:17.472326 3413 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.472504 kubelet[3413]: I0416 23:30:17.472422 3413 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.528869 kubelet[3413]: I0416 23:30:17.528669 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.528869 kubelet[3413]: I0416 23:30:17.528704 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.528869 kubelet[3413]: I0416 23:30:17.528720 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.528869 kubelet[3413]: I0416 23:30:17.528732 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.528869 kubelet[3413]: I0416 23:30:17.528744 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9c728cb20acad00e7b3626434a83c7d-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-b3358a4beb\" (UID: \"f9c728cb20acad00e7b3626434a83c7d\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.529088 kubelet[3413]: I0416 23:30:17.528767 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.529088 kubelet[3413]: I0416 23:30:17.528775 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.529088 kubelet[3413]: I0416 23:30:17.528788 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/459c7999ddda1b5b37ff1d0a71ab29ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-b3358a4beb\" (UID: \"459c7999ddda1b5b37ff1d0a71ab29ca\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:17.529088 kubelet[3413]: I0416 23:30:17.528801 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e1ab963f719d594f7d00dbfb5e3704d-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-b3358a4beb\" (UID: \"6e1ab963f719d594f7d00dbfb5e3704d\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb"
Apr 16 23:30:18.066862 kubelet[3413]: I0416 23:30:18.066811 3413 apiserver.go:52] "Watching apiserver"
Apr 16 23:30:18.102787 kubelet[3413]: I0416 23:30:18.102746 3413 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 23:30:18.158736 kubelet[3413]: I0416 23:30:18.158672 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-b3358a4beb" podStartSLOduration=2.15865698 podStartE2EDuration="2.15865698s" podCreationTimestamp="2026-04-16 23:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:18.158583322 +0000 UTC m=+1.142066583" watchObservedRunningTime="2026-04-16 23:30:18.15865698 +0000 UTC m=+1.142140241"
Apr 16 23:30:18.195668 kubelet[3413]: I0416 23:30:18.195600 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-b3358a4beb" podStartSLOduration=1.195572238 podStartE2EDuration="1.195572238s" podCreationTimestamp="2026-04-16 23:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:18.182631617 +0000 UTC m=+1.166114886" watchObservedRunningTime="2026-04-16 23:30:18.195572238 +0000 UTC m=+1.179055507"
Apr 16 23:30:18.196042 kubelet[3413]: I0416 23:30:18.195788 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-b3358a4beb" podStartSLOduration=1.195781612 podStartE2EDuration="1.195781612s" podCreationTimestamp="2026-04-16 23:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:18.195276126 +0000 UTC m=+1.178759387" watchObservedRunningTime="2026-04-16 23:30:18.195781612 +0000 UTC m=+1.179264889"
Apr 16 23:30:23.352547 kubelet[3413]: I0416 23:30:23.352507 3413 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 23:30:23.353322 containerd[1891]: time="2026-04-16T23:30:23.353178053Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 23:30:23.353640 kubelet[3413]: I0416 23:30:23.353398 3413 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 23:30:23.937960 systemd[1]: Created slice kubepods-besteffort-pod4602bb67_b668_4769_88ff_5a72921efbbb.slice - libcontainer container kubepods-besteffort-pod4602bb67_b668_4769_88ff_5a72921efbbb.slice.
Apr 16 23:30:23.965715 kubelet[3413]: I0416 23:30:23.965667 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4602bb67-b668-4769-88ff-5a72921efbbb-lib-modules\") pod \"kube-proxy-2kmnp\" (UID: \"4602bb67-b668-4769-88ff-5a72921efbbb\") " pod="kube-system/kube-proxy-2kmnp"
Apr 16 23:30:23.965715 kubelet[3413]: I0416 23:30:23.965711 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgq6\" (UniqueName: \"kubernetes.io/projected/4602bb67-b668-4769-88ff-5a72921efbbb-kube-api-access-stgq6\") pod \"kube-proxy-2kmnp\" (UID: \"4602bb67-b668-4769-88ff-5a72921efbbb\") " pod="kube-system/kube-proxy-2kmnp"
Apr 16 23:30:23.965715 kubelet[3413]: I0416 23:30:23.965729 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4602bb67-b668-4769-88ff-5a72921efbbb-xtables-lock\") pod \"kube-proxy-2kmnp\" (UID: \"4602bb67-b668-4769-88ff-5a72921efbbb\") " pod="kube-system/kube-proxy-2kmnp"
Apr 16 23:30:23.965924 kubelet[3413]: I0416 23:30:23.965742 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4602bb67-b668-4769-88ff-5a72921efbbb-kube-proxy\") pod \"kube-proxy-2kmnp\" (UID: \"4602bb67-b668-4769-88ff-5a72921efbbb\") " pod="kube-system/kube-proxy-2kmnp"
Apr 16 23:30:24.254309 containerd[1891]: time="2026-04-16T23:30:24.254058379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kmnp,Uid:4602bb67-b668-4769-88ff-5a72921efbbb,Namespace:kube-system,Attempt:0,}"
Apr 16 23:30:24.306061 containerd[1891]: time="2026-04-16T23:30:24.305999979Z" level=info msg="connecting to shim 761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962" address="unix:///run/containerd/s/2248c00bedde675a6f1cdb9887c826ed7ea92e9e7c77050ab40cdc1baffb33d5" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:24.325667 systemd[1]: Started cri-containerd-761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962.scope - libcontainer container 761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962.
Apr 16 23:30:24.351074 containerd[1891]: time="2026-04-16T23:30:24.351028742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kmnp,Uid:4602bb67-b668-4769-88ff-5a72921efbbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962\""
Apr 16 23:30:24.361219 containerd[1891]: time="2026-04-16T23:30:24.361177143Z" level=info msg="CreateContainer within sandbox \"761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 23:30:24.391966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168702852.mount: Deactivated successfully.
Apr 16 23:30:24.393210 containerd[1891]: time="2026-04-16T23:30:24.392567789Z" level=info msg="Container 4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:30:24.418999 systemd[1]: Created slice kubepods-besteffort-pod6d6c9639_adff_47fe_bf72_b1bd597ea0d0.slice - libcontainer container kubepods-besteffort-pod6d6c9639_adff_47fe_bf72_b1bd597ea0d0.slice.
Apr 16 23:30:24.429859 containerd[1891]: time="2026-04-16T23:30:24.429799276Z" level=info msg="CreateContainer within sandbox \"761744e8bf9a7f594401dc43d22e9af9485f51c928f1b2f63674201369731962\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f\""
Apr 16 23:30:24.430622 containerd[1891]: time="2026-04-16T23:30:24.430596793Z" level=info msg="StartContainer for \"4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f\""
Apr 16 23:30:24.431813 containerd[1891]: time="2026-04-16T23:30:24.431792632Z" level=info msg="connecting to shim 4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f" address="unix:///run/containerd/s/2248c00bedde675a6f1cdb9887c826ed7ea92e9e7c77050ab40cdc1baffb33d5" protocol=ttrpc version=3
Apr 16 23:30:24.447672 systemd[1]: Started cri-containerd-4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f.scope - libcontainer container 4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f.
Apr 16 23:30:24.468599 kubelet[3413]: I0416 23:30:24.468514 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnm6\" (UniqueName: \"kubernetes.io/projected/6d6c9639-adff-47fe-bf72-b1bd597ea0d0-kube-api-access-2lnm6\") pod \"tigera-operator-5588576f44-sps9h\" (UID: \"6d6c9639-adff-47fe-bf72-b1bd597ea0d0\") " pod="tigera-operator/tigera-operator-5588576f44-sps9h"
Apr 16 23:30:24.468599 kubelet[3413]: I0416 23:30:24.468557 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d6c9639-adff-47fe-bf72-b1bd597ea0d0-var-lib-calico\") pod \"tigera-operator-5588576f44-sps9h\" (UID: \"6d6c9639-adff-47fe-bf72-b1bd597ea0d0\") " pod="tigera-operator/tigera-operator-5588576f44-sps9h"
Apr 16 23:30:24.506698 containerd[1891]: time="2026-04-16T23:30:24.506368904Z" level=info msg="StartContainer for \"4cc0fd3d157d137b2888c42b273ac87857e6b766627fdef5d10142474a5d818f\" returns successfully"
Apr 16 23:30:24.729469 containerd[1891]: time="2026-04-16T23:30:24.729174793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-sps9h,Uid:6d6c9639-adff-47fe-bf72-b1bd597ea0d0,Namespace:tigera-operator,Attempt:0,}"
Apr 16 23:30:24.770779 containerd[1891]: time="2026-04-16T23:30:24.770559692Z" level=info msg="connecting to shim 1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905" address="unix:///run/containerd/s/75738b0b9d1efbea77accbd2fe554d3ca60b6ed536f3d91a6b0614bec44f0056" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:24.785639 systemd[1]: Started cri-containerd-1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905.scope - libcontainer container 1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905.
Apr 16 23:30:24.823944 containerd[1891]: time="2026-04-16T23:30:24.823895208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-sps9h,Uid:6d6c9639-adff-47fe-bf72-b1bd597ea0d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905\""
Apr 16 23:30:24.826720 containerd[1891]: time="2026-04-16T23:30:24.826664257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 23:30:25.168447 kubelet[3413]: I0416 23:30:25.168386 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2kmnp" podStartSLOduration=2.168368698 podStartE2EDuration="2.168368698s" podCreationTimestamp="2026-04-16 23:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:25.167365968 +0000 UTC m=+8.150849229" watchObservedRunningTime="2026-04-16 23:30:25.168368698 +0000 UTC m=+8.151851959"
Apr 16 23:30:26.272032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135568453.mount: Deactivated successfully.
Apr 16 23:30:26.729741 containerd[1891]: time="2026-04-16T23:30:26.729682457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:26.733107 containerd[1891]: time="2026-04-16T23:30:26.733069714Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 16 23:30:26.736804 containerd[1891]: time="2026-04-16T23:30:26.736752938Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:26.742083 containerd[1891]: time="2026-04-16T23:30:26.742029876Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:26.742577 containerd[1891]: time="2026-04-16T23:30:26.742357533Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 1.915658251s"
Apr 16 23:30:26.742577 containerd[1891]: time="2026-04-16T23:30:26.742388478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 16 23:30:26.751850 containerd[1891]: time="2026-04-16T23:30:26.751780876Z" level=info msg="CreateContainer within sandbox \"1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 23:30:26.779321 containerd[1891]: time="2026-04-16T23:30:26.778249513Z" level=info msg="Container c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:30:26.795771 containerd[1891]: time="2026-04-16T23:30:26.795728418Z" level=info msg="CreateContainer within sandbox \"1b67f085fda9ef200843f02bc0d079f0ea2d9f4428d186be93cca8c3c6881905\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3\""
Apr 16 23:30:26.796326 containerd[1891]: time="2026-04-16T23:30:26.796302281Z" level=info msg="StartContainer for \"c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3\""
Apr 16 23:30:26.797432 containerd[1891]: time="2026-04-16T23:30:26.797405950Z" level=info msg="connecting to shim c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3" address="unix:///run/containerd/s/75738b0b9d1efbea77accbd2fe554d3ca60b6ed536f3d91a6b0614bec44f0056" protocol=ttrpc version=3
Apr 16 23:30:26.813659 systemd[1]: Started cri-containerd-c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3.scope - libcontainer container c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3.
Apr 16 23:30:26.841975 containerd[1891]: time="2026-04-16T23:30:26.841904635Z" level=info msg="StartContainer for \"c65c31c872a44df012724274715132c788d38ce8beefcf7262fe639fcd2d78d3\" returns successfully"
Apr 16 23:30:27.178527 kubelet[3413]: I0416 23:30:27.178420 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-sps9h" podStartSLOduration=1.261431695 podStartE2EDuration="3.17840482s" podCreationTimestamp="2026-04-16 23:30:24 +0000 UTC" firstStartedPulling="2026-04-16 23:30:24.826239246 +0000 UTC m=+7.809722515" lastFinishedPulling="2026-04-16 23:30:26.743212371 +0000 UTC m=+9.726695640" observedRunningTime="2026-04-16 23:30:27.177918583 +0000 UTC m=+10.161401844" watchObservedRunningTime="2026-04-16 23:30:27.17840482 +0000 UTC m=+10.161888081"
Apr 16 23:30:31.988686 sudo[2384]: pam_unix(sudo:session): session closed for user root
Apr 16 23:30:32.132363 sshd[2383]: Connection closed by 20.229.252.112 port 43732
Apr 16 23:30:32.132977 sshd-session[2380]: pam_unix(sshd:session): session closed for user core
Apr 16 23:30:32.137945 systemd-logind[1868]: Session 9 logged out. Waiting for processes to exit.
Apr 16 23:30:32.138584 systemd[1]: sshd@6-10.0.0.6:22-20.229.252.112:43732.service: Deactivated successfully.
Apr 16 23:30:32.141953 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 23:30:32.143767 systemd[1]: session-9.scope: Consumed 3.173s CPU time, 221.5M memory peak.
Apr 16 23:30:32.146941 systemd-logind[1868]: Removed session 9.
Apr 16 23:30:35.605853 systemd[1]: Created slice kubepods-besteffort-podb6062fe1_d7bc_44e9_bbca_5ea90eed5930.slice - libcontainer container kubepods-besteffort-podb6062fe1_d7bc_44e9_bbca_5ea90eed5930.slice.
Apr 16 23:30:35.631850 kubelet[3413]: I0416 23:30:35.631734 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5npt\" (UniqueName: \"kubernetes.io/projected/b6062fe1-d7bc-44e9-bbca-5ea90eed5930-kube-api-access-q5npt\") pod \"calico-typha-5c4849ccdd-gplbh\" (UID: \"b6062fe1-d7bc-44e9-bbca-5ea90eed5930\") " pod="calico-system/calico-typha-5c4849ccdd-gplbh"
Apr 16 23:30:35.631850 kubelet[3413]: I0416 23:30:35.631770 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6062fe1-d7bc-44e9-bbca-5ea90eed5930-typha-certs\") pod \"calico-typha-5c4849ccdd-gplbh\" (UID: \"b6062fe1-d7bc-44e9-bbca-5ea90eed5930\") " pod="calico-system/calico-typha-5c4849ccdd-gplbh"
Apr 16 23:30:35.631850 kubelet[3413]: I0416 23:30:35.631783 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6062fe1-d7bc-44e9-bbca-5ea90eed5930-tigera-ca-bundle\") pod \"calico-typha-5c4849ccdd-gplbh\" (UID: \"b6062fe1-d7bc-44e9-bbca-5ea90eed5930\") " pod="calico-system/calico-typha-5c4849ccdd-gplbh"
Apr 16 23:30:35.728622 systemd[1]: Created slice kubepods-besteffort-pod49c39a11_7573_48c1_b255_d956d98108d1.slice - libcontainer container kubepods-besteffort-pod49c39a11_7573_48c1_b255_d956d98108d1.slice.
Apr 16 23:30:35.833532 kubelet[3413]: I0416 23:30:35.833452 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-cni-bin-dir\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.833532 kubelet[3413]: I0416 23:30:35.833493 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-cni-log-dir\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.833532 kubelet[3413]: I0416 23:30:35.833506 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-lib-modules\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.833532 kubelet[3413]: I0416 23:30:35.833524 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-xtables-lock\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.833532 kubelet[3413]: I0416 23:30:35.833538 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49c39a11-7573-48c1-b255-d956d98108d1-node-certs\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840823 kubelet[3413]: I0416 23:30:35.833547 3413 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c39a11-7573-48c1-b255-d956d98108d1-tigera-ca-bundle\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840823 kubelet[3413]: I0416 23:30:35.833560 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-bpffs\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840823 kubelet[3413]: I0416 23:30:35.833569 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-policysync\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840823 kubelet[3413]: I0416 23:30:35.833579 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-sys-fs\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840823 kubelet[3413]: I0416 23:30:35.833589 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-var-run-calico\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840915 kubelet[3413]: I0416 23:30:35.833598 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxsp\" 
(UniqueName: \"kubernetes.io/projected/49c39a11-7573-48c1-b255-d956d98108d1-kube-api-access-9dxsp\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840915 kubelet[3413]: I0416 23:30:35.833610 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-flexvol-driver-host\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840915 kubelet[3413]: I0416 23:30:35.833621 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-var-lib-calico\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840915 kubelet[3413]: I0416 23:30:35.833631 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-cni-net-dir\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.840915 kubelet[3413]: I0416 23:30:35.833640 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/49c39a11-7573-48c1-b255-d956d98108d1-nodeproc\") pod \"calico-node-xg97p\" (UID: \"49c39a11-7573-48c1-b255-d956d98108d1\") " pod="calico-system/calico-node-xg97p" Apr 16 23:30:35.842433 kubelet[3413]: E0416 23:30:35.842382 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:35.916545 containerd[1891]: time="2026-04-16T23:30:35.916202734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c4849ccdd-gplbh,Uid:b6062fe1-d7bc-44e9-bbca-5ea90eed5930,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:35.935273 kubelet[3413]: I0416 23:30:35.934732 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92439691-ea93-41b8-af87-604eaab62246-registration-dir\") pod \"csi-node-driver-kvtzd\" (UID: \"92439691-ea93-41b8-af87-604eaab62246\") " pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:35.935273 kubelet[3413]: I0416 23:30:35.934809 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/92439691-ea93-41b8-af87-604eaab62246-varrun\") pod \"csi-node-driver-kvtzd\" (UID: \"92439691-ea93-41b8-af87-604eaab62246\") " pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:35.935273 kubelet[3413]: I0416 23:30:35.934831 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92439691-ea93-41b8-af87-604eaab62246-kubelet-dir\") pod \"csi-node-driver-kvtzd\" (UID: \"92439691-ea93-41b8-af87-604eaab62246\") " pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:35.935273 kubelet[3413]: I0416 23:30:35.934840 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smj2b\" (UniqueName: \"kubernetes.io/projected/92439691-ea93-41b8-af87-604eaab62246-kube-api-access-smj2b\") pod \"csi-node-driver-kvtzd\" (UID: \"92439691-ea93-41b8-af87-604eaab62246\") " pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:35.935273 kubelet[3413]: 
I0416 23:30:35.934866 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92439691-ea93-41b8-af87-604eaab62246-socket-dir\") pod \"csi-node-driver-kvtzd\" (UID: \"92439691-ea93-41b8-af87-604eaab62246\") " pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:35.937509 kubelet[3413]: E0416 23:30:35.937470 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.937692 kubelet[3413]: W0416 23:30:35.937677 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.937828 kubelet[3413]: E0416 23:30:35.937758 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.937983 kubelet[3413]: E0416 23:30:35.937973 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.938046 kubelet[3413]: W0416 23:30:35.938035 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.938105 kubelet[3413]: E0416 23:30:35.938091 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.940965 kubelet[3413]: E0416 23:30:35.940945 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.941165 kubelet[3413]: W0416 23:30:35.941044 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.941165 kubelet[3413]: E0416 23:30:35.941061 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.945118 kubelet[3413]: E0416 23:30:35.945093 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.945506 kubelet[3413]: W0416 23:30:35.945260 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.945506 kubelet[3413]: E0416 23:30:35.945279 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.946781 kubelet[3413]: E0416 23:30:35.946533 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.946781 kubelet[3413]: W0416 23:30:35.946548 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.946781 kubelet[3413]: E0416 23:30:35.946559 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.947145 kubelet[3413]: E0416 23:30:35.947034 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.947145 kubelet[3413]: W0416 23:30:35.947048 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.947145 kubelet[3413]: E0416 23:30:35.947058 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.947351 kubelet[3413]: E0416 23:30:35.947340 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.947411 kubelet[3413]: W0416 23:30:35.947390 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.947466 kubelet[3413]: E0416 23:30:35.947455 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.947790 kubelet[3413]: E0416 23:30:35.947717 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.947790 kubelet[3413]: W0416 23:30:35.947727 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.947790 kubelet[3413]: E0416 23:30:35.947735 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.948012 kubelet[3413]: E0416 23:30:35.948001 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.950300 kubelet[3413]: W0416 23:30:35.948069 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.950691 kubelet[3413]: E0416 23:30:35.950579 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.951954 kubelet[3413]: E0416 23:30:35.951799 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.951954 kubelet[3413]: W0416 23:30:35.951905 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.951954 kubelet[3413]: E0416 23:30:35.951919 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.952228 kubelet[3413]: E0416 23:30:35.952203 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.952228 kubelet[3413]: W0416 23:30:35.952217 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.952228 kubelet[3413]: E0416 23:30:35.952226 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.952401 kubelet[3413]: E0416 23:30:35.952385 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.952401 kubelet[3413]: W0416 23:30:35.952396 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.952559 kubelet[3413]: E0416 23:30:35.952406 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.952733 kubelet[3413]: E0416 23:30:35.952714 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.952776 kubelet[3413]: W0416 23:30:35.952729 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.952776 kubelet[3413]: E0416 23:30:35.952756 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.952912 kubelet[3413]: E0416 23:30:35.952898 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.952912 kubelet[3413]: W0416 23:30:35.952907 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.952912 kubelet[3413]: E0416 23:30:35.952914 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.953079 kubelet[3413]: E0416 23:30:35.953057 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.953079 kubelet[3413]: W0416 23:30:35.953067 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.953236 kubelet[3413]: E0416 23:30:35.953073 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.953412 kubelet[3413]: E0416 23:30:35.953396 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.953412 kubelet[3413]: W0416 23:30:35.953408 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.953514 kubelet[3413]: E0416 23:30:35.953418 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.953571 kubelet[3413]: E0416 23:30:35.953557 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.953571 kubelet[3413]: W0416 23:30:35.953568 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.953703 kubelet[3413]: E0416 23:30:35.953575 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.953780 kubelet[3413]: E0416 23:30:35.953764 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.953780 kubelet[3413]: W0416 23:30:35.953774 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.953897 kubelet[3413]: E0416 23:30:35.953781 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.953897 kubelet[3413]: E0416 23:30:35.953888 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.953897 kubelet[3413]: W0416 23:30:35.953894 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.953976 kubelet[3413]: E0416 23:30:35.953900 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.954105 kubelet[3413]: E0416 23:30:35.954089 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954105 kubelet[3413]: W0416 23:30:35.954099 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954105 kubelet[3413]: E0416 23:30:35.954105 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.954312 kubelet[3413]: E0416 23:30:35.954296 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954312 kubelet[3413]: W0416 23:30:35.954305 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954312 kubelet[3413]: E0416 23:30:35.954312 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.954443 kubelet[3413]: E0416 23:30:35.954437 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954443 kubelet[3413]: W0416 23:30:35.954444 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954482 kubelet[3413]: E0416 23:30:35.954450 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.954603 kubelet[3413]: E0416 23:30:35.954585 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954603 kubelet[3413]: W0416 23:30:35.954592 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954603 kubelet[3413]: E0416 23:30:35.954598 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.954723 kubelet[3413]: E0416 23:30:35.954712 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954723 kubelet[3413]: W0416 23:30:35.954720 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954777 kubelet[3413]: E0416 23:30:35.954726 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.954908 kubelet[3413]: E0416 23:30:35.954894 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.954908 kubelet[3413]: W0416 23:30:35.954905 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.954908 kubelet[3413]: E0416 23:30:35.954912 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.955094 kubelet[3413]: E0416 23:30:35.955080 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.955094 kubelet[3413]: W0416 23:30:35.955090 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.955094 kubelet[3413]: E0416 23:30:35.955096 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.955311 kubelet[3413]: E0416 23:30:35.955299 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.955311 kubelet[3413]: W0416 23:30:35.955307 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.955366 kubelet[3413]: E0416 23:30:35.955314 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.955588 kubelet[3413]: E0416 23:30:35.955568 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.955588 kubelet[3413]: W0416 23:30:35.955586 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.955675 kubelet[3413]: E0416 23:30:35.955597 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:35.955883 kubelet[3413]: E0416 23:30:35.955849 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.955883 kubelet[3413]: W0416 23:30:35.955881 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.955934 kubelet[3413]: E0416 23:30:35.955890 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:35.956091 kubelet[3413]: E0416 23:30:35.956075 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:35.956091 kubelet[3413]: W0416 23:30:35.956087 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:35.956149 kubelet[3413]: E0416 23:30:35.956096 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 23:30:35.956286 kubelet[3413]: E0416 23:30:35.956271 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:35.956286 kubelet[3413]: W0416 23:30:35.956283 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:35.956339 kubelet[3413]: E0416 23:30:35.956290 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:35.956455 kubelet[3413]: E0416 23:30:35.956438 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:35.956455 kubelet[3413]: W0416 23:30:35.956449 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:35.956455 kubelet[3413]: E0416 23:30:35.956455 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:35.964057 containerd[1891]: time="2026-04-16T23:30:35.963940587Z" level=info msg="connecting to shim a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1" address="unix:///run/containerd/s/a6931a402a1bedcbffee7bc84f44ca907af5069c6ce4ada7e55b330266c74dbf" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:35.982653 systemd[1]: Started cri-containerd-a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1.scope - libcontainer container a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1.
Apr 16 23:30:36.015731 containerd[1891]: time="2026-04-16T23:30:36.015690297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c4849ccdd-gplbh,Uid:b6062fe1-d7bc-44e9-bbca-5ea90eed5930,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1\""
Apr 16 23:30:36.018735 containerd[1891]: time="2026-04-16T23:30:36.018705640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 23:30:36.035790 kubelet[3413]: E0416 23:30:36.035755 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.035790 kubelet[3413]: W0416 23:30:36.035780 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.035790 kubelet[3413]: E0416 23:30:36.035801 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.035987 kubelet[3413]: E0416 23:30:36.035969 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.035987 kubelet[3413]: W0416 23:30:36.035981 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036129 kubelet[3413]: E0416 23:30:36.035989 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.036177 kubelet[3413]: E0416 23:30:36.036153 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.036177 kubelet[3413]: W0416 23:30:36.036160 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036177 kubelet[3413]: E0416 23:30:36.036167 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.036358 kubelet[3413]: E0416 23:30:36.036343 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.036358 kubelet[3413]: W0416 23:30:36.036354 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036408 kubelet[3413]: E0416 23:30:36.036361 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.036528 kubelet[3413]: E0416 23:30:36.036515 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.036528 kubelet[3413]: W0416 23:30:36.036524 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036583 kubelet[3413]: E0416 23:30:36.036533 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.036705 kubelet[3413]: E0416 23:30:36.036690 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.036705 kubelet[3413]: W0416 23:30:36.036705 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036757 kubelet[3413]: E0416 23:30:36.036711 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.036829 kubelet[3413]: E0416 23:30:36.036815 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.036829 kubelet[3413]: W0416 23:30:36.036824 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.036949 kubelet[3413]: E0416 23:30:36.036830 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.037014 kubelet[3413]: E0416 23:30:36.037002 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.037014 kubelet[3413]: W0416 23:30:36.037010 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.037063 kubelet[3413]: E0416 23:30:36.037016 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.037556 kubelet[3413]: E0416 23:30:36.037536 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.037556 kubelet[3413]: W0416 23:30:36.037556 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.037567 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.037705 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039699 kubelet[3413]: W0416 23:30:36.037710 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.037816 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.037973 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039699 kubelet[3413]: W0416 23:30:36.037981 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.037990 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.038200 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039699 kubelet[3413]: W0416 23:30:36.038210 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039699 kubelet[3413]: E0416 23:30:36.038220 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038381 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039850 kubelet[3413]: W0416 23:30:36.038391 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038398 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038609 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039850 kubelet[3413]: W0416 23:30:36.038617 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038632 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038752 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039850 kubelet[3413]: W0416 23:30:36.038760 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038766 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039850 kubelet[3413]: E0416 23:30:36.038898 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039985 kubelet[3413]: W0416 23:30:36.038903 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.038909 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.039019 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039985 kubelet[3413]: W0416 23:30:36.039025 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.039030 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.039181 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039985 kubelet[3413]: W0416 23:30:36.039187 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.039194 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.039985 kubelet[3413]: E0416 23:30:36.039326 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.039985 kubelet[3413]: W0416 23:30:36.039332 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.040125 kubelet[3413]: E0416 23:30:36.039337 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.040125 kubelet[3413]: E0416 23:30:36.039444 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.040125 kubelet[3413]: W0416 23:30:36.039449 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.040125 kubelet[3413]: E0416 23:30:36.039455 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.040331 kubelet[3413]: E0416 23:30:36.040228 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.040331 kubelet[3413]: W0416 23:30:36.040242 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.040331 kubelet[3413]: E0416 23:30:36.040254 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.040766 kubelet[3413]: E0416 23:30:36.040398 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.040766 kubelet[3413]: W0416 23:30:36.040405 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.040766 kubelet[3413]: E0416 23:30:36.040414 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.040766 kubelet[3413]: E0416 23:30:36.040587 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.040766 kubelet[3413]: W0416 23:30:36.040595 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.040766 kubelet[3413]: E0416 23:30:36.040603 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.040937 kubelet[3413]: E0416 23:30:36.040899 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.041089 kubelet[3413]: W0416 23:30:36.040974 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.041089 kubelet[3413]: E0416 23:30:36.040989 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.041219 kubelet[3413]: E0416 23:30:36.041207 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.041269 kubelet[3413]: W0416 23:30:36.041260 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.041308 kubelet[3413]: E0416 23:30:36.041299 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.042372 containerd[1891]: time="2026-04-16T23:30:36.042335044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xg97p,Uid:49c39a11-7573-48c1-b255-d956d98108d1,Namespace:calico-system,Attempt:0,}"
Apr 16 23:30:36.053018 kubelet[3413]: E0416 23:30:36.052988 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:36.053018 kubelet[3413]: W0416 23:30:36.053009 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:36.053018 kubelet[3413]: E0416 23:30:36.053026 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:36.097694 containerd[1891]: time="2026-04-16T23:30:36.097643287Z" level=info msg="connecting to shim a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599" address="unix:///run/containerd/s/bd69de76d8bd3d7284cd0b44985a0010e6618611ac3200f03622cb13ef944319" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:36.116678 systemd[1]: Started cri-containerd-a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599.scope - libcontainer container a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599.
Apr 16 23:30:36.141802 containerd[1891]: time="2026-04-16T23:30:36.141752836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xg97p,Uid:49c39a11-7573-48c1-b255-d956d98108d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\""
Apr 16 23:30:37.123891 kubelet[3413]: E0416 23:30:37.123744 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246"
Apr 16 23:30:37.380606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360857699.mount: Deactivated successfully.
Apr 16 23:30:38.040216 containerd[1891]: time="2026-04-16T23:30:38.039850646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:38.043715 containerd[1891]: time="2026-04-16T23:30:38.043433988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Apr 16 23:30:38.048038 containerd[1891]: time="2026-04-16T23:30:38.047920002Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:38.055220 containerd[1891]: time="2026-04-16T23:30:38.055160560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:38.055901 containerd[1891]: time="2026-04-16T23:30:38.055795313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.037055408s"
Apr 16 23:30:38.055901 containerd[1891]: time="2026-04-16T23:30:38.055823977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Apr 16 23:30:38.057521 containerd[1891]: time="2026-04-16T23:30:38.057313016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 23:30:38.079691 containerd[1891]: time="2026-04-16T23:30:38.079650642Z" level=info msg="CreateContainer within sandbox \"a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 16 23:30:38.105627 containerd[1891]: time="2026-04-16T23:30:38.105573955Z" level=info msg="Container 1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:30:38.109260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635986823.mount: Deactivated successfully.
Apr 16 23:30:38.127726 containerd[1891]: time="2026-04-16T23:30:38.127681591Z" level=info msg="CreateContainer within sandbox \"a3f4f08c45729486ac1a958448076161f7f5db3af6c9e721b52a8607c7f4a6b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43\""
Apr 16 23:30:38.128506 containerd[1891]: time="2026-04-16T23:30:38.128418050Z" level=info msg="StartContainer for \"1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43\""
Apr 16 23:30:38.130922 containerd[1891]: time="2026-04-16T23:30:38.130877763Z" level=info msg="connecting to shim 1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43" address="unix:///run/containerd/s/a6931a402a1bedcbffee7bc84f44ca907af5069c6ce4ada7e55b330266c74dbf" protocol=ttrpc version=3
Apr 16 23:30:38.150653 systemd[1]: Started cri-containerd-1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43.scope - libcontainer container 1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43.
Apr 16 23:30:38.191177 containerd[1891]: time="2026-04-16T23:30:38.191137520Z" level=info msg="StartContainer for \"1f4c635a51cffcbb58a4b9bfac21f171f634692b6b2e8e91ce214323accacb43\" returns successfully"
Apr 16 23:30:39.123747 kubelet[3413]: E0416 23:30:39.123707 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246"
Apr 16 23:30:39.227810 kubelet[3413]: E0416 23:30:39.227774 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.227810 kubelet[3413]: W0416 23:30:39.227800 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.227810 kubelet[3413]: E0416 23:30:39.227822 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228072 kubelet[3413]: E0416 23:30:39.227956 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228072 kubelet[3413]: W0416 23:30:39.227963 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228072 kubelet[3413]: E0416 23:30:39.227996 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228153 kubelet[3413]: E0416 23:30:39.228096 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228153 kubelet[3413]: W0416 23:30:39.228101 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228153 kubelet[3413]: E0416 23:30:39.228106 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228243 kubelet[3413]: E0416 23:30:39.228194 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228243 kubelet[3413]: W0416 23:30:39.228199 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228243 kubelet[3413]: E0416 23:30:39.228204 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228313 kubelet[3413]: E0416 23:30:39.228292 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228313 kubelet[3413]: W0416 23:30:39.228296 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228313 kubelet[3413]: E0416 23:30:39.228301 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228419 kubelet[3413]: E0416 23:30:39.228380 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228419 kubelet[3413]: W0416 23:30:39.228385 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228419 kubelet[3413]: E0416 23:30:39.228390 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228483 kubelet[3413]: E0416 23:30:39.228461 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228483 kubelet[3413]: W0416 23:30:39.228465 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228483 kubelet[3413]: E0416 23:30:39.228469 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228646 kubelet[3413]: E0416 23:30:39.228567 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228646 kubelet[3413]: W0416 23:30:39.228572 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228646 kubelet[3413]: E0416 23:30:39.228578 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228739 kubelet[3413]: E0416 23:30:39.228665 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228739 kubelet[3413]: W0416 23:30:39.228669 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228739 kubelet[3413]: E0416 23:30:39.228674 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228739 kubelet[3413]: E0416 23:30:39.228745 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228842 kubelet[3413]: W0416 23:30:39.228749 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228842 kubelet[3413]: E0416 23:30:39.228753 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.228842 kubelet[3413]: E0416 23:30:39.228822 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.228842 kubelet[3413]: W0416 23:30:39.228826 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.228842 kubelet[3413]: E0416 23:30:39.228830 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.229004 kubelet[3413]: E0416 23:30:39.228904 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.229004 kubelet[3413]: W0416 23:30:39.228907 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.229004 kubelet[3413]: E0416 23:30:39.228911 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.229004 kubelet[3413]: E0416 23:30:39.228988 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.229004 kubelet[3413]: W0416 23:30:39.228992 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.229004 kubelet[3413]: E0416 23:30:39.228996 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.229186 kubelet[3413]: E0416 23:30:39.229075 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.229186 kubelet[3413]: W0416 23:30:39.229078 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.229186 kubelet[3413]: E0416 23:30:39.229083 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.229186 kubelet[3413]: E0416 23:30:39.229150 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.229186 kubelet[3413]: W0416 23:30:39.229154 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.229186 kubelet[3413]: E0416 23:30:39.229158 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.256844 kubelet[3413]: E0416 23:30:39.256806 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.256844 kubelet[3413]: W0416 23:30:39.256832 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.256844 kubelet[3413]: E0416 23:30:39.256852 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.257118 kubelet[3413]: E0416 23:30:39.257003 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.257118 kubelet[3413]: W0416 23:30:39.257010 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.257118 kubelet[3413]: E0416 23:30:39.257017 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.257185 kubelet[3413]: E0416 23:30:39.257178 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.257335 kubelet[3413]: W0416 23:30:39.257185 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.257335 kubelet[3413]: E0416 23:30:39.257192 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.257460 kubelet[3413]: E0416 23:30:39.257446 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.257545 kubelet[3413]: W0416 23:30:39.257533 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.257629 kubelet[3413]: E0416 23:30:39.257617 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:30:39.257925 kubelet[3413]: E0416 23:30:39.257875 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:30:39.257925 kubelet[3413]: W0416 23:30:39.257887 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:30:39.257925 kubelet[3413]: E0416 23:30:39.257895 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 16 23:30:39.258290 kubelet[3413]: E0416 23:30:39.258234 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.258290 kubelet[3413]: W0416 23:30:39.258266 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.258444 kubelet[3413]: E0416 23:30:39.258277 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.258644 kubelet[3413]: E0416 23:30:39.258632 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.258809 kubelet[3413]: W0416 23:30:39.258673 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.258809 kubelet[3413]: E0416 23:30:39.258685 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.258963 kubelet[3413]: E0416 23:30:39.258951 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.259028 kubelet[3413]: W0416 23:30:39.259019 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.259081 kubelet[3413]: E0416 23:30:39.259071 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.259367 kubelet[3413]: E0416 23:30:39.259275 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.259367 kubelet[3413]: W0416 23:30:39.259285 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.259367 kubelet[3413]: E0416 23:30:39.259293 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.259524 kubelet[3413]: E0416 23:30:39.259515 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.259581 kubelet[3413]: W0416 23:30:39.259571 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.259634 kubelet[3413]: E0416 23:30:39.259622 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.259924 kubelet[3413]: E0416 23:30:39.259837 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.259924 kubelet[3413]: W0416 23:30:39.259849 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.259924 kubelet[3413]: E0416 23:30:39.259857 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.260125 kubelet[3413]: E0416 23:30:39.260113 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.260293 kubelet[3413]: W0416 23:30:39.260169 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.260293 kubelet[3413]: E0416 23:30:39.260181 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.260450 kubelet[3413]: E0416 23:30:39.260419 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.260734 kubelet[3413]: W0416 23:30:39.260529 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.260734 kubelet[3413]: E0416 23:30:39.260546 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.260797 kubelet[3413]: E0416 23:30:39.260770 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.260797 kubelet[3413]: W0416 23:30:39.260781 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.260797 kubelet[3413]: E0416 23:30:39.260791 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.260916 kubelet[3413]: E0416 23:30:39.260897 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.260916 kubelet[3413]: W0416 23:30:39.260908 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.260916 kubelet[3413]: E0416 23:30:39.260916 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.261048 kubelet[3413]: E0416 23:30:39.261037 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.261048 kubelet[3413]: W0416 23:30:39.261045 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.261094 kubelet[3413]: E0416 23:30:39.261052 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.261280 kubelet[3413]: E0416 23:30:39.261265 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.261280 kubelet[3413]: W0416 23:30:39.261277 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.261327 kubelet[3413]: E0416 23:30:39.261286 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:39.261656 kubelet[3413]: E0416 23:30:39.261639 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:39.261656 kubelet[3413]: W0416 23:30:39.261652 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:39.261716 kubelet[3413]: E0416 23:30:39.261662 3413 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:39.449133 containerd[1891]: time="2026-04-16T23:30:39.448515091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:39.456632 containerd[1891]: time="2026-04-16T23:30:39.456590076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 16 23:30:39.461032 containerd[1891]: time="2026-04-16T23:30:39.460972286Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:39.468271 containerd[1891]: time="2026-04-16T23:30:39.467878937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:39.468524 containerd[1891]: time="2026-04-16T23:30:39.468232530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.410893553s" Apr 16 23:30:39.468617 containerd[1891]: time="2026-04-16T23:30:39.468603308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 16 23:30:39.477685 containerd[1891]: time="2026-04-16T23:30:39.477647575Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:30:39.497075 containerd[1891]: time="2026-04-16T23:30:39.496086653Z" level=info msg="Container d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:39.517633 containerd[1891]: time="2026-04-16T23:30:39.517588626Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604\"" Apr 16 23:30:39.518408 containerd[1891]: time="2026-04-16T23:30:39.518378591Z" level=info msg="StartContainer for \"d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604\"" Apr 16 23:30:39.521112 containerd[1891]: time="2026-04-16T23:30:39.521070773Z" level=info msg="connecting to shim d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604" address="unix:///run/containerd/s/bd69de76d8bd3d7284cd0b44985a0010e6618611ac3200f03622cb13ef944319" protocol=ttrpc version=3 Apr 16 23:30:39.542690 systemd[1]: Started cri-containerd-d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604.scope - libcontainer container 
d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604. Apr 16 23:30:39.599505 containerd[1891]: time="2026-04-16T23:30:39.599453806Z" level=info msg="StartContainer for \"d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604\" returns successfully" Apr 16 23:30:39.604096 systemd[1]: cri-containerd-d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604.scope: Deactivated successfully. Apr 16 23:30:39.609208 containerd[1891]: time="2026-04-16T23:30:39.609134737Z" level=info msg="received container exit event container_id:\"d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604\" id:\"d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604\" pid:4061 exited_at:{seconds:1776382239 nanos:608480376}" Apr 16 23:30:39.627611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d24a7cc83d4ecbc714c79cd87b1bab8cca4f96a376aea6596d9db3c0bd0cf604-rootfs.mount: Deactivated successfully. Apr 16 23:30:40.197807 kubelet[3413]: I0416 23:30:40.197775 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:30:40.220021 kubelet[3413]: I0416 23:30:40.219956 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c4849ccdd-gplbh" podStartSLOduration=3.181105719 podStartE2EDuration="5.219944462s" podCreationTimestamp="2026-04-16 23:30:35 +0000 UTC" firstStartedPulling="2026-04-16 23:30:36.017778343 +0000 UTC m=+19.001261604" lastFinishedPulling="2026-04-16 23:30:38.056617086 +0000 UTC m=+21.040100347" observedRunningTime="2026-04-16 23:30:39.211451253 +0000 UTC m=+22.194934522" watchObservedRunningTime="2026-04-16 23:30:40.219944462 +0000 UTC m=+23.203427723" Apr 16 23:30:41.124312 kubelet[3413]: E0416 23:30:41.123179 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:41.205989 containerd[1891]: time="2026-04-16T23:30:41.205835719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 23:30:43.123577 kubelet[3413]: E0416 23:30:43.123331 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:45.124536 kubelet[3413]: E0416 23:30:45.122946 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:45.537426 kubelet[3413]: I0416 23:30:45.536938 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:30:46.541298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099476469.mount: Deactivated successfully. 
Apr 16 23:30:46.954024 containerd[1891]: time="2026-04-16T23:30:46.953521193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.957276 containerd[1891]: time="2026-04-16T23:30:46.957247489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 16 23:30:46.961037 containerd[1891]: time="2026-04-16T23:30:46.960986633Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.965760 containerd[1891]: time="2026-04-16T23:30:46.965717003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.966284 containerd[1891]: time="2026-04-16T23:30:46.965995594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 5.759525386s" Apr 16 23:30:46.966284 containerd[1891]: time="2026-04-16T23:30:46.966025755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 16 23:30:46.976308 containerd[1891]: time="2026-04-16T23:30:46.976274179Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 23:30:47.008011 containerd[1891]: time="2026-04-16T23:30:47.007194440Z" level=info msg="Container 
e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:47.029901 containerd[1891]: time="2026-04-16T23:30:47.029857881Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4\"" Apr 16 23:30:47.031326 containerd[1891]: time="2026-04-16T23:30:47.031295150Z" level=info msg="StartContainer for \"e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4\"" Apr 16 23:30:47.032505 containerd[1891]: time="2026-04-16T23:30:47.032465076Z" level=info msg="connecting to shim e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4" address="unix:///run/containerd/s/bd69de76d8bd3d7284cd0b44985a0010e6618611ac3200f03622cb13ef944319" protocol=ttrpc version=3 Apr 16 23:30:47.049646 systemd[1]: Started cri-containerd-e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4.scope - libcontainer container e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4. Apr 16 23:30:47.102917 containerd[1891]: time="2026-04-16T23:30:47.102874139Z" level=info msg="StartContainer for \"e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4\" returns successfully" Apr 16 23:30:47.125470 kubelet[3413]: E0416 23:30:47.125425 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:47.136920 systemd[1]: cri-containerd-e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4.scope: Deactivated successfully. 
Apr 16 23:30:47.141817 containerd[1891]: time="2026-04-16T23:30:47.141724637Z" level=info msg="received container exit event container_id:\"e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4\" id:\"e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4\" pid:4119 exited_at:{seconds:1776382247 nanos:140098323}" Apr 16 23:30:47.159851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e30adb526fe2d8d41c4df2f0d1f24beda674c264df07cc5ad332c883a16a65d4-rootfs.mount: Deactivated successfully. Apr 16 23:30:49.123507 kubelet[3413]: E0416 23:30:49.123108 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:49.224832 containerd[1891]: time="2026-04-16T23:30:49.224774472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 23:30:51.123661 kubelet[3413]: E0416 23:30:51.122995 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:52.648512 containerd[1891]: time="2026-04-16T23:30:52.648277978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.651673 containerd[1891]: time="2026-04-16T23:30:52.651504094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 16 23:30:52.655158 containerd[1891]: time="2026-04-16T23:30:52.655132099Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.662190 containerd[1891]: time="2026-04-16T23:30:52.662156176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.663029 containerd[1891]: time="2026-04-16T23:30:52.663003294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.438178285s" Apr 16 23:30:52.663054 containerd[1891]: time="2026-04-16T23:30:52.663032567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 16 23:30:52.671357 containerd[1891]: time="2026-04-16T23:30:52.671321317Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 23:30:52.695624 containerd[1891]: time="2026-04-16T23:30:52.694832339Z" level=info msg="Container 6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:52.698818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544885647.mount: Deactivated successfully. 
Apr 16 23:30:52.717837 containerd[1891]: time="2026-04-16T23:30:52.717796531Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301\"" Apr 16 23:30:52.718609 containerd[1891]: time="2026-04-16T23:30:52.718586375Z" level=info msg="StartContainer for \"6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301\"" Apr 16 23:30:52.719928 containerd[1891]: time="2026-04-16T23:30:52.719823695Z" level=info msg="connecting to shim 6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301" address="unix:///run/containerd/s/bd69de76d8bd3d7284cd0b44985a0010e6618611ac3200f03622cb13ef944319" protocol=ttrpc version=3 Apr 16 23:30:52.739639 systemd[1]: Started cri-containerd-6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301.scope - libcontainer container 6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301. 
Apr 16 23:30:52.793030 containerd[1891]: time="2026-04-16T23:30:52.792979365Z" level=info msg="StartContainer for \"6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301\" returns successfully" Apr 16 23:30:53.123839 kubelet[3413]: E0416 23:30:53.123509 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:54.737047 containerd[1891]: time="2026-04-16T23:30:54.737002979Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:30:54.740134 systemd[1]: cri-containerd-6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301.scope: Deactivated successfully. Apr 16 23:30:54.740716 systemd[1]: cri-containerd-6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301.scope: Consumed 322ms CPU time, 191.3M memory peak, 171.3M written to disk. Apr 16 23:30:54.742550 containerd[1891]: time="2026-04-16T23:30:54.742303215Z" level=info msg="received container exit event container_id:\"6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301\" id:\"6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301\" pid:4173 exited_at:{seconds:1776382254 nanos:741899245}" Apr 16 23:30:54.759932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d17290f82460760cb979b5e94c027591b003f29b046dea1911a15fb6bd58301-rootfs.mount: Deactivated successfully. 
Apr 16 23:30:54.789840 kubelet[3413]: I0416 23:30:54.789796 3413 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 23:30:54.844558 systemd[1]: Created slice kubepods-burstable-poddfde2fc8_fcb9_4ca1_b70f_520a1c61e6a3.slice - libcontainer container kubepods-burstable-poddfde2fc8_fcb9_4ca1_b70f_520a1c61e6a3.slice. Apr 16 23:30:54.853822 systemd[1]: Created slice kubepods-burstable-pod2aa4ae73_6822_4401_8a34_bdea0014c865.slice - libcontainer container kubepods-burstable-pod2aa4ae73_6822_4401_8a34_bdea0014c865.slice. Apr 16 23:30:54.864804 systemd[1]: Created slice kubepods-besteffort-pod46758d21_42a0_427d_9d3c_b1598016f3d7.slice - libcontainer container kubepods-besteffort-pod46758d21_42a0_427d_9d3c_b1598016f3d7.slice. Apr 16 23:30:54.874591 systemd[1]: Created slice kubepods-besteffort-pod8c36f0ea_9b51_4ee6_bb00_0629e5a07aed.slice - libcontainer container kubepods-besteffort-pod8c36f0ea_9b51_4ee6_bb00_0629e5a07aed.slice. Apr 16 23:30:54.880938 systemd[1]: Created slice kubepods-besteffort-pod8ec5a596_8f22_44e1_849a_6320c61cdd0e.slice - libcontainer container kubepods-besteffort-pod8ec5a596_8f22_44e1_849a_6320c61cdd0e.slice. Apr 16 23:30:54.886777 systemd[1]: Created slice kubepods-besteffort-pod210df17c_95d1_4cbe_97d2_f101c7dc8650.slice - libcontainer container kubepods-besteffort-pod210df17c_95d1_4cbe_97d2_f101c7dc8650.slice. Apr 16 23:30:54.892085 systemd[1]: Created slice kubepods-besteffort-pod919844db_7c0f_41a2_96b8_90a0eed8a30f.slice - libcontainer container kubepods-besteffort-pod919844db_7c0f_41a2_96b8_90a0eed8a30f.slice. 
Apr 16 23:30:54.963838 kubelet[3413]: I0416 23:30:54.963403 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87cv\" (UniqueName: \"kubernetes.io/projected/2aa4ae73-6822-4401-8a34-bdea0014c865-kube-api-access-v87cv\") pod \"coredns-66bc5c9577-7tlgn\" (UID: \"2aa4ae73-6822-4401-8a34-bdea0014c865\") " pod="kube-system/coredns-66bc5c9577-7tlgn" Apr 16 23:30:54.963838 kubelet[3413]: I0416 23:30:54.963442 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq7l5\" (UniqueName: \"kubernetes.io/projected/8c36f0ea-9b51-4ee6-bb00-0629e5a07aed-kube-api-access-kq7l5\") pod \"goldmane-cccfbd5cf-bd296\" (UID: \"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed\") " pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:54.963838 kubelet[3413]: I0416 23:30:54.963457 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-ca-bundle\") pod \"whisker-66b8b5fb-jhf7g\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:54.963838 kubelet[3413]: I0416 23:30:54.963468 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/210df17c-95d1-4cbe-97d2-f101c7dc8650-calico-apiserver-certs\") pod \"calico-apiserver-7d7bf747c9-rs7rf\" (UID: \"210df17c-95d1-4cbe-97d2-f101c7dc8650\") " pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" Apr 16 23:30:54.964589 kubelet[3413]: I0416 23:30:54.963481 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzct\" (UniqueName: \"kubernetes.io/projected/dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3-kube-api-access-brzct\") pod \"coredns-66bc5c9577-tpqd4\" (UID: 
\"dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3\") " pod="kube-system/coredns-66bc5c9577-tpqd4" Apr 16 23:30:54.964589 kubelet[3413]: I0416 23:30:54.964159 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-nginx-config\") pod \"whisker-66b8b5fb-jhf7g\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:54.964589 kubelet[3413]: I0416 23:30:54.964181 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3-config-volume\") pod \"coredns-66bc5c9577-tpqd4\" (UID: \"dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3\") " pod="kube-system/coredns-66bc5c9577-tpqd4" Apr 16 23:30:54.964589 kubelet[3413]: I0416 23:30:54.964192 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa4ae73-6822-4401-8a34-bdea0014c865-config-volume\") pod \"coredns-66bc5c9577-7tlgn\" (UID: \"2aa4ae73-6822-4401-8a34-bdea0014c865\") " pod="kube-system/coredns-66bc5c9577-7tlgn" Apr 16 23:30:54.964589 kubelet[3413]: I0416 23:30:54.964202 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c36f0ea-9b51-4ee6-bb00-0629e5a07aed-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-bd296\" (UID: \"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed\") " pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:54.964770 kubelet[3413]: I0416 23:30:54.964210 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8c36f0ea-9b51-4ee6-bb00-0629e5a07aed-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-bd296\" (UID: 
\"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed\") " pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:54.964770 kubelet[3413]: I0416 23:30:54.964220 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46758d21-42a0-427d-9d3c-b1598016f3d7-tigera-ca-bundle\") pod \"calico-kube-controllers-746789d58c-h6p6n\" (UID: \"46758d21-42a0-427d-9d3c-b1598016f3d7\") " pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" Apr 16 23:30:54.964770 kubelet[3413]: I0416 23:30:54.964232 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-backend-key-pair\") pod \"whisker-66b8b5fb-jhf7g\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:54.964770 kubelet[3413]: I0416 23:30:54.964242 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/919844db-7c0f-41a2-96b8-90a0eed8a30f-calico-apiserver-certs\") pod \"calico-apiserver-7d7bf747c9-sgrfm\" (UID: \"919844db-7c0f-41a2-96b8-90a0eed8a30f\") " pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" Apr 16 23:30:54.964770 kubelet[3413]: I0416 23:30:54.964250 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px27m\" (UniqueName: \"kubernetes.io/projected/919844db-7c0f-41a2-96b8-90a0eed8a30f-kube-api-access-px27m\") pod \"calico-apiserver-7d7bf747c9-sgrfm\" (UID: \"919844db-7c0f-41a2-96b8-90a0eed8a30f\") " pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" Apr 16 23:30:54.964847 kubelet[3413]: I0416 23:30:54.964263 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8pk\" 
(UniqueName: \"kubernetes.io/projected/46758d21-42a0-427d-9d3c-b1598016f3d7-kube-api-access-gb8pk\") pod \"calico-kube-controllers-746789d58c-h6p6n\" (UID: \"46758d21-42a0-427d-9d3c-b1598016f3d7\") " pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" Apr 16 23:30:54.964847 kubelet[3413]: I0416 23:30:54.964271 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6kl\" (UniqueName: \"kubernetes.io/projected/8ec5a596-8f22-44e1-849a-6320c61cdd0e-kube-api-access-8s6kl\") pod \"whisker-66b8b5fb-jhf7g\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:54.964847 kubelet[3413]: I0416 23:30:54.964284 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c36f0ea-9b51-4ee6-bb00-0629e5a07aed-config\") pod \"goldmane-cccfbd5cf-bd296\" (UID: \"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed\") " pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:54.964847 kubelet[3413]: I0416 23:30:54.964294 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6fm\" (UniqueName: \"kubernetes.io/projected/210df17c-95d1-4cbe-97d2-f101c7dc8650-kube-api-access-mn6fm\") pod \"calico-apiserver-7d7bf747c9-rs7rf\" (UID: \"210df17c-95d1-4cbe-97d2-f101c7dc8650\") " pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" Apr 16 23:30:55.128692 systemd[1]: Created slice kubepods-besteffort-pod92439691_ea93_41b8_af87_604eaab62246.slice - libcontainer container kubepods-besteffort-pod92439691_ea93_41b8_af87_604eaab62246.slice. 
Apr 16 23:30:55.137787 containerd[1891]: time="2026-04-16T23:30:55.137745061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvtzd,Uid:92439691-ea93-41b8-af87-604eaab62246,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.158906 containerd[1891]: time="2026-04-16T23:30:55.158616383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tpqd4,Uid:dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:55.166377 containerd[1891]: time="2026-04-16T23:30:55.166343881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7tlgn,Uid:2aa4ae73-6822-4401-8a34-bdea0014c865,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:55.183883 containerd[1891]: time="2026-04-16T23:30:55.183842367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746789d58c-h6p6n,Uid:46758d21-42a0-427d-9d3c-b1598016f3d7,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.192402 containerd[1891]: time="2026-04-16T23:30:55.192290162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bd296,Uid:8c36f0ea-9b51-4ee6-bb00-0629e5a07aed,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.203959 containerd[1891]: time="2026-04-16T23:30:55.203901005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8b5fb-jhf7g,Uid:8ec5a596-8f22-44e1-849a-6320c61cdd0e,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.209763 containerd[1891]: time="2026-04-16T23:30:55.209730551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-sgrfm,Uid:919844db-7c0f-41a2-96b8-90a0eed8a30f,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.211349 containerd[1891]: time="2026-04-16T23:30:55.211301798Z" level=error msg="Failed to destroy network for sandbox \"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.216070 containerd[1891]: time="2026-04-16T23:30:55.216032525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-rs7rf,Uid:210df17c-95d1-4cbe-97d2-f101c7dc8650,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:55.233453 containerd[1891]: time="2026-04-16T23:30:55.233410280Z" level=error msg="Failed to destroy network for sandbox \"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.241309 containerd[1891]: time="2026-04-16T23:30:55.241261053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvtzd,Uid:92439691-ea93-41b8-af87-604eaab62246,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.244888 kubelet[3413]: E0416 23:30:55.244675 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.245046 kubelet[3413]: E0416 23:30:55.245029 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:55.245214 kubelet[3413]: E0416 23:30:55.245103 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvtzd" Apr 16 23:30:55.249970 kubelet[3413]: E0416 23:30:55.249933 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvtzd_calico-system(92439691-ea93-41b8-af87-604eaab62246)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvtzd_calico-system(92439691-ea93-41b8-af87-604eaab62246)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5121f0820dd52ca2917d718319035e2f0fbbf11979aa5f9a62a5bb859ce437c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvtzd" podUID="92439691-ea93-41b8-af87-604eaab62246" Apr 16 23:30:55.272143 containerd[1891]: time="2026-04-16T23:30:55.272084976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tpqd4,Uid:dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.273843 kubelet[3413]: E0416 23:30:55.273088 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.273843 kubelet[3413]: E0416 23:30:55.273136 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tpqd4" Apr 16 23:30:55.273843 kubelet[3413]: E0416 23:30:55.273152 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tpqd4" Apr 16 23:30:55.274002 kubelet[3413]: E0416 23:30:55.273193 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tpqd4_kube-system(dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tpqd4_kube-system(dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"d06d24fc3bcf8f678f31dacac9fbdb2183615db5be4634a35c2a963ad167300e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tpqd4" podUID="dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3" Apr 16 23:30:55.290742 containerd[1891]: time="2026-04-16T23:30:55.290691954Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 23:30:55.302869 containerd[1891]: time="2026-04-16T23:30:55.302820194Z" level=error msg="Failed to destroy network for sandbox \"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.317053 containerd[1891]: time="2026-04-16T23:30:55.316829921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7tlgn,Uid:2aa4ae73-6822-4401-8a34-bdea0014c865,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.317829 kubelet[3413]: E0416 23:30:55.317463 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 16 23:30:55.317829 kubelet[3413]: E0416 23:30:55.317524 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7tlgn" Apr 16 23:30:55.317829 kubelet[3413]: E0416 23:30:55.317546 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7tlgn" Apr 16 23:30:55.318604 kubelet[3413]: E0416 23:30:55.317595 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7tlgn_kube-system(2aa4ae73-6822-4401-8a34-bdea0014c865)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7tlgn_kube-system(2aa4ae73-6822-4401-8a34-bdea0014c865)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99e05292cf60eba3b05db1195d6746b2f3a8a0266ae914594c1f6c74a9c1c8ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7tlgn" podUID="2aa4ae73-6822-4401-8a34-bdea0014c865" Apr 16 23:30:55.339380 containerd[1891]: time="2026-04-16T23:30:55.339331900Z" level=info msg="Container 93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a: CDI 
devices from CRI Config.CDIDevices: []" Apr 16 23:30:55.343812 containerd[1891]: time="2026-04-16T23:30:55.343770699Z" level=error msg="Failed to destroy network for sandbox \"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.367742 containerd[1891]: time="2026-04-16T23:30:55.367622432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746789d58c-h6p6n,Uid:46758d21-42a0-427d-9d3c-b1598016f3d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.368036 kubelet[3413]: E0416 23:30:55.368009 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.369512 kubelet[3413]: E0416 23:30:55.368131 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" Apr 16 23:30:55.369512 
kubelet[3413]: E0416 23:30:55.368153 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" Apr 16 23:30:55.369512 kubelet[3413]: E0416 23:30:55.368200 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746789d58c-h6p6n_calico-system(46758d21-42a0-427d-9d3c-b1598016f3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746789d58c-h6p6n_calico-system(46758d21-42a0-427d-9d3c-b1598016f3d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20a8a3516028b174aebe2c0c8de0422bdc77520f6a42fa6a6375b5caa7ec16f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" podUID="46758d21-42a0-427d-9d3c-b1598016f3d7" Apr 16 23:30:55.375688 containerd[1891]: time="2026-04-16T23:30:55.375647473Z" level=info msg="CreateContainer within sandbox \"a6ee51393024b0b9387447676cb89829f56260cd857b8595e63eb6b9bd486599\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a\"" Apr 16 23:30:55.376774 containerd[1891]: time="2026-04-16T23:30:55.376752661Z" level=info msg="StartContainer for \"93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a\"" Apr 16 23:30:55.401140 containerd[1891]: time="2026-04-16T23:30:55.400751886Z" level=error msg="Failed to destroy network for sandbox 
\"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.402740 containerd[1891]: time="2026-04-16T23:30:55.402688030Z" level=error msg="Failed to destroy network for sandbox \"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.403954 containerd[1891]: time="2026-04-16T23:30:55.403912341Z" level=info msg="connecting to shim 93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a" address="unix:///run/containerd/s/bd69de76d8bd3d7284cd0b44985a0010e6618611ac3200f03622cb13ef944319" protocol=ttrpc version=3 Apr 16 23:30:55.406421 containerd[1891]: time="2026-04-16T23:30:55.406387395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-sgrfm,Uid:919844db-7c0f-41a2-96b8-90a0eed8a30f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.407663 kubelet[3413]: E0416 23:30:55.407619 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 
23:30:55.407758 kubelet[3413]: E0416 23:30:55.407681 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" Apr 16 23:30:55.407758 kubelet[3413]: E0416 23:30:55.407698 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" Apr 16 23:30:55.407847 kubelet[3413]: E0416 23:30:55.407748 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d7bf747c9-sgrfm_calico-system(919844db-7c0f-41a2-96b8-90a0eed8a30f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d7bf747c9-sgrfm_calico-system(919844db-7c0f-41a2-96b8-90a0eed8a30f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"122c56b4f1d11dd050982fc5aecd5bc9c3ebc8e5af375538526f95dc2b609bab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" podUID="919844db-7c0f-41a2-96b8-90a0eed8a30f" Apr 16 23:30:55.409436 containerd[1891]: time="2026-04-16T23:30:55.409394278Z" level=error msg="Failed to destroy network for sandbox 
\"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.411999 containerd[1891]: time="2026-04-16T23:30:55.411600534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8b5fb-jhf7g,Uid:8ec5a596-8f22-44e1-849a-6320c61cdd0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.412267 kubelet[3413]: E0416 23:30:55.412216 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.412334 kubelet[3413]: E0416 23:30:55.412288 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:55.412334 kubelet[3413]: E0416 23:30:55.412305 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8b5fb-jhf7g" Apr 16 23:30:55.412378 kubelet[3413]: E0416 23:30:55.412356 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66b8b5fb-jhf7g_calico-system(8ec5a596-8f22-44e1-849a-6320c61cdd0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66b8b5fb-jhf7g_calico-system(8ec5a596-8f22-44e1-849a-6320c61cdd0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd09a183276b9618b69d6d324c28c454fa6916acc842f6a08f8297d760525b27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66b8b5fb-jhf7g" podUID="8ec5a596-8f22-44e1-849a-6320c61cdd0e" Apr 16 23:30:55.415643 containerd[1891]: time="2026-04-16T23:30:55.415596466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-rs7rf,Uid:210df17c-95d1-4cbe-97d2-f101c7dc8650,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.416135 kubelet[3413]: E0416 23:30:55.415796 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.416135 kubelet[3413]: E0416 23:30:55.415860 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" Apr 16 23:30:55.416135 kubelet[3413]: E0416 23:30:55.415874 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" Apr 16 23:30:55.416215 kubelet[3413]: E0416 23:30:55.415925 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d7bf747c9-rs7rf_calico-system(210df17c-95d1-4cbe-97d2-f101c7dc8650)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d7bf747c9-rs7rf_calico-system(210df17c-95d1-4cbe-97d2-f101c7dc8650)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2907eec21c68dbd8100ee139814bd4f5f155b06bb9aa7884eab0caea8227b414\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" podUID="210df17c-95d1-4cbe-97d2-f101c7dc8650" Apr 16 23:30:55.418956 
containerd[1891]: time="2026-04-16T23:30:55.418857963Z" level=error msg="Failed to destroy network for sandbox \"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.425290 containerd[1891]: time="2026-04-16T23:30:55.425235115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bd296,Uid:8c36f0ea-9b51-4ee6-bb00-0629e5a07aed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.426983 kubelet[3413]: E0416 23:30:55.426626 3413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:55.426983 kubelet[3413]: E0416 23:30:55.426873 3413 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:55.427468 kubelet[3413]: E0416 23:30:55.427264 3413 kuberuntime_manager.go:1343] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-bd296" Apr 16 23:30:55.427663 kubelet[3413]: E0416 23:30:55.427573 3413 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-bd296_calico-system(8c36f0ea-9b51-4ee6-bb00-0629e5a07aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-bd296_calico-system(8c36f0ea-9b51-4ee6-bb00-0629e5a07aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bef6a3d0bab9ec44f50c7d813361f919fbca8a92f83cdbf6f052b3b0e6fbdf01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-bd296" podUID="8c36f0ea-9b51-4ee6-bb00-0629e5a07aed" Apr 16 23:30:55.429787 systemd[1]: Started cri-containerd-93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a.scope - libcontainer container 93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a. 
Apr 16 23:30:55.485574 containerd[1891]: time="2026-04-16T23:30:55.485538961Z" level=info msg="StartContainer for \"93cafff01aa27e49fcc9d967bd4f550bbced1d7f2e7458ad01b9e97a1e684f5a\" returns successfully" Apr 16 23:30:56.303161 kubelet[3413]: I0416 23:30:56.302020 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xg97p" podStartSLOduration=4.782929412 podStartE2EDuration="21.302003948s" podCreationTimestamp="2026-04-16 23:30:35 +0000 UTC" firstStartedPulling="2026-04-16 23:30:36.144464028 +0000 UTC m=+19.127947297" lastFinishedPulling="2026-04-16 23:30:52.663538572 +0000 UTC m=+35.647021833" observedRunningTime="2026-04-16 23:30:56.301816959 +0000 UTC m=+39.285300260" watchObservedRunningTime="2026-04-16 23:30:56.302003948 +0000 UTC m=+39.285487209" Apr 16 23:30:56.373628 kubelet[3413]: I0416 23:30:56.373586 3413 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s6kl\" (UniqueName: \"kubernetes.io/projected/8ec5a596-8f22-44e1-849a-6320c61cdd0e-kube-api-access-8s6kl\") pod \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " Apr 16 23:30:56.374544 kubelet[3413]: I0416 23:30:56.373993 3413 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-ca-bundle\") pod \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " Apr 16 23:30:56.374544 kubelet[3413]: I0416 23:30:56.374014 3413 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-nginx-config\") pod \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " Apr 16 23:30:56.374544 kubelet[3413]: I0416 23:30:56.374027 3413 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-backend-key-pair\") pod \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\" (UID: \"8ec5a596-8f22-44e1-849a-6320c61cdd0e\") " Apr 16 23:30:56.374651 kubelet[3413]: I0416 23:30:56.374613 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8ec5a596-8f22-44e1-849a-6320c61cdd0e" (UID: "8ec5a596-8f22-44e1-849a-6320c61cdd0e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:30:56.374766 kubelet[3413]: I0416 23:30:56.374746 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "8ec5a596-8f22-44e1-849a-6320c61cdd0e" (UID: "8ec5a596-8f22-44e1-849a-6320c61cdd0e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:30:56.378328 kubelet[3413]: I0416 23:30:56.377730 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec5a596-8f22-44e1-849a-6320c61cdd0e-kube-api-access-8s6kl" (OuterVolumeSpecName: "kube-api-access-8s6kl") pod "8ec5a596-8f22-44e1-849a-6320c61cdd0e" (UID: "8ec5a596-8f22-44e1-849a-6320c61cdd0e"). InnerVolumeSpecName "kube-api-access-8s6kl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:30:56.377866 systemd[1]: var-lib-kubelet-pods-8ec5a596\x2d8f22\x2d44e1\x2d849a\x2d6320c61cdd0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8s6kl.mount: Deactivated successfully. 
Apr 16 23:30:56.379905 kubelet[3413]: I0416 23:30:56.379874 3413 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8ec5a596-8f22-44e1-849a-6320c61cdd0e" (UID: "8ec5a596-8f22-44e1-849a-6320c61cdd0e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:30:56.380346 systemd[1]: var-lib-kubelet-pods-8ec5a596\x2d8f22\x2d44e1\x2d849a\x2d6320c61cdd0e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:30:56.474872 kubelet[3413]: I0416 23:30:56.474802 3413 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8s6kl\" (UniqueName: \"kubernetes.io/projected/8ec5a596-8f22-44e1-849a-6320c61cdd0e-kube-api-access-8s6kl\") on node \"ci-4459.2.4-n-b3358a4beb\" DevicePath \"\"" Apr 16 23:30:56.474872 kubelet[3413]: I0416 23:30:56.474837 3413 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-ca-bundle\") on node \"ci-4459.2.4-n-b3358a4beb\" DevicePath \"\"" Apr 16 23:30:56.474872 kubelet[3413]: I0416 23:30:56.474844 3413 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8ec5a596-8f22-44e1-849a-6320c61cdd0e-nginx-config\") on node \"ci-4459.2.4-n-b3358a4beb\" DevicePath \"\"" Apr 16 23:30:56.474872 kubelet[3413]: I0416 23:30:56.474851 3413 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ec5a596-8f22-44e1-849a-6320c61cdd0e-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-b3358a4beb\" DevicePath \"\"" Apr 16 23:30:57.132404 systemd[1]: Removed slice kubepods-besteffort-pod8ec5a596_8f22_44e1_849a_6320c61cdd0e.slice - libcontainer container 
kubepods-besteffort-pod8ec5a596_8f22_44e1_849a_6320c61cdd0e.slice. Apr 16 23:30:57.381502 systemd[1]: Created slice kubepods-besteffort-podc5658e3f_6dd8_48a1_b493_7a4f4f417940.slice - libcontainer container kubepods-besteffort-podc5658e3f_6dd8_48a1_b493_7a4f4f417940.slice. Apr 16 23:30:57.482321 kubelet[3413]: I0416 23:30:57.482199 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5658e3f-6dd8-48a1-b493-7a4f4f417940-whisker-backend-key-pair\") pod \"whisker-68c6c5d7b9-9pkfn\" (UID: \"c5658e3f-6dd8-48a1-b493-7a4f4f417940\") " pod="calico-system/whisker-68c6c5d7b9-9pkfn" Apr 16 23:30:57.483121 kubelet[3413]: I0416 23:30:57.482721 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98752\" (UniqueName: \"kubernetes.io/projected/c5658e3f-6dd8-48a1-b493-7a4f4f417940-kube-api-access-98752\") pod \"whisker-68c6c5d7b9-9pkfn\" (UID: \"c5658e3f-6dd8-48a1-b493-7a4f4f417940\") " pod="calico-system/whisker-68c6c5d7b9-9pkfn" Apr 16 23:30:57.483121 kubelet[3413]: I0416 23:30:57.482766 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5658e3f-6dd8-48a1-b493-7a4f4f417940-nginx-config\") pod \"whisker-68c6c5d7b9-9pkfn\" (UID: \"c5658e3f-6dd8-48a1-b493-7a4f4f417940\") " pod="calico-system/whisker-68c6c5d7b9-9pkfn" Apr 16 23:30:57.483121 kubelet[3413]: I0416 23:30:57.482805 3413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5658e3f-6dd8-48a1-b493-7a4f4f417940-whisker-ca-bundle\") pod \"whisker-68c6c5d7b9-9pkfn\" (UID: \"c5658e3f-6dd8-48a1-b493-7a4f4f417940\") " pod="calico-system/whisker-68c6c5d7b9-9pkfn" Apr 16 23:30:57.531298 systemd-networkd[1480]: vxlan.calico: Link UP Apr 16 
23:30:57.531305 systemd-networkd[1480]: vxlan.calico: Gained carrier Apr 16 23:30:57.691500 containerd[1891]: time="2026-04-16T23:30:57.691451666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68c6c5d7b9-9pkfn,Uid:c5658e3f-6dd8-48a1-b493-7a4f4f417940,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:57.825444 systemd-networkd[1480]: calidf875666f85: Link UP Apr 16 23:30:57.826802 systemd-networkd[1480]: calidf875666f85: Gained carrier Apr 16 23:30:57.847996 containerd[1891]: 2026-04-16 23:30:57.755 [INFO][4702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0 whisker-68c6c5d7b9- calico-system c5658e3f-6dd8-48a1-b493-7a4f4f417940 885 0 2026-04-16 23:30:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68c6c5d7b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb whisker-68c6c5d7b9-9pkfn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf875666f85 [] [] }} ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-" Apr 16 23:30:57.847996 containerd[1891]: 2026-04-16 23:30:57.755 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.847996 containerd[1891]: 2026-04-16 23:30:57.775 [INFO][4716] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" 
HandleID="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Workload="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.781 [INFO][4716] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" HandleID="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Workload="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"whisker-68c6c5d7b9-9pkfn", "timestamp":"2026-04-16 23:30:57.775594789 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003aaf20)} Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.781 [INFO][4716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.781 [INFO][4716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.781 [INFO][4716] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.783 [INFO][4716] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.787 [INFO][4716] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.791 [INFO][4716] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.795 [INFO][4716] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848203 containerd[1891]: 2026-04-16 23:30:57.797 [INFO][4716] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.797 [INFO][4716] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.798 [INFO][4716] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180 Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.807 [INFO][4716] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.812 [INFO][4716] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.57.193/26] block=192.168.57.192/26 handle="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.813 [INFO][4716] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.193/26] handle="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.813 [INFO][4716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:57.848337 containerd[1891]: 2026-04-16 23:30:57.813 [INFO][4716] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.193/26] IPv6=[] ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" HandleID="k8s-pod-network.a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Workload="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.849081 containerd[1891]: 2026-04-16 23:30:57.819 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0", GenerateName:"whisker-68c6c5d7b9-", Namespace:"calico-system", SelfLink:"", UID:"c5658e3f-6dd8-48a1-b493-7a4f4f417940", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68c6c5d7b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"whisker-68c6c5d7b9-9pkfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf875666f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:57.849081 containerd[1891]: 2026-04-16 23:30:57.819 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.193/32] ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.849149 containerd[1891]: 2026-04-16 23:30:57.819 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf875666f85 ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.849149 containerd[1891]: 2026-04-16 23:30:57.827 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.849177 containerd[1891]: 2026-04-16 23:30:57.827 [INFO][4702] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0", GenerateName:"whisker-68c6c5d7b9-", Namespace:"calico-system", SelfLink:"", UID:"c5658e3f-6dd8-48a1-b493-7a4f4f417940", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68c6c5d7b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180", Pod:"whisker-68c6c5d7b9-9pkfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf875666f85", MAC:"56:f0:7d:f8:1b:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:57.849212 containerd[1891]: 2026-04-16 23:30:57.845 [INFO][4702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" Namespace="calico-system" Pod="whisker-68c6c5d7b9-9pkfn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-whisker--68c6c5d7b9--9pkfn-eth0" Apr 16 23:30:57.894408 containerd[1891]: time="2026-04-16T23:30:57.894316361Z" level=info msg="connecting to shim a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180" address="unix:///run/containerd/s/7076625fc0f8c61960b5538cbda71f7c11607e5f09e52562e03223fbf4f2f537" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:57.921658 systemd[1]: Started cri-containerd-a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180.scope - libcontainer container a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180. Apr 16 23:30:57.959606 containerd[1891]: time="2026-04-16T23:30:57.959569803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68c6c5d7b9-9pkfn,Uid:c5658e3f-6dd8-48a1-b493-7a4f4f417940,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180\"" Apr 16 23:30:57.961669 containerd[1891]: time="2026-04-16T23:30:57.961612110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:30:59.124824 kubelet[3413]: I0416 23:30:59.124682 3413 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec5a596-8f22-44e1-849a-6320c61cdd0e" path="/var/lib/kubelet/pods/8ec5a596-8f22-44e1-849a-6320c61cdd0e/volumes" Apr 16 23:30:59.241733 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Apr 16 23:30:59.242586 systemd-networkd[1480]: calidf875666f85: Gained IPv6LL Apr 16 23:30:59.529624 containerd[1891]: time="2026-04-16T23:30:59.529494240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:59.533396 containerd[1891]: time="2026-04-16T23:30:59.533192668Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 16 23:30:59.540994 containerd[1891]: time="2026-04-16T23:30:59.540941942Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:59.545304 containerd[1891]: time="2026-04-16T23:30:59.545256786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:59.545780 containerd[1891]: time="2026-04-16T23:30:59.545638988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.584000421s" Apr 16 23:30:59.545780 containerd[1891]: time="2026-04-16T23:30:59.545666885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 16 23:30:59.554681 containerd[1891]: time="2026-04-16T23:30:59.554647605Z" level=info msg="CreateContainer within sandbox \"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:30:59.579186 containerd[1891]: time="2026-04-16T23:30:59.579134099Z" level=info msg="Container 5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:59.579954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668114229.mount: Deactivated successfully. 
Apr 16 23:30:59.605057 containerd[1891]: time="2026-04-16T23:30:59.604935785Z" level=info msg="CreateContainer within sandbox \"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8\"" Apr 16 23:30:59.606680 containerd[1891]: time="2026-04-16T23:30:59.606655004Z" level=info msg="StartContainer for \"5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8\"" Apr 16 23:30:59.607779 containerd[1891]: time="2026-04-16T23:30:59.607752543Z" level=info msg="connecting to shim 5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8" address="unix:///run/containerd/s/7076625fc0f8c61960b5538cbda71f7c11607e5f09e52562e03223fbf4f2f537" protocol=ttrpc version=3 Apr 16 23:30:59.626077 systemd[1]: Started cri-containerd-5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8.scope - libcontainer container 5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8. Apr 16 23:30:59.659413 containerd[1891]: time="2026-04-16T23:30:59.659351915Z" level=info msg="StartContainer for \"5ff5bb216733977dfd39bd7498e7bc7a07bddb8c42a74b55737abb513bac80a8\" returns successfully" Apr 16 23:30:59.661877 containerd[1891]: time="2026-04-16T23:30:59.661688318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:31:01.316659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254248084.mount: Deactivated successfully. 
Apr 16 23:31:01.406590 containerd[1891]: time="2026-04-16T23:31:01.406536566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:01.409629 containerd[1891]: time="2026-04-16T23:31:01.409460575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 16 23:31:01.413024 containerd[1891]: time="2026-04-16T23:31:01.412995976Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:01.418792 containerd[1891]: time="2026-04-16T23:31:01.418693998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:01.419205 containerd[1891]: time="2026-04-16T23:31:01.419077112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.757355618s" Apr 16 23:31:01.419205 containerd[1891]: time="2026-04-16T23:31:01.419109705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 16 23:31:01.428927 containerd[1891]: time="2026-04-16T23:31:01.428840748Z" level=info msg="CreateContainer within sandbox \"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:31:01.453018 
containerd[1891]: time="2026-04-16T23:31:01.452120403Z" level=info msg="Container 964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:01.456659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2498535476.mount: Deactivated successfully. Apr 16 23:31:01.474471 containerd[1891]: time="2026-04-16T23:31:01.474426706Z" level=info msg="CreateContainer within sandbox \"a5db9341ddebe1d73a39371f08225713e33fb412f815c84961ee9e43bc157180\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66\"" Apr 16 23:31:01.475507 containerd[1891]: time="2026-04-16T23:31:01.475462372Z" level=info msg="StartContainer for \"964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66\"" Apr 16 23:31:01.476843 containerd[1891]: time="2026-04-16T23:31:01.476804397Z" level=info msg="connecting to shim 964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66" address="unix:///run/containerd/s/7076625fc0f8c61960b5538cbda71f7c11607e5f09e52562e03223fbf4f2f537" protocol=ttrpc version=3 Apr 16 23:31:01.496656 systemd[1]: Started cri-containerd-964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66.scope - libcontainer container 964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66. 
Apr 16 23:31:01.535974 containerd[1891]: time="2026-04-16T23:31:01.535933910Z" level=info msg="StartContainer for \"964b7b3ad4f79b05653a1c09cddd5daf366a04b365372a6b4d5c83e5c31adb66\" returns successfully" Apr 16 23:31:02.301874 kubelet[3413]: I0416 23:31:02.301813 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68c6c5d7b9-9pkfn" podStartSLOduration=1.8428532469999999 podStartE2EDuration="5.301795302s" podCreationTimestamp="2026-04-16 23:30:57 +0000 UTC" firstStartedPulling="2026-04-16 23:30:57.961202748 +0000 UTC m=+40.944686009" lastFinishedPulling="2026-04-16 23:31:01.420144803 +0000 UTC m=+44.403628064" observedRunningTime="2026-04-16 23:31:02.301279705 +0000 UTC m=+45.284762966" watchObservedRunningTime="2026-04-16 23:31:02.301795302 +0000 UTC m=+45.285278563" Apr 16 23:31:06.132643 containerd[1891]: time="2026-04-16T23:31:06.132599140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvtzd,Uid:92439691-ea93-41b8-af87-604eaab62246,Namespace:calico-system,Attempt:0,}" Apr 16 23:31:06.240984 systemd-networkd[1480]: cali1251d6a589b: Link UP Apr 16 23:31:06.241128 systemd-networkd[1480]: cali1251d6a589b: Gained carrier Apr 16 23:31:06.261156 containerd[1891]: 2026-04-16 23:31:06.174 [INFO][4913] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0 csi-node-driver- calico-system 92439691-ea93-41b8-af87-604eaab62246 691 0 2026-04-16 23:30:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb csi-node-driver-kvtzd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
cali1251d6a589b [] [] }} ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-" Apr 16 23:31:06.261156 containerd[1891]: 2026-04-16 23:31:06.174 [INFO][4913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261156 containerd[1891]: 2026-04-16 23:31:06.194 [INFO][4927] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" HandleID="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Workload="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.201 [INFO][4927] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" HandleID="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Workload="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"csi-node-driver-kvtzd", "timestamp":"2026-04-16 23:31:06.194960427 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b4f20)} Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.201 [INFO][4927] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.201 [INFO][4927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.201 [INFO][4927] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.203 [INFO][4927] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.207 [INFO][4927] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.214 [INFO][4927] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.216 [INFO][4927] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261365 containerd[1891]: 2026-04-16 23:31:06.218 [INFO][4927] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.218 [INFO][4927] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.221 [INFO][4927] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.226 [INFO][4927] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 
handle="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.235 [INFO][4927] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.57.194/26] block=192.168.57.192/26 handle="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.236 [INFO][4927] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.194/26] handle="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.236 [INFO][4927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:31:06.261678 containerd[1891]: 2026-04-16 23:31:06.236 [INFO][4927] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.194/26] IPv6=[] ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" HandleID="k8s-pod-network.60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Workload="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261816 containerd[1891]: 2026-04-16 23:31:06.238 [INFO][4913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92439691-ea93-41b8-af87-604eaab62246", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"csi-node-driver-kvtzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1251d6a589b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:06.261861 containerd[1891]: 2026-04-16 23:31:06.238 [INFO][4913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.194/32] ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261861 containerd[1891]: 2026-04-16 23:31:06.238 [INFO][4913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1251d6a589b ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261861 containerd[1891]: 2026-04-16 23:31:06.241 [INFO][4913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.261925 containerd[1891]: 2026-04-16 23:31:06.242 [INFO][4913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92439691-ea93-41b8-af87-604eaab62246", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f", Pod:"csi-node-driver-kvtzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1251d6a589b", MAC:"26:8d:bb:28:28:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:06.261977 containerd[1891]: 2026-04-16 23:31:06.256 [INFO][4913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" Namespace="calico-system" Pod="csi-node-driver-kvtzd" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-csi--node--driver--kvtzd-eth0" Apr 16 23:31:06.321314 containerd[1891]: time="2026-04-16T23:31:06.321208343Z" level=info msg="connecting to shim 60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f" address="unix:///run/containerd/s/8e1f1550696323fb448c4057b0f3761710c1fe5c532cd996706446f6d035c312" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:06.341629 systemd[1]: Started cri-containerd-60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f.scope - libcontainer container 60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f. 
Apr 16 23:31:06.379829 containerd[1891]: time="2026-04-16T23:31:06.379780192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvtzd,Uid:92439691-ea93-41b8-af87-604eaab62246,Namespace:calico-system,Attempt:0,} returns sandbox id \"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f\"" Apr 16 23:31:06.381672 containerd[1891]: time="2026-04-16T23:31:06.381639406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:31:07.132496 containerd[1891]: time="2026-04-16T23:31:07.132435917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bd296,Uid:8c36f0ea-9b51-4ee6-bb00-0629e5a07aed,Namespace:calico-system,Attempt:0,}" Apr 16 23:31:07.140376 containerd[1891]: time="2026-04-16T23:31:07.138159523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tpqd4,Uid:dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3,Namespace:kube-system,Attempt:0,}" Apr 16 23:31:07.263379 systemd-networkd[1480]: cali7ddda6a7e78: Link UP Apr 16 23:31:07.264462 systemd-networkd[1480]: cali7ddda6a7e78: Gained carrier Apr 16 23:31:07.284770 containerd[1891]: 2026-04-16 23:31:07.180 [INFO][5003] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0 goldmane-cccfbd5cf- calico-system 8c36f0ea-9b51-4ee6-bb00-0629e5a07aed 828 0 2026-04-16 23:30:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb goldmane-cccfbd5cf-bd296 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7ddda6a7e78 [] [] }} ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" 
WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-" Apr 16 23:31:07.284770 containerd[1891]: 2026-04-16 23:31:07.180 [INFO][5003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.284770 containerd[1891]: 2026-04-16 23:31:07.209 [INFO][5027] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" HandleID="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Workload="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.216 [INFO][5027] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" HandleID="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Workload="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"goldmane-cccfbd5cf-bd296", "timestamp":"2026-04-16 23:31:07.209881067 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.216 [INFO][5027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.217 [INFO][5027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.217 [INFO][5027] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.220 [INFO][5027] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.226 [INFO][5027] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.231 [INFO][5027] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.233 [INFO][5027] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285093 containerd[1891]: 2026-04-16 23:31:07.235 [INFO][5027] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.235 [INFO][5027] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.236 [INFO][5027] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5 Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.245 [INFO][5027] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" 
host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.257 [INFO][5027] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.57.195/26] block=192.168.57.192/26 handle="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.257 [INFO][5027] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.195/26] handle="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.257 [INFO][5027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:31:07.285297 containerd[1891]: 2026-04-16 23:31:07.257 [INFO][5027] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.195/26] IPv6=[] ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" HandleID="k8s-pod-network.14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Workload="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.285427 containerd[1891]: 2026-04-16 23:31:07.260 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"goldmane-cccfbd5cf-bd296", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ddda6a7e78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:07.285427 containerd[1891]: 2026-04-16 23:31:07.261 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.195/32] ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.285521 containerd[1891]: 2026-04-16 23:31:07.261 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ddda6a7e78 ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.285521 containerd[1891]: 2026-04-16 23:31:07.264 [INFO][5003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" 
WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.285659 containerd[1891]: 2026-04-16 23:31:07.266 [INFO][5003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8c36f0ea-9b51-4ee6-bb00-0629e5a07aed", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5", Pod:"goldmane-cccfbd5cf-bd296", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ddda6a7e78", MAC:"ea:f5:4a:72:a8:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:07.285721 
containerd[1891]: 2026-04-16 23:31:07.281 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bd296" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-goldmane--cccfbd5cf--bd296-eth0" Apr 16 23:31:07.339611 containerd[1891]: time="2026-04-16T23:31:07.339518915Z" level=info msg="connecting to shim 14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5" address="unix:///run/containerd/s/35cd50242a694978d19c34446452e2d1c3bc80b7969bde7996dd87914a918fe1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:07.372656 systemd[1]: Started cri-containerd-14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5.scope - libcontainer container 14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5. Apr 16 23:31:07.383251 systemd-networkd[1480]: califd02911aa3a: Link UP Apr 16 23:31:07.383435 systemd-networkd[1480]: califd02911aa3a: Gained carrier Apr 16 23:31:07.404173 containerd[1891]: 2026-04-16 23:31:07.211 [INFO][5014] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0 coredns-66bc5c9577- kube-system dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3 822 0 2026-04-16 23:30:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb coredns-66bc5c9577-tpqd4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd02911aa3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" 
WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-" Apr 16 23:31:07.404173 containerd[1891]: 2026-04-16 23:31:07.211 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.404173 containerd[1891]: 2026-04-16 23:31:07.236 [INFO][5035] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" HandleID="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.246 [INFO][5035] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" HandleID="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"coredns-66bc5c9577-tpqd4", "timestamp":"2026-04-16 23:31:07.236714086 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003e51e0)} Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.246 [INFO][5035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.257 [INFO][5035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.258 [INFO][5035] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.320 [INFO][5035] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.329 [INFO][5035] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.336 [INFO][5035] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.342 [INFO][5035] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404504 containerd[1891]: 2026-04-16 23:31:07.351 [INFO][5035] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.352 [INFO][5035] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.354 [INFO][5035] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0 Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.363 [INFO][5035] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" 
host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.373 [INFO][5035] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.57.196/26] block=192.168.57.192/26 handle="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.373 [INFO][5035] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.196/26] handle="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.373 [INFO][5035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:31:07.404857 containerd[1891]: 2026-04-16 23:31:07.373 [INFO][5035] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.196/26] IPv6=[] ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" HandleID="k8s-pod-network.9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.405258 containerd[1891]: 2026-04-16 23:31:07.376 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"coredns-66bc5c9577-tpqd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd02911aa3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:07.405258 containerd[1891]: 2026-04-16 23:31:07.376 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.196/32] ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.405258 containerd[1891]: 
2026-04-16 23:31:07.376 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd02911aa3a ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.405258 containerd[1891]: 2026-04-16 23:31:07.384 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.405258 containerd[1891]: 2026-04-16 23:31:07.385 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0", Pod:"coredns-66bc5c9577-tpqd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd02911aa3a", MAC:"4e:dd:3f:0d:e9:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:07.405402 containerd[1891]: 2026-04-16 23:31:07.401 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" Namespace="kube-system" Pod="coredns-66bc5c9577-tpqd4" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--tpqd4-eth0" Apr 16 23:31:07.447048 containerd[1891]: time="2026-04-16T23:31:07.446780206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bd296,Uid:8c36f0ea-9b51-4ee6-bb00-0629e5a07aed,Namespace:calico-system,Attempt:0,} returns sandbox id \"14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5\"" Apr 16 23:31:07.466078 containerd[1891]: 
time="2026-04-16T23:31:07.465940651Z" level=info msg="connecting to shim 9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0" address="unix:///run/containerd/s/64b36f282d22f784478bdd4dbcf4248182c6027b7987d2450f12438692eb11b8" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:07.486676 systemd[1]: Started cri-containerd-9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0.scope - libcontainer container 9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0. Apr 16 23:31:07.522072 containerd[1891]: time="2026-04-16T23:31:07.522006509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tpqd4,Uid:dfde2fc8-fcb9-4ca1-b70f-520a1c61e6a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0\"" Apr 16 23:31:07.533956 containerd[1891]: time="2026-04-16T23:31:07.533903453Z" level=info msg="CreateContainer within sandbox \"9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:31:07.562207 containerd[1891]: time="2026-04-16T23:31:07.562160164Z" level=info msg="Container 4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:07.582988 containerd[1891]: time="2026-04-16T23:31:07.582943617Z" level=info msg="CreateContainer within sandbox \"9c354b5bb8e33384d3c5cdfc7d6ec4d75ad1497b9ccbd94ffb7cb31af766c4f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f\"" Apr 16 23:31:07.583578 containerd[1891]: time="2026-04-16T23:31:07.583560528Z" level=info msg="StartContainer for \"4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f\"" Apr 16 23:31:07.584696 containerd[1891]: time="2026-04-16T23:31:07.584579113Z" level=info msg="connecting to shim 4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f" 
address="unix:///run/containerd/s/64b36f282d22f784478bdd4dbcf4248182c6027b7987d2450f12438692eb11b8" protocol=ttrpc version=3 Apr 16 23:31:07.602658 systemd[1]: Started cri-containerd-4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f.scope - libcontainer container 4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f. Apr 16 23:31:07.625888 systemd-networkd[1480]: cali1251d6a589b: Gained IPv6LL Apr 16 23:31:07.636528 containerd[1891]: time="2026-04-16T23:31:07.634977078Z" level=info msg="StartContainer for \"4630bcdde9b2f86ca89e4b8da55e8ce77e2a27f251b8bf954f1d2ce9f30ef80f\" returns successfully" Apr 16 23:31:08.131009 containerd[1891]: time="2026-04-16T23:31:08.130954277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-sgrfm,Uid:919844db-7c0f-41a2-96b8-90a0eed8a30f,Namespace:calico-system,Attempt:0,}" Apr 16 23:31:08.142661 containerd[1891]: time="2026-04-16T23:31:08.142622607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746789d58c-h6p6n,Uid:46758d21-42a0-427d-9d3c-b1598016f3d7,Namespace:calico-system,Attempt:0,}" Apr 16 23:31:08.325940 systemd-networkd[1480]: calidf545d893c2: Link UP Apr 16 23:31:08.326952 systemd-networkd[1480]: calidf545d893c2: Gained carrier Apr 16 23:31:08.327597 containerd[1891]: time="2026-04-16T23:31:08.327262775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:08.334675 containerd[1891]: time="2026-04-16T23:31:08.334630990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 23:31:08.335136 containerd[1891]: time="2026-04-16T23:31:08.335094706Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:08.340368 containerd[1891]: 
time="2026-04-16T23:31:08.340327900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:08.341031 containerd[1891]: time="2026-04-16T23:31:08.340933547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.959266781s" Apr 16 23:31:08.341031 containerd[1891]: time="2026-04-16T23:31:08.340969852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 23:31:08.343282 containerd[1891]: time="2026-04-16T23:31:08.343261789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:31:08.351509 kubelet[3413]: I0416 23:31:08.350689 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tpqd4" podStartSLOduration=44.350672661 podStartE2EDuration="44.350672661s" podCreationTimestamp="2026-04-16 23:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:31:08.349392581 +0000 UTC m=+51.332875858" watchObservedRunningTime="2026-04-16 23:31:08.350672661 +0000 UTC m=+51.334155922" Apr 16 23:31:08.352529 containerd[1891]: time="2026-04-16T23:31:08.352037071Z" level=info msg="CreateContainer within sandbox \"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.202 [INFO][5229] cni-plugin/plugin.go 342: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0 calico-apiserver-7d7bf747c9- calico-system 919844db-7c0f-41a2-96b8-90a0eed8a30f 831 0 2026-04-16 23:30:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7bf747c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb calico-apiserver-7d7bf747c9-sgrfm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidf545d893c2 [] [] }} ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.203 [INFO][5229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.248 [INFO][5256] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" HandleID="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.258 [INFO][5256] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" 
HandleID="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"calico-apiserver-7d7bf747c9-sgrfm", "timestamp":"2026-04-16 23:31:08.248246346 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186dc0)} Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.258 [INFO][5256] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.258 [INFO][5256] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.258 [INFO][5256] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.260 [INFO][5256] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.265 [INFO][5256] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.272 [INFO][5256] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.275 [INFO][5256] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.278 
[INFO][5256] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.278 [INFO][5256] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.281 [INFO][5256] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1 Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.288 [INFO][5256] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5256] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.57.197/26] block=192.168.57.192/26 handle="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5256] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.197/26] handle="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5256] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:31:08.363454 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5256] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.197/26] IPv6=[] ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" HandleID="k8s-pod-network.c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.321 [INFO][5229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0", GenerateName:"calico-apiserver-7d7bf747c9-", Namespace:"calico-system", SelfLink:"", UID:"919844db-7c0f-41a2-96b8-90a0eed8a30f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7bf747c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"calico-apiserver-7d7bf747c9-sgrfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.57.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf545d893c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.321 [INFO][5229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.197/32] ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.321 [INFO][5229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf545d893c2 ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.327 [INFO][5229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.329 [INFO][5229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0", GenerateName:"calico-apiserver-7d7bf747c9-", Namespace:"calico-system", SelfLink:"", UID:"919844db-7c0f-41a2-96b8-90a0eed8a30f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7bf747c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1", Pod:"calico-apiserver-7d7bf747c9-sgrfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf545d893c2", MAC:"2e:43:4f:34:65:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:08.364357 containerd[1891]: 2026-04-16 23:31:08.358 [INFO][5229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-sgrfm" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--sgrfm-eth0" Apr 16 23:31:08.389744 containerd[1891]: time="2026-04-16T23:31:08.389625390Z" level=info msg="Container 
0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:08.436637 systemd-networkd[1480]: calia33d66976e8: Link UP Apr 16 23:31:08.438853 systemd-networkd[1480]: calia33d66976e8: Gained carrier Apr 16 23:31:08.442522 containerd[1891]: time="2026-04-16T23:31:08.442430735Z" level=info msg="CreateContainer within sandbox \"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29\"" Apr 16 23:31:08.444619 containerd[1891]: time="2026-04-16T23:31:08.444289277Z" level=info msg="StartContainer for \"0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29\"" Apr 16 23:31:08.453403 containerd[1891]: time="2026-04-16T23:31:08.453310886Z" level=info msg="connecting to shim 0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29" address="unix:///run/containerd/s/8e1f1550696323fb448c4057b0f3761710c1fe5c532cd996706446f6d035c312" protocol=ttrpc version=3 Apr 16 23:31:08.464037 containerd[1891]: time="2026-04-16T23:31:08.463997431Z" level=info msg="connecting to shim c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1" address="unix:///run/containerd/s/803c290a9406f159d1d82afc8f9e9684bcec3f8d3abe3a0f014d01e7ad6b453c" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.209 [INFO][5233] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0 calico-kube-controllers-746789d58c- calico-system 46758d21-42a0-427d-9d3c-b1598016f3d7 827 0 2026-04-16 23:30:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:746789d58c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb calico-kube-controllers-746789d58c-h6p6n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia33d66976e8 [] [] }} ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.209 [INFO][5233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.274 [INFO][5261] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" HandleID="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.288 [INFO][5261] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" HandleID="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"calico-kube-controllers-746789d58c-h6p6n", "timestamp":"2026-04-16 23:31:08.27472022 +0000 UTC"}, 
Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001862c0)} Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.289 [INFO][5261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.318 [INFO][5261] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.361 [INFO][5261] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.372 [INFO][5261] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.392 [INFO][5261] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.397 [INFO][5261] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.402 [INFO][5261] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.402 [INFO][5261] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 
containerd[1891]: 2026-04-16 23:31:08.407 [INFO][5261] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1 Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.413 [INFO][5261] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.424 [INFO][5261] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.57.198/26] block=192.168.57.192/26 handle="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.425 [INFO][5261] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.198/26] handle="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.425 [INFO][5261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:31:08.476525 containerd[1891]: 2026-04-16 23:31:08.425 [INFO][5261] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.198/26] IPv6=[] ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" HandleID="k8s-pod-network.ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.428 [INFO][5233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0", GenerateName:"calico-kube-controllers-746789d58c-", Namespace:"calico-system", SelfLink:"", UID:"46758d21-42a0-427d-9d3c-b1598016f3d7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746789d58c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"calico-kube-controllers-746789d58c-h6p6n", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia33d66976e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.432 [INFO][5233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.198/32] ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.432 [INFO][5233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia33d66976e8 ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.440 [INFO][5233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.441 [INFO][5233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" Pod="calico-kube-controllers-746789d58c-h6p6n" 
WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0", GenerateName:"calico-kube-controllers-746789d58c-", Namespace:"calico-system", SelfLink:"", UID:"46758d21-42a0-427d-9d3c-b1598016f3d7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746789d58c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1", Pod:"calico-kube-controllers-746789d58c-h6p6n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia33d66976e8", MAC:"86:62:1e:9d:ec:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:08.477225 containerd[1891]: 2026-04-16 23:31:08.466 [INFO][5233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" Namespace="calico-system" 
Pod="calico-kube-controllers-746789d58c-h6p6n" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--kube--controllers--746789d58c--h6p6n-eth0" Apr 16 23:31:08.490670 systemd[1]: Started cri-containerd-c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1.scope - libcontainer container c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1. Apr 16 23:31:08.494557 systemd[1]: Started cri-containerd-0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29.scope - libcontainer container 0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29. Apr 16 23:31:08.549807 containerd[1891]: time="2026-04-16T23:31:08.549759084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-sgrfm,Uid:919844db-7c0f-41a2-96b8-90a0eed8a30f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1\"" Apr 16 23:31:08.556578 containerd[1891]: time="2026-04-16T23:31:08.556512476Z" level=info msg="connecting to shim ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1" address="unix:///run/containerd/s/e68ef356e7c97ddb68ed87552839b9d54467c0e798920094529c0a8e56086705" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:08.577666 containerd[1891]: time="2026-04-16T23:31:08.577624641Z" level=info msg="StartContainer for \"0678d50a67b7f649d7fef616772e26014ca1defc4ce13d0fa924336ee3273a29\" returns successfully" Apr 16 23:31:08.585658 systemd-networkd[1480]: califd02911aa3a: Gained IPv6LL Apr 16 23:31:08.592660 systemd[1]: Started cri-containerd-ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1.scope - libcontainer container ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1. 
Apr 16 23:31:08.628992 containerd[1891]: time="2026-04-16T23:31:08.628910413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746789d58c-h6p6n,Uid:46758d21-42a0-427d-9d3c-b1598016f3d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1\"" Apr 16 23:31:09.129823 containerd[1891]: time="2026-04-16T23:31:09.129771093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7tlgn,Uid:2aa4ae73-6822-4401-8a34-bdea0014c865,Namespace:kube-system,Attempt:0,}" Apr 16 23:31:09.243080 systemd-networkd[1480]: calia8e72b23b63: Link UP Apr 16 23:31:09.243893 systemd-networkd[1480]: calia8e72b23b63: Gained carrier Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.174 [INFO][5434] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0 coredns-66bc5c9577- kube-system 2aa4ae73-6822-4401-8a34-bdea0014c865 829 0 2026-04-16 23:30:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb coredns-66bc5c9577-7tlgn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia8e72b23b63 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.176 [INFO][5434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" 
Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.196 [INFO][5446] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" HandleID="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.203 [INFO][5446] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" HandleID="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"coredns-66bc5c9577-7tlgn", "timestamp":"2026-04-16 23:31:09.196479712 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000268f20)} Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.203 [INFO][5446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.204 [INFO][5446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.204 [INFO][5446] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.206 [INFO][5446] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.211 [INFO][5446] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.216 [INFO][5446] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.217 [INFO][5446] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.220 [INFO][5446] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.220 [INFO][5446] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.221 [INFO][5446] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.227 [INFO][5446] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.238 [INFO][5446] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.57.199/26] block=192.168.57.192/26 handle="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.238 [INFO][5446] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.199/26] handle="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.238 [INFO][5446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:31:09.260215 containerd[1891]: 2026-04-16 23:31:09.238 [INFO][5446] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.199/26] IPv6=[] ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" HandleID="k8s-pod-network.3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Workload="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.262138 containerd[1891]: 2026-04-16 23:31:09.240 [INFO][5434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2aa4ae73-6822-4401-8a34-bdea0014c865", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"coredns-66bc5c9577-7tlgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8e72b23b63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:09.262138 containerd[1891]: 2026-04-16 23:31:09.240 [INFO][5434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.199/32] ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.262138 containerd[1891]: 2026-04-16 23:31:09.240 [INFO][5434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8e72b23b63 
ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.262138 containerd[1891]: 2026-04-16 23:31:09.244 [INFO][5434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.262138 containerd[1891]: 2026-04-16 23:31:09.244 [INFO][5434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2aa4ae73-6822-4401-8a34-bdea0014c865", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa", 
Pod:"coredns-66bc5c9577-7tlgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8e72b23b63", MAC:"86:1e:a8:f3:38:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:09.262721 containerd[1891]: 2026-04-16 23:31:09.256 [INFO][5434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" Namespace="kube-system" Pod="coredns-66bc5c9577-7tlgn" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-coredns--66bc5c9577--7tlgn-eth0" Apr 16 23:31:09.289682 systemd-networkd[1480]: cali7ddda6a7e78: Gained IPv6LL Apr 16 23:31:09.316444 containerd[1891]: time="2026-04-16T23:31:09.316371262Z" level=info msg="connecting to shim 3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa" address="unix:///run/containerd/s/65dd7650970d809afa7fbdec69fcf5fc275c6c0aa0efcf26eb5f2bbdd6c2022e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:09.339644 systemd[1]: Started 
cri-containerd-3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa.scope - libcontainer container 3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa. Apr 16 23:31:09.380592 containerd[1891]: time="2026-04-16T23:31:09.380458335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7tlgn,Uid:2aa4ae73-6822-4401-8a34-bdea0014c865,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa\"" Apr 16 23:31:09.392168 containerd[1891]: time="2026-04-16T23:31:09.392130130Z" level=info msg="CreateContainer within sandbox \"3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:31:09.431219 containerd[1891]: time="2026-04-16T23:31:09.431166829Z" level=info msg="Container ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:09.472229 containerd[1891]: time="2026-04-16T23:31:09.472171064Z" level=info msg="CreateContainer within sandbox \"3cb684a20e255c02654d599713c8495878086ad205937ecb633e2b877810bdaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536\"" Apr 16 23:31:09.473124 containerd[1891]: time="2026-04-16T23:31:09.473097399Z" level=info msg="StartContainer for \"ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536\"" Apr 16 23:31:09.474177 containerd[1891]: time="2026-04-16T23:31:09.474130465Z" level=info msg="connecting to shim ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536" address="unix:///run/containerd/s/65dd7650970d809afa7fbdec69fcf5fc275c6c0aa0efcf26eb5f2bbdd6c2022e" protocol=ttrpc version=3 Apr 16 23:31:09.489644 systemd[1]: Started cri-containerd-ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536.scope - libcontainer container 
ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536. Apr 16 23:31:09.539475 containerd[1891]: time="2026-04-16T23:31:09.538665070Z" level=info msg="StartContainer for \"ae38eec406f2eaa707492a1d5aa7ab18bb93d2784a4075f7aa36ae11dc944536\" returns successfully" Apr 16 23:31:09.801968 systemd-networkd[1480]: calidf545d893c2: Gained IPv6LL Apr 16 23:31:10.122251 systemd-networkd[1480]: calia33d66976e8: Gained IPv6LL Apr 16 23:31:10.131411 containerd[1891]: time="2026-04-16T23:31:10.131364618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-rs7rf,Uid:210df17c-95d1-4cbe-97d2-f101c7dc8650,Namespace:calico-system,Attempt:0,}" Apr 16 23:31:10.249866 systemd-networkd[1480]: calia8e72b23b63: Gained IPv6LL Apr 16 23:31:10.251223 systemd-networkd[1480]: calib5d9d87a215: Link UP Apr 16 23:31:10.251318 systemd-networkd[1480]: calib5d9d87a215: Gained carrier Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.178 [INFO][5568] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0 calico-apiserver-7d7bf747c9- calico-system 210df17c-95d1-4cbe-97d2-f101c7dc8650 826 0 2026-04-16 23:30:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7bf747c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-b3358a4beb calico-apiserver-7d7bf747c9-rs7rf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib5d9d87a215 [] [] }} ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 
23:31:10.178 [INFO][5568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.203 [INFO][5580] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" HandleID="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.211 [INFO][5580] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" HandleID="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-b3358a4beb", "pod":"calico-apiserver-7d7bf747c9-rs7rf", "timestamp":"2026-04-16 23:31:10.203738026 +0000 UTC"}, Hostname:"ci-4459.2.4-n-b3358a4beb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030f080)} Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.211 [INFO][5580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.211 [INFO][5580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.211 [INFO][5580] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-b3358a4beb' Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.213 [INFO][5580] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.217 [INFO][5580] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.221 [INFO][5580] ipam/ipam.go 526: Trying affinity for 192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.223 [INFO][5580] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.225 [INFO][5580] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.192/26 host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.225 [INFO][5580] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.192/26 handle="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.227 [INFO][5580] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8 Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.238 [INFO][5580] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.192/26 handle="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.247 [INFO][5580] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.57.200/26] block=192.168.57.192/26 handle="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.247 [INFO][5580] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.200/26] handle="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" host="ci-4459.2.4-n-b3358a4beb" Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.247 [INFO][5580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:31:10.267378 containerd[1891]: 2026-04-16 23:31:10.247 [INFO][5580] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.200/26] IPv6=[] ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" HandleID="k8s-pod-network.1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Workload="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.249 [INFO][5568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0", GenerateName:"calico-apiserver-7d7bf747c9-", Namespace:"calico-system", SelfLink:"", UID:"210df17c-95d1-4cbe-97d2-f101c7dc8650", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7d7bf747c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"", Pod:"calico-apiserver-7d7bf747c9-rs7rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5d9d87a215", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.249 [INFO][5568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.200/32] ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.249 [INFO][5568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5d9d87a215 ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.251 [INFO][5568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" 
WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.251 [INFO][5568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0", GenerateName:"calico-apiserver-7d7bf747c9-", Namespace:"calico-system", SelfLink:"", UID:"210df17c-95d1-4cbe-97d2-f101c7dc8650", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7bf747c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-b3358a4beb", ContainerID:"1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8", Pod:"calico-apiserver-7d7bf747c9-rs7rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5d9d87a215", MAC:"f2:2e:d7:11:63:22", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:31:10.268057 containerd[1891]: 2026-04-16 23:31:10.265 [INFO][5568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" Namespace="calico-system" Pod="calico-apiserver-7d7bf747c9-rs7rf" WorkloadEndpoint="ci--4459.2.4--n--b3358a4beb-k8s-calico--apiserver--7d7bf747c9--rs7rf-eth0" Apr 16 23:31:10.332821 kubelet[3413]: I0416 23:31:10.332333 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7tlgn" podStartSLOduration=46.3323158 podStartE2EDuration="46.3323158s" podCreationTimestamp="2026-04-16 23:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:31:10.331115274 +0000 UTC m=+53.314598543" watchObservedRunningTime="2026-04-16 23:31:10.3323158 +0000 UTC m=+53.315799061" Apr 16 23:31:10.512367 containerd[1891]: time="2026-04-16T23:31:10.511894026Z" level=info msg="connecting to shim 1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8" address="unix:///run/containerd/s/faf864151294f8c3e506534f8d05608589e146bbb0adda528f114835bde5eec4" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:31:10.531664 systemd[1]: Started cri-containerd-1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8.scope - libcontainer container 1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8. 
Apr 16 23:31:10.589165 containerd[1891]: time="2026-04-16T23:31:10.589056832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7bf747c9-rs7rf,Uid:210df17c-95d1-4cbe-97d2-f101c7dc8650,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8\"" Apr 16 23:31:10.754856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564967238.mount: Deactivated successfully. Apr 16 23:31:11.102563 containerd[1891]: time="2026-04-16T23:31:11.102516420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:11.106870 containerd[1891]: time="2026-04-16T23:31:11.106701817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 23:31:11.112509 containerd[1891]: time="2026-04-16T23:31:11.112462715Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:11.117468 containerd[1891]: time="2026-04-16T23:31:11.117415643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:11.118089 containerd[1891]: time="2026-04-16T23:31:11.117789820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.7742606s" Apr 16 23:31:11.118089 containerd[1891]: time="2026-04-16T23:31:11.117816740Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 23:31:11.119267 containerd[1891]: time="2026-04-16T23:31:11.119237318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:31:11.128513 containerd[1891]: time="2026-04-16T23:31:11.127834893Z" level=info msg="CreateContainer within sandbox \"14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:31:11.149982 containerd[1891]: time="2026-04-16T23:31:11.149940320Z" level=info msg="Container da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:11.172602 containerd[1891]: time="2026-04-16T23:31:11.172563048Z" level=info msg="CreateContainer within sandbox \"14f92494c0551fea7d77ea1e078bf94140409ac161bf14016294c08dd4d875b5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4\"" Apr 16 23:31:11.173831 containerd[1891]: time="2026-04-16T23:31:11.173800206Z" level=info msg="StartContainer for \"da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4\"" Apr 16 23:31:11.175007 containerd[1891]: time="2026-04-16T23:31:11.174975146Z" level=info msg="connecting to shim da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4" address="unix:///run/containerd/s/35cd50242a694978d19c34446452e2d1c3bc80b7969bde7996dd87914a918fe1" protocol=ttrpc version=3 Apr 16 23:31:11.193639 systemd[1]: Started cri-containerd-da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4.scope - libcontainer container da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4. 
Apr 16 23:31:11.230367 containerd[1891]: time="2026-04-16T23:31:11.230270027Z" level=info msg="StartContainer for \"da228579a473c19b931f50e1f415087864670369186c8b164c87bfc04a3a8ad4\" returns successfully" Apr 16 23:31:11.339837 kubelet[3413]: I0416 23:31:11.339671 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-bd296" podStartSLOduration=32.669865098 podStartE2EDuration="36.339656808s" podCreationTimestamp="2026-04-16 23:30:35 +0000 UTC" firstStartedPulling="2026-04-16 23:31:07.448769152 +0000 UTC m=+50.432252421" lastFinishedPulling="2026-04-16 23:31:11.11856087 +0000 UTC m=+54.102044131" observedRunningTime="2026-04-16 23:31:11.339568678 +0000 UTC m=+54.323051939" watchObservedRunningTime="2026-04-16 23:31:11.339656808 +0000 UTC m=+54.323140109" Apr 16 23:31:12.233983 systemd-networkd[1480]: calib5d9d87a215: Gained IPv6LL Apr 16 23:31:14.144105 containerd[1891]: time="2026-04-16T23:31:14.143528244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:14.147453 containerd[1891]: time="2026-04-16T23:31:14.147425113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 23:31:14.157013 containerd[1891]: time="2026-04-16T23:31:14.156977447Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:14.164028 containerd[1891]: time="2026-04-16T23:31:14.163893341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:14.164358 containerd[1891]: time="2026-04-16T23:31:14.164328184Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.045057009s" Apr 16 23:31:14.164358 containerd[1891]: time="2026-04-16T23:31:14.164355968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:31:14.168653 containerd[1891]: time="2026-04-16T23:31:14.168511732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:31:14.185403 containerd[1891]: time="2026-04-16T23:31:14.184518205Z" level=info msg="CreateContainer within sandbox \"c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:31:14.220570 containerd[1891]: time="2026-04-16T23:31:14.220518774Z" level=info msg="Container 9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:14.224529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915474199.mount: Deactivated successfully. 
Apr 16 23:31:14.245407 containerd[1891]: time="2026-04-16T23:31:14.245363491Z" level=info msg="CreateContainer within sandbox \"c7840a0cea7f7dc7c3e6544f505aa00290e4a6b1ef651ec21d9ce02f98c06ed1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6\"" Apr 16 23:31:14.246859 containerd[1891]: time="2026-04-16T23:31:14.246746228Z" level=info msg="StartContainer for \"9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6\"" Apr 16 23:31:14.247901 containerd[1891]: time="2026-04-16T23:31:14.247874128Z" level=info msg="connecting to shim 9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6" address="unix:///run/containerd/s/803c290a9406f159d1d82afc8f9e9684bcec3f8d3abe3a0f014d01e7ad6b453c" protocol=ttrpc version=3 Apr 16 23:31:14.269647 systemd[1]: Started cri-containerd-9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6.scope - libcontainer container 9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6. 
Apr 16 23:31:14.304588 containerd[1891]: time="2026-04-16T23:31:14.304475536Z" level=info msg="StartContainer for \"9f0d3b6fba8d93f22285f62cf5b441734cb28a7ab5b6e8f7abbf578a952723b6\" returns successfully" Apr 16 23:31:15.337639 kubelet[3413]: I0416 23:31:15.337602 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:31:15.964131 containerd[1891]: time="2026-04-16T23:31:15.964081046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:15.970879 containerd[1891]: time="2026-04-16T23:31:15.970846289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 23:31:15.974307 containerd[1891]: time="2026-04-16T23:31:15.974273867Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:15.979166 containerd[1891]: time="2026-04-16T23:31:15.979134848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:15.979624 containerd[1891]: time="2026-04-16T23:31:15.979599907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.811056094s" Apr 16 23:31:15.979653 containerd[1891]: time="2026-04-16T23:31:15.979627772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 23:31:15.980738 containerd[1891]: time="2026-04-16T23:31:15.980682357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:31:15.987866 containerd[1891]: time="2026-04-16T23:31:15.987837737Z" level=info msg="CreateContainer within sandbox \"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:31:16.016502 containerd[1891]: time="2026-04-16T23:31:16.015629373Z" level=info msg="Container de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:16.035045 containerd[1891]: time="2026-04-16T23:31:16.035008039Z" level=info msg="CreateContainer within sandbox \"60765ea405c3d1b27855f794532f433960cc94c396554ce700cba4b1203f849f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396\"" Apr 16 23:31:16.035748 containerd[1891]: time="2026-04-16T23:31:16.035682103Z" level=info msg="StartContainer for \"de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396\"" Apr 16 23:31:16.036834 containerd[1891]: time="2026-04-16T23:31:16.036810810Z" level=info msg="connecting to shim de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396" address="unix:///run/containerd/s/8e1f1550696323fb448c4057b0f3761710c1fe5c532cd996706446f6d035c312" protocol=ttrpc version=3 Apr 16 23:31:16.057632 systemd[1]: Started cri-containerd-de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396.scope - libcontainer container de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396. 
Apr 16 23:31:16.110545 containerd[1891]: time="2026-04-16T23:31:16.110509813Z" level=info msg="StartContainer for \"de3f821032cac4b6d8b1b82e13d561d423f9d39d3f4d57833da93f34e7bee396\" returns successfully" Apr 16 23:31:16.358322 kubelet[3413]: I0416 23:31:16.358223 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kvtzd" podStartSLOduration=31.759109278 podStartE2EDuration="41.358207799s" podCreationTimestamp="2026-04-16 23:30:35 +0000 UTC" firstStartedPulling="2026-04-16 23:31:06.381335438 +0000 UTC m=+49.364818707" lastFinishedPulling="2026-04-16 23:31:15.980433959 +0000 UTC m=+58.963917228" observedRunningTime="2026-04-16 23:31:16.356950921 +0000 UTC m=+59.340434182" watchObservedRunningTime="2026-04-16 23:31:16.358207799 +0000 UTC m=+59.341691060" Apr 16 23:31:16.359634 kubelet[3413]: I0416 23:31:16.359588 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d7bf747c9-sgrfm" podStartSLOduration=36.745309453 podStartE2EDuration="42.359555991s" podCreationTimestamp="2026-04-16 23:30:34 +0000 UTC" firstStartedPulling="2026-04-16 23:31:08.552457887 +0000 UTC m=+51.535941156" lastFinishedPulling="2026-04-16 23:31:14.166704425 +0000 UTC m=+57.150187694" observedRunningTime="2026-04-16 23:31:14.347948693 +0000 UTC m=+57.331431954" watchObservedRunningTime="2026-04-16 23:31:16.359555991 +0000 UTC m=+59.343039260" Apr 16 23:31:16.367516 kubelet[3413]: I0416 23:31:16.366915 3413 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:31:16.367516 kubelet[3413]: I0416 23:31:16.366944 3413 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:31:18.752423 containerd[1891]: time="2026-04-16T23:31:18.752372562Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:18.755790 containerd[1891]: time="2026-04-16T23:31:18.755760617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 23:31:18.762191 containerd[1891]: time="2026-04-16T23:31:18.762161917Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:18.768794 containerd[1891]: time="2026-04-16T23:31:18.768737646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:18.769228 containerd[1891]: time="2026-04-16T23:31:18.769114743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.788411449s" Apr 16 23:31:18.769228 containerd[1891]: time="2026-04-16T23:31:18.769142496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 23:31:18.771024 containerd[1891]: time="2026-04-16T23:31:18.770558540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:31:18.795112 containerd[1891]: time="2026-04-16T23:31:18.795080240Z" level=info msg="CreateContainer within sandbox \"ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:31:18.816734 containerd[1891]: time="2026-04-16T23:31:18.816695890Z" level=info msg="Container f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:18.821336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510096009.mount: Deactivated successfully. Apr 16 23:31:18.837185 containerd[1891]: time="2026-04-16T23:31:18.837111069Z" level=info msg="CreateContainer within sandbox \"ce046c555138a44875b2914ffc9139db670d2d6615391d0aac8f7878baacfde1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7\"" Apr 16 23:31:18.838301 containerd[1891]: time="2026-04-16T23:31:18.837819111Z" level=info msg="StartContainer for \"f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7\"" Apr 16 23:31:18.839219 containerd[1891]: time="2026-04-16T23:31:18.839130704Z" level=info msg="connecting to shim f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7" address="unix:///run/containerd/s/e68ef356e7c97ddb68ed87552839b9d54467c0e798920094529c0a8e56086705" protocol=ttrpc version=3 Apr 16 23:31:18.874614 systemd[1]: Started cri-containerd-f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7.scope - libcontainer container f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7. 
Apr 16 23:31:18.912788 containerd[1891]: time="2026-04-16T23:31:18.912747678Z" level=info msg="StartContainer for \"f6dd6999ec5a97764e86859221be57fe5af522ca04e246be6ed358ea85c668c7\" returns successfully" Apr 16 23:31:19.200810 containerd[1891]: time="2026-04-16T23:31:19.200762525Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:19.204793 containerd[1891]: time="2026-04-16T23:31:19.204763700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:31:19.206025 containerd[1891]: time="2026-04-16T23:31:19.206003180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 435.422774ms" Apr 16 23:31:19.206068 containerd[1891]: time="2026-04-16T23:31:19.206030364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:31:19.215130 containerd[1891]: time="2026-04-16T23:31:19.214843414Z" level=info msg="CreateContainer within sandbox \"1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:31:19.241276 containerd[1891]: time="2026-04-16T23:31:19.241241730Z" level=info msg="Container c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:19.271239 containerd[1891]: time="2026-04-16T23:31:19.271201025Z" level=info msg="CreateContainer within sandbox \"1bbc07b7279c454234a1b51f6f136e2f2976374ebd5a4b8e8c94de2702791cb8\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635\"" Apr 16 23:31:19.272001 containerd[1891]: time="2026-04-16T23:31:19.271982077Z" level=info msg="StartContainer for \"c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635\"" Apr 16 23:31:19.272807 containerd[1891]: time="2026-04-16T23:31:19.272784554Z" level=info msg="connecting to shim c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635" address="unix:///run/containerd/s/faf864151294f8c3e506534f8d05608589e146bbb0adda528f114835bde5eec4" protocol=ttrpc version=3 Apr 16 23:31:19.290623 systemd[1]: Started cri-containerd-c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635.scope - libcontainer container c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635. Apr 16 23:31:19.324537 containerd[1891]: time="2026-04-16T23:31:19.324476502Z" level=info msg="StartContainer for \"c08ad0629196abc992c90f9c5b3ff174ef6d94081120a7c92a1d280002525635\" returns successfully" Apr 16 23:31:19.386834 kubelet[3413]: I0416 23:31:19.386772 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d7bf747c9-rs7rf" podStartSLOduration=36.770709066 podStartE2EDuration="45.386755585s" podCreationTimestamp="2026-04-16 23:30:34 +0000 UTC" firstStartedPulling="2026-04-16 23:31:10.590757873 +0000 UTC m=+53.574241142" lastFinishedPulling="2026-04-16 23:31:19.2068044 +0000 UTC m=+62.190287661" observedRunningTime="2026-04-16 23:31:19.369049531 +0000 UTC m=+62.352532792" watchObservedRunningTime="2026-04-16 23:31:19.386755585 +0000 UTC m=+62.370238854" Apr 16 23:31:19.414533 kubelet[3413]: I0416 23:31:19.414266 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-746789d58c-h6p6n" podStartSLOduration=34.274576291 podStartE2EDuration="44.414249721s" podCreationTimestamp="2026-04-16 23:30:35 +0000 
UTC" firstStartedPulling="2026-04-16 23:31:08.630258782 +0000 UTC m=+51.613742051" lastFinishedPulling="2026-04-16 23:31:18.76993222 +0000 UTC m=+61.753415481" observedRunningTime="2026-04-16 23:31:19.387453979 +0000 UTC m=+62.370937264" watchObservedRunningTime="2026-04-16 23:31:19.414249721 +0000 UTC m=+62.397732982" Apr 16 23:31:41.172821 update_engine[1873]: I20260416 23:31:41.172758 1873 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 23:31:41.172821 update_engine[1873]: I20260416 23:31:41.172811 1873 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 23:31:41.173232 update_engine[1873]: I20260416 23:31:41.172996 1873 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 23:31:41.175284 update_engine[1873]: I20260416 23:31:41.175261 1873 omaha_request_params.cc:62] Current group set to stable Apr 16 23:31:41.175901 update_engine[1873]: I20260416 23:31:41.175742 1873 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 23:31:41.175901 update_engine[1873]: I20260416 23:31:41.175758 1873 update_attempter.cc:643] Scheduling an action processor start. 
Apr 16 23:31:41.175901 update_engine[1873]: I20260416 23:31:41.175778 1873 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 23:31:41.178800 update_engine[1873]: I20260416 23:31:41.177477 1873 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 23:31:41.178800 update_engine[1873]: I20260416 23:31:41.177573 1873 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 23:31:41.178800 update_engine[1873]: I20260416 23:31:41.177579 1873 omaha_request_action.cc:272] Request: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: Apr 16 23:31:41.178800 update_engine[1873]: I20260416 23:31:41.177584 1873 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:31:41.179010 update_engine[1873]: I20260416 23:31:41.178992 1873 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:31:41.181544 update_engine[1873]: I20260416 23:31:41.179770 1873 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 23:31:41.182481 locksmithd[1957]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 23:31:41.284397 update_engine[1873]: E20260416 23:31:41.284337 1873 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:31:41.284535 update_engine[1873]: I20260416 23:31:41.284441 1873 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 23:31:50.586704 kubelet[3413]: I0416 23:31:50.586654 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:31:51.164404 update_engine[1873]: I20260416 23:31:51.164332 1873 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:31:51.164757 update_engine[1873]: I20260416 23:31:51.164427 1873 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:31:51.164824 update_engine[1873]: I20260416 23:31:51.164789 1873 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 23:31:51.273187 update_engine[1873]: E20260416 23:31:51.273118 1873 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:31:51.273323 update_engine[1873]: I20260416 23:31:51.273252 1873 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 23:32:01.164616 update_engine[1873]: I20260416 23:32:01.164541 1873 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:32:01.165690 update_engine[1873]: I20260416 23:32:01.164638 1873 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:32:01.165690 update_engine[1873]: I20260416 23:32:01.164969 1873 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 23:32:01.217162 update_engine[1873]: E20260416 23:32:01.217098 1873 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:32:01.217312 update_engine[1873]: I20260416 23:32:01.217217 1873 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 23:32:11.163741 update_engine[1873]: I20260416 23:32:11.163667 1873 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:32:11.164079 update_engine[1873]: I20260416 23:32:11.163763 1873 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:32:11.164157 update_engine[1873]: I20260416 23:32:11.164126 1873 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 23:32:11.175125 update_engine[1873]: E20260416 23:32:11.175079 1873 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:32:11.175208 update_engine[1873]: I20260416 23:32:11.175180 1873 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 23:32:11.175208 update_engine[1873]: I20260416 23:32:11.175187 1873 omaha_request_action.cc:617] Omaha request response: Apr 16 23:32:11.175292 update_engine[1873]: E20260416 23:32:11.175273 1873 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 23:32:11.175311 update_engine[1873]: I20260416 23:32:11.175294 1873 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 23:32:11.175311 update_engine[1873]: I20260416 23:32:11.175298 1873 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:32:11.175311 update_engine[1873]: I20260416 23:32:11.175302 1873 update_attempter.cc:306] Processing Done. Apr 16 23:32:11.175359 update_engine[1873]: E20260416 23:32:11.175312 1873 update_attempter.cc:619] Update failed. 
Apr 16 23:32:11.175359 update_engine[1873]: I20260416 23:32:11.175316 1873 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 23:32:11.175359 update_engine[1873]: I20260416 23:32:11.175319 1873 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 23:32:11.175359 update_engine[1873]: I20260416 23:32:11.175323 1873 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 16 23:32:11.175627 update_engine[1873]: I20260416 23:32:11.175383 1873 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 23:32:11.175627 update_engine[1873]: I20260416 23:32:11.175400 1873 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 23:32:11.175627 update_engine[1873]: I20260416 23:32:11.175403 1873 omaha_request_action.cc:272] Request: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: Apr 16 23:32:11.175627 update_engine[1873]: I20260416 23:32:11.175408 1873 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:32:11.175627 update_engine[1873]: I20260416 23:32:11.175424 1873 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:32:11.175774 update_engine[1873]: I20260416 23:32:11.175729 1873 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 23:32:11.175879 locksmithd[1957]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 23:32:11.179962 update_engine[1873]: E20260416 23:32:11.179929 1873 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:32:11.180007 update_engine[1873]: I20260416 23:32:11.179998 1873 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180006 1873 omaha_request_action.cc:617] Omaha request response: Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180011 1873 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180013 1873 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180016 1873 update_attempter.cc:306] Processing Done. Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180020 1873 update_attempter.cc:310] Error event sent. Apr 16 23:32:11.180040 update_engine[1873]: I20260416 23:32:11.180027 1873 update_check_scheduler.cc:74] Next update check in 44m7s Apr 16 23:32:11.180430 locksmithd[1957]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0