Apr 23 23:15:19.082093 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Apr 23 23:15:19.082111 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 23 21:57:58 -00 2026 Apr 23 23:15:19.082118 kernel: KASLR enabled Apr 23 23:15:19.082122 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 23 23:15:19.082125 kernel: printk: legacy bootconsole [pl11] enabled Apr 23 23:15:19.082130 kernel: efi: EFI v2.7 by EDK II Apr 23 23:15:19.082135 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Apr 23 23:15:19.082139 kernel: random: crng init done Apr 23 23:15:19.082143 kernel: secureboot: Secure boot disabled Apr 23 23:15:19.082147 kernel: ACPI: Early table checksum verification disabled Apr 23 23:15:19.082151 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Apr 23 23:15:19.082154 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082158 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082162 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 23 23:15:19.082168 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082173 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082177 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082181 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082185 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082191 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL 
MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082195 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 23 23:15:19.082199 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 23 23:15:19.082203 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 23 23:15:19.082207 kernel: ACPI: Use ACPI SPCR as default console: Yes Apr 23 23:15:19.082211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 23 23:15:19.082215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Apr 23 23:15:19.082219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Apr 23 23:15:19.082223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 23 23:15:19.082228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 23 23:15:19.082232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 23 23:15:19.082237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 23 23:15:19.082241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 23 23:15:19.082245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 23 23:15:19.082249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 23 23:15:19.082253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Apr 23 23:15:19.082257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Apr 23 23:15:19.082261 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Apr 23 23:15:19.082265 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Apr 23 23:15:19.082269 kernel: Zone ranges: Apr 23 23:15:19.082274 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 23 23:15:19.082281 kernel: DMA32 empty Apr 23 23:15:19.082285 
kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 23 23:15:19.082289 kernel: Device empty Apr 23 23:15:19.082294 kernel: Movable zone start for each node Apr 23 23:15:19.082298 kernel: Early memory node ranges Apr 23 23:15:19.082302 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 23 23:15:19.082307 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Apr 23 23:15:19.082312 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Apr 23 23:15:19.082316 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Apr 23 23:15:19.082321 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Apr 23 23:15:19.082325 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Apr 23 23:15:19.082329 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 23 23:15:19.082334 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 23 23:15:19.082338 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 23 23:15:19.082342 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Apr 23 23:15:19.082347 kernel: psci: probing for conduit method from ACPI. Apr 23 23:15:19.082351 kernel: psci: PSCIv1.3 detected in firmware. Apr 23 23:15:19.082355 kernel: psci: Using standard PSCI v0.2 function IDs Apr 23 23:15:19.082360 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Apr 23 23:15:19.082365 kernel: psci: SMC Calling Convention v1.4 Apr 23 23:15:19.082369 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 23 23:15:19.082374 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 23 23:15:19.082378 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168 Apr 23 23:15:19.082382 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096 Apr 23 23:15:19.082387 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 23 23:15:19.082391 kernel: Detected PIPT I-cache on CPU0 Apr 23 23:15:19.082395 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Apr 23 23:15:19.082400 kernel: CPU features: detected: GIC system register CPU interface Apr 23 23:15:19.082404 kernel: CPU features: detected: Spectre-v4 Apr 23 23:15:19.082408 kernel: CPU features: detected: Spectre-BHB Apr 23 23:15:19.082414 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 23 23:15:19.082418 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 23 23:15:19.082422 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Apr 23 23:15:19.082427 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 23 23:15:19.082431 kernel: alternatives: applying boot alternatives Apr 23 23:15:19.082436 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d Apr 23 23:15:19.082441 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 23 23:15:19.082445 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 23 23:15:19.082450 kernel: Fallback order for Node 0: 0 Apr 23 
23:15:19.082454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Apr 23 23:15:19.082459 kernel: Policy zone: Normal Apr 23 23:15:19.082464 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 23 23:15:19.082468 kernel: software IO TLB: area num 2. Apr 23 23:15:19.082472 kernel: software IO TLB: mapped [mem 0x00000000358f0000-0x00000000398f0000] (64MB) Apr 23 23:15:19.082477 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 23 23:15:19.082481 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 23 23:15:19.082486 kernel: rcu: RCU event tracing is enabled. Apr 23 23:15:19.082491 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 23 23:15:19.082495 kernel: Trampoline variant of Tasks RCU enabled. Apr 23 23:15:19.082499 kernel: Tracing variant of Tasks RCU enabled. Apr 23 23:15:19.082504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 23 23:15:19.082508 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 23 23:15:19.082513 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 23 23:15:19.082518 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 23 23:15:19.082522 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 23 23:15:19.082527 kernel: GICv3: 960 SPIs implemented Apr 23 23:15:19.082531 kernel: GICv3: 0 Extended SPIs implemented Apr 23 23:15:19.082536 kernel: Root IRQ handler: gic_handle_irq Apr 23 23:15:19.082540 kernel: GICv3: GICv3 features: 16 PPIs, RSS Apr 23 23:15:19.082544 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Apr 23 23:15:19.082549 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 23 23:15:19.082553 kernel: ITS: No ITS available, not enabling LPIs Apr 23 23:15:19.082557 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 23 23:15:19.082563 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Apr 23 23:15:19.082567 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 23 23:15:19.082572 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Apr 23 23:15:19.082576 kernel: Console: colour dummy device 80x25 Apr 23 23:15:19.082581 kernel: printk: legacy console [tty1] enabled Apr 23 23:15:19.082586 kernel: ACPI: Core revision 20240827 Apr 23 23:15:19.082590 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Apr 23 23:15:19.082595 kernel: pid_max: default: 32768 minimum: 301 Apr 23 23:15:19.082599 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Apr 23 23:15:19.082604 kernel: landlock: Up and running. Apr 23 23:15:19.082609 kernel: SELinux: Initializing. Apr 23 23:15:19.082614 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 23 23:15:19.082618 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 23 23:15:19.082623 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Apr 23 23:15:19.082627 kernel: Hyper-V: Host Build 10.0.26102.1283-1-0 Apr 23 23:15:19.082635 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 23 23:15:19.082641 kernel: rcu: Hierarchical SRCU implementation. Apr 23 23:15:19.082645 kernel: rcu: Max phase no-delay instances is 400. Apr 23 23:15:19.082650 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Apr 23 23:15:19.082655 kernel: Remapping and enabling EFI services. Apr 23 23:15:19.082660 kernel: smp: Bringing up secondary CPUs ... 
Apr 23 23:15:19.082664 kernel: Detected PIPT I-cache on CPU1 Apr 23 23:15:19.082670 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 23 23:15:19.082675 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Apr 23 23:15:19.082679 kernel: smp: Brought up 1 node, 2 CPUs Apr 23 23:15:19.082684 kernel: SMP: Total of 2 processors activated. Apr 23 23:15:19.082689 kernel: CPU: All CPU(s) started at EL1 Apr 23 23:15:19.082694 kernel: CPU features: detected: 32-bit EL0 Support Apr 23 23:15:19.082699 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 23 23:15:19.084741 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 23 23:15:19.084768 kernel: CPU features: detected: Common not Private translations Apr 23 23:15:19.084774 kernel: CPU features: detected: CRC32 instructions Apr 23 23:15:19.084780 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Apr 23 23:15:19.084785 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 23 23:15:19.084790 kernel: CPU features: detected: LSE atomic instructions Apr 23 23:15:19.084794 kernel: CPU features: detected: Privileged Access Never Apr 23 23:15:19.084804 kernel: CPU features: detected: Speculation barrier (SB) Apr 23 23:15:19.084809 kernel: CPU features: detected: TLB range maintenance instructions Apr 23 23:15:19.084814 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 23 23:15:19.084819 kernel: CPU features: detected: Scalable Vector Extension Apr 23 23:15:19.084824 kernel: alternatives: applying system-wide alternatives Apr 23 23:15:19.084828 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Apr 23 23:15:19.084833 kernel: SVE: maximum available vector length 16 bytes per vector Apr 23 23:15:19.084838 kernel: SVE: default vector length 16 bytes per vector Apr 23 23:15:19.084845 kernel: Memory: 3952756K/4194160K 
available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 220208K reserved, 16384K cma-reserved) Apr 23 23:15:19.084851 kernel: devtmpfs: initialized Apr 23 23:15:19.084856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 23 23:15:19.084861 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 23 23:15:19.084866 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 23 23:15:19.084871 kernel: 0 pages in range for non-PLT usage Apr 23 23:15:19.084876 kernel: 508384 pages in range for PLT usage Apr 23 23:15:19.084880 kernel: pinctrl core: initialized pinctrl subsystem Apr 23 23:15:19.084885 kernel: SMBIOS 3.1.0 present. Apr 23 23:15:19.084892 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026 Apr 23 23:15:19.084896 kernel: DMI: Memory slots populated: 2/2 Apr 23 23:15:19.084901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 23 23:15:19.084906 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 23 23:15:19.084911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 23 23:15:19.084916 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 23 23:15:19.084921 kernel: audit: initializing netlink subsys (disabled) Apr 23 23:15:19.084925 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Apr 23 23:15:19.084930 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 23 23:15:19.084936 kernel: cpuidle: using governor menu Apr 23 23:15:19.084941 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Apr 23 23:15:19.084946 kernel: ASID allocator initialised with 32768 entries Apr 23 23:15:19.084951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 23 23:15:19.084955 kernel: Serial: AMBA PL011 UART driver Apr 23 23:15:19.084960 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 23 23:15:19.084965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 23 23:15:19.084970 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 23 23:15:19.084975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 23 23:15:19.084981 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 23 23:15:19.084985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 23 23:15:19.084990 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 23 23:15:19.084995 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 23 23:15:19.085000 kernel: ACPI: Added _OSI(Module Device) Apr 23 23:15:19.085004 kernel: ACPI: Added _OSI(Processor Device) Apr 23 23:15:19.085009 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 23 23:15:19.085014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 23 23:15:19.085019 kernel: ACPI: Interpreter enabled Apr 23 23:15:19.085025 kernel: ACPI: Using GIC for interrupt routing Apr 23 23:15:19.085029 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 23 23:15:19.085034 kernel: printk: legacy console [ttyAMA0] enabled Apr 23 23:15:19.085039 kernel: printk: legacy bootconsole [pl11] disabled Apr 23 23:15:19.085044 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 23 23:15:19.085049 kernel: ACPI: CPU0 has been hot-added Apr 23 23:15:19.085054 kernel: ACPI: CPU1 has been hot-added Apr 23 23:15:19.085059 kernel: iommu: Default domain type: Translated Apr 23 23:15:19.085064 kernel: iommu: DMA domain TLB invalidation policy: 
strict mode Apr 23 23:15:19.085069 kernel: efivars: Registered efivars operations Apr 23 23:15:19.085074 kernel: vgaarb: loaded Apr 23 23:15:19.085079 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 23 23:15:19.085084 kernel: VFS: Disk quotas dquot_6.6.0 Apr 23 23:15:19.085088 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 23 23:15:19.085093 kernel: pnp: PnP ACPI init Apr 23 23:15:19.085098 kernel: pnp: PnP ACPI: found 0 devices Apr 23 23:15:19.085103 kernel: NET: Registered PF_INET protocol family Apr 23 23:15:19.085107 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 23 23:15:19.085112 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 23 23:15:19.085118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 23 23:15:19.085123 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 23 23:15:19.085128 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 23 23:15:19.085133 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 23 23:15:19.085137 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:15:19.085142 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:15:19.085147 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 23 23:15:19.085152 kernel: PCI: CLS 0 bytes, default 64 Apr 23 23:15:19.085157 kernel: kvm [1]: HYP mode not available Apr 23 23:15:19.085162 kernel: Initialise system trusted keyrings Apr 23 23:15:19.085167 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 23 23:15:19.085172 kernel: Key type asymmetric registered Apr 23 23:15:19.085177 kernel: Asymmetric key parser 'x509' registered Apr 23 23:15:19.085182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 23 23:15:19.085186 kernel: io scheduler mq-deadline 
registered Apr 23 23:15:19.085191 kernel: io scheduler kyber registered Apr 23 23:15:19.085196 kernel: io scheduler bfq registered Apr 23 23:15:19.085201 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 23 23:15:19.085206 kernel: thunder_xcv, ver 1.0 Apr 23 23:15:19.085211 kernel: thunder_bgx, ver 1.0 Apr 23 23:15:19.085216 kernel: nicpf, ver 1.0 Apr 23 23:15:19.085220 kernel: nicvf, ver 1.0 Apr 23 23:15:19.085354 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 23 23:15:19.085407 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-23T23:15:18 UTC (1776986118) Apr 23 23:15:19.085413 kernel: efifb: probing for efifb Apr 23 23:15:19.085420 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 23 23:15:19.085425 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 23 23:15:19.085430 kernel: efifb: scrolling: redraw Apr 23 23:15:19.085435 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 23 23:15:19.085440 kernel: Console: switching to colour frame buffer device 128x48 Apr 23 23:15:19.085445 kernel: fb0: EFI VGA frame buffer device Apr 23 23:15:19.085450 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Apr 23 23:15:19.085454 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 23 23:15:19.085460 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Apr 23 23:15:19.085466 kernel: watchdog: NMI not fully supported Apr 23 23:15:19.085471 kernel: watchdog: Hard watchdog permanently disabled Apr 23 23:15:19.085476 kernel: NET: Registered PF_INET6 protocol family Apr 23 23:15:19.085481 kernel: Segment Routing with IPv6 Apr 23 23:15:19.085486 kernel: In-situ OAM (IOAM) with IPv6 Apr 23 23:15:19.085491 kernel: NET: Registered PF_PACKET protocol family Apr 23 23:15:19.085496 kernel: Key type dns_resolver registered Apr 23 23:15:19.085501 kernel: registered taskstats version 1 Apr 23 23:15:19.085506 kernel: Loading compiled-in X.509 certificates Apr 23 23:15:19.085511 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 1129832e4b4ea3c9ff0dc43e02ec7de2e4d9d907' Apr 23 23:15:19.085517 kernel: Demotion targets for Node 0: null Apr 23 23:15:19.085522 kernel: Key type .fscrypt registered Apr 23 23:15:19.085526 kernel: Key type fscrypt-provisioning registered Apr 23 23:15:19.085531 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 23 23:15:19.085536 kernel: ima: Allocated hash algorithm: sha1 Apr 23 23:15:19.085541 kernel: ima: No architecture policies found Apr 23 23:15:19.085546 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 23 23:15:19.085551 kernel: clk: Disabling unused clocks Apr 23 23:15:19.085556 kernel: PM: genpd: Disabling unused power domains Apr 23 23:15:19.085562 kernel: Warning: unable to open an initial console. 
Apr 23 23:15:19.085567 kernel: Freeing unused kernel memory: 39552K Apr 23 23:15:19.085571 kernel: Run /init as init process Apr 23 23:15:19.085576 kernel: with arguments: Apr 23 23:15:19.085581 kernel: /init Apr 23 23:15:19.085585 kernel: with environment: Apr 23 23:15:19.085590 kernel: HOME=/ Apr 23 23:15:19.085595 kernel: TERM=linux Apr 23 23:15:19.085601 systemd[1]: Successfully made /usr/ read-only. Apr 23 23:15:19.085609 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:15:19.085615 systemd[1]: Detected virtualization microsoft. Apr 23 23:15:19.085620 systemd[1]: Detected architecture arm64. Apr 23 23:15:19.085625 systemd[1]: Running in initrd. Apr 23 23:15:19.085630 systemd[1]: No hostname configured, using default hostname. Apr 23 23:15:19.085636 systemd[1]: Hostname set to . Apr 23 23:15:19.085641 systemd[1]: Initializing machine ID from random generator. Apr 23 23:15:19.085648 systemd[1]: Queued start job for default target initrd.target. Apr 23 23:15:19.085653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:15:19.085659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:15:19.085665 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 23 23:15:19.085671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:15:19.085676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Apr 23 23:15:19.085682 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 23 23:15:19.085689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 23 23:15:19.085695 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 23 23:15:19.085700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 23 23:15:19.085782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:15:19.085789 systemd[1]: Reached target paths.target - Path Units. Apr 23 23:15:19.085795 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:15:19.085800 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:15:19.085805 systemd[1]: Reached target timers.target - Timer Units. Apr 23 23:15:19.085812 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:15:19.085817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:15:19.085823 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 23 23:15:19.085828 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 23 23:15:19.085833 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:15:19.085838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:15:19.085844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:15:19.085849 systemd[1]: Reached target sockets.target - Socket Units. Apr 23 23:15:19.085854 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 23 23:15:19.085860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:15:19.085865 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 23 23:15:19.085871 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 23 23:15:19.085876 systemd[1]: Starting systemd-fsck-usr.service... Apr 23 23:15:19.085881 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:15:19.085887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 23 23:15:19.085909 systemd-journald[225]: Collecting audit messages is disabled. Apr 23 23:15:19.085923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:15:19.085930 systemd-journald[225]: Journal started Apr 23 23:15:19.085945 systemd-journald[225]: Runtime Journal (/run/log/journal/935579f3795848a4bdacb10edc76f745) is 8M, max 78.3M, 70.3M free. Apr 23 23:15:19.086982 systemd-modules-load[227]: Inserted module 'overlay' Apr 23 23:15:19.105733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 23 23:15:19.105769 systemd[1]: Started systemd-journald.service - Journal Service. Apr 23 23:15:19.118683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 23 23:15:19.129753 kernel: Bridge firewalling registered Apr 23 23:15:19.123893 systemd-modules-load[227]: Inserted module 'br_netfilter' Apr 23 23:15:19.124729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:15:19.141880 systemd[1]: Finished systemd-fsck-usr.service. Apr 23 23:15:19.155266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:15:19.160523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:15:19.171648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 23 23:15:19.193865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:15:19.200855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 23 23:15:19.220313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:15:19.241120 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:15:19.247236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:15:19.254319 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 23 23:15:19.261182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:15:19.270876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:15:19.284758 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 23 23:15:19.306638 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 23 23:15:19.313598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 23 23:15:19.336863 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d Apr 23 23:15:19.330613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:15:19.385555 systemd-resolved[263]: Positive Trust Anchors: Apr 23 23:15:19.385571 systemd-resolved[263]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 23 23:15:19.385591 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 23 23:15:19.434901 kernel: SCSI subsystem initialized Apr 23 23:15:19.434926 kernel: Loading iSCSI transport class v2.0-870. Apr 23 23:15:19.387329 systemd-resolved[263]: Defaulting to hostname 'linux'. Apr 23 23:15:19.388874 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 23 23:15:19.399963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:15:19.452719 kernel: iscsi: registered transport (tcp) Apr 23 23:15:19.467034 kernel: iscsi: registered transport (qla4xxx) Apr 23 23:15:19.467082 kernel: QLogic iSCSI HBA Driver Apr 23 23:15:19.480305 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:15:19.495148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:15:19.508436 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:15:19.552475 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 23 23:15:19.559834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 23 23:15:19.619730 kernel: raid6: neonx8 gen() 18540 MB/s Apr 23 23:15:19.638716 kernel: raid6: neonx4 gen() 18569 MB/s Apr 23 23:15:19.657714 kernel: raid6: neonx2 gen() 17066 MB/s Apr 23 23:15:19.677715 kernel: raid6: neonx1 gen() 15112 MB/s Apr 23 23:15:19.696713 kernel: raid6: int64x8 gen() 10546 MB/s Apr 23 23:15:19.715713 kernel: raid6: int64x4 gen() 10620 MB/s Apr 23 23:15:19.735738 kernel: raid6: int64x2 gen() 8997 MB/s Apr 23 23:15:19.756928 kernel: raid6: int64x1 gen() 7059 MB/s Apr 23 23:15:19.756939 kernel: raid6: using algorithm neonx4 gen() 18569 MB/s Apr 23 23:15:19.778934 kernel: raid6: .... xor() 15147 MB/s, rmw enabled Apr 23 23:15:19.778941 kernel: raid6: using neon recovery algorithm Apr 23 23:15:19.786956 kernel: xor: measuring software checksum speed Apr 23 23:15:19.786966 kernel: 8regs : 28615 MB/sec Apr 23 23:15:19.789596 kernel: 32regs : 28816 MB/sec Apr 23 23:15:19.795859 kernel: arm64_neon : 34559 MB/sec Apr 23 23:15:19.795866 kernel: xor: using function: arm64_neon (34559 MB/sec) Apr 23 23:15:19.833725 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 23 23:15:19.839388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:15:19.849867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:15:19.874859 systemd-udevd[474]: Using default interface naming scheme 'v255'. Apr 23 23:15:19.878988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:15:19.891757 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 23 23:15:19.919164 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Apr 23 23:15:19.941052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:15:19.947265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:15:19.994338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 23 23:15:20.001454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 23 23:15:20.078197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 23 23:15:20.078266 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:15:20.107993 kernel: hv_vmbus: Vmbus version:5.3
Apr 23 23:15:20.108011 kernel: hv_vmbus: registering driver hid_hyperv
Apr 23 23:15:20.108018 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 23 23:15:20.108033 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 23 23:15:20.096075 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:15:20.122399 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Apr 23 23:15:20.122436 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 23 23:15:20.131720 kernel: hv_vmbus: registering driver hv_netvsc
Apr 23 23:15:20.135592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:15:20.153290 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 23 23:15:20.153311 kernel: hv_vmbus: registering driver hv_storvsc
Apr 23 23:15:20.153319 kernel: scsi host1: storvsc_host_t
Apr 23 23:15:20.153457 kernel: scsi host0: storvsc_host_t
Apr 23 23:15:20.153531 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 23 23:15:20.151177 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 23 23:15:20.172121 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Apr 23 23:15:20.173229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 23 23:15:20.184864 kernel: PTP clock support registered
Apr 23 23:15:20.173305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:15:20.199232 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 23 23:15:20.194860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:15:20.215311 kernel: hv_utils: Registering HyperV Utility Driver
Apr 23 23:15:20.215352 kernel: hv_vmbus: registering driver hv_utils
Apr 23 23:15:20.217041 kernel: hv_utils: Heartbeat IC version 3.0
Apr 23 23:15:20.221021 kernel: hv_utils: Shutdown IC version 3.2
Apr 23 23:15:20.223827 kernel: hv_utils: TimeSync IC version 4.0
Apr 23 23:15:19.934119 systemd-resolved[263]: Clock change detected. Flushing caches.
Apr 23 23:15:19.948530 systemd-journald[225]: Time jumped backwards, rotating.
Apr 23 23:15:19.948563 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 23 23:15:19.957199 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 23 23:15:19.957358 kernel: hv_netvsc 002248b3-9b62-0022-48b3-9b62002248b3 eth0: VF slot 1 added
Apr 23 23:15:19.952389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:15:19.976545 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 23 23:15:19.976674 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 23 23:15:19.976756 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 23 23:15:19.989699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:15:19.994740 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 23 23:15:19.994931 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 23 23:15:19.999822 kernel: hv_vmbus: registering driver hv_pci
Apr 23 23:15:19.999854 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 23 23:15:20.008126 kernel: hv_pci 9a5113d3-7e74-4459-a501-a0a79305e065: PCI VMBus probing: Using version 0x10004
Apr 23 23:15:20.008743 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 23 23:15:20.018733 kernel: hv_pci 9a5113d3-7e74-4459-a501-a0a79305e065: PCI host bridge to bus 7e74:00
Apr 23 23:15:20.018911 kernel: pci_bus 7e74:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 23 23:15:20.023725 kernel: pci_bus 7e74:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 23 23:15:20.029710 kernel: pci 7e74:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:15:20.035691 kernel: pci 7e74:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 23 23:15:20.046036 kernel: pci 7e74:00:02.0: enabling Extended Tags
Apr 23 23:15:20.046104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 23 23:15:20.062773 kernel: pci 7e74:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7e74:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Apr 23 23:15:20.072634 kernel: pci_bus 7e74:00: busn_res: [bus 00-ff] end is updated to 00
Apr 23 23:15:20.072822 kernel: pci 7e74:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Apr 23 23:15:20.085754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#229 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 23 23:15:20.141716 kernel: mlx5_core 7e74:00:02.0: enabling device (0000 -> 0002)
Apr 23 23:15:20.149747 kernel: mlx5_core 7e74:00:02.0: PTM is not supported by PCIe
Apr 23 23:15:20.149913 kernel: mlx5_core 7e74:00:02.0: firmware version: 16.30.5026
Apr 23 23:15:20.319840 kernel: hv_netvsc 002248b3-9b62-0022-48b3-9b62002248b3 eth0: VF registering: eth1
Apr 23 23:15:20.325702 kernel: mlx5_core 7e74:00:02.0 eth1: joined to eth0
Apr 23 23:15:20.331692 kernel: mlx5_core 7e74:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 23 23:15:20.341781 kernel: mlx5_core 7e74:00:02.0 enP32372s1: renamed from eth1
Apr 23 23:15:20.466494 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 23 23:15:20.554646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 23 23:15:20.589737 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 23 23:15:20.637279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 23 23:15:20.642764 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 23 23:15:20.655272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 23 23:15:20.665087 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 23 23:15:20.673813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:15:20.683350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 23 23:15:20.692940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 23 23:15:20.709720 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 23 23:15:20.727262 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 23 23:15:20.742700 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:15:21.765604 disk-uuid[664]: The operation has completed successfully.
Apr 23 23:15:21.769963 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:15:21.836829 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 23 23:15:21.838700 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 23 23:15:21.865558 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 23 23:15:21.887190 sh[822]: Success
Apr 23 23:15:21.921908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 23 23:15:21.921965 kernel: device-mapper: uevent: version 1.0.3
Apr 23 23:15:21.926780 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 23 23:15:21.935701 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Apr 23 23:15:22.197041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 23 23:15:22.207068 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 23 23:15:22.212122 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 23 23:15:22.239723 kernel: BTRFS: device fsid 2db32ba8-c7e9-4b6a-ba75-58982c25581e devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (840)
Apr 23 23:15:22.249687 kernel: BTRFS info (device dm-0): first mount of filesystem 2db32ba8-c7e9-4b6a-ba75-58982c25581e
Apr 23 23:15:22.249708 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 23 23:15:22.559883 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 23 23:15:22.559975 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 23 23:15:22.625671 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 23 23:15:22.630630 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 23 23:15:22.638514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 23 23:15:22.639271 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 23 23:15:22.661487 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 23 23:15:22.693704 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (863)
Apr 23 23:15:22.704777 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0
Apr 23 23:15:22.704821 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 23 23:15:22.732104 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:15:22.732169 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:15:22.741709 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0
Apr 23 23:15:22.744720 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 23 23:15:22.752840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 23 23:15:22.781775 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 23 23:15:22.793818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 23 23:15:22.824501 systemd-networkd[1009]: lo: Link UP
Apr 23 23:15:22.824515 systemd-networkd[1009]: lo: Gained carrier
Apr 23 23:15:22.825315 systemd-networkd[1009]: Enumeration completed
Apr 23 23:15:22.827350 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 23 23:15:22.830751 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:15:22.830755 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:15:22.835240 systemd[1]: Reached target network.target - Network.
Apr 23 23:15:22.904878 kernel: mlx5_core 7e74:00:02.0 enP32372s1: Link up
Apr 23 23:15:22.938769 kernel: hv_netvsc 002248b3-9b62-0022-48b3-9b62002248b3 eth0: Data path switched to VF: enP32372s1
Apr 23 23:15:22.939190 systemd-networkd[1009]: enP32372s1: Link UP
Apr 23 23:15:22.939248 systemd-networkd[1009]: eth0: Link UP
Apr 23 23:15:22.939381 systemd-networkd[1009]: eth0: Gained carrier
Apr 23 23:15:22.939396 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:15:22.957437 systemd-networkd[1009]: enP32372s1: Gained carrier
Apr 23 23:15:22.967723 systemd-networkd[1009]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 23 23:15:23.940718 ignition[978]: Ignition 2.22.0
Apr 23 23:15:23.940727 ignition[978]: Stage: fetch-offline
Apr 23 23:15:23.944915 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 23 23:15:23.940821 ignition[978]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:23.952826 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 23 23:15:23.940827 ignition[978]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:23.940896 ignition[978]: parsed url from cmdline: ""
Apr 23 23:15:23.940899 ignition[978]: no config URL provided
Apr 23 23:15:23.940902 ignition[978]: reading system config file "/usr/lib/ignition/user.ign"
Apr 23 23:15:23.940907 ignition[978]: no config at "/usr/lib/ignition/user.ign"
Apr 23 23:15:23.940911 ignition[978]: failed to fetch config: resource requires networking
Apr 23 23:15:23.941127 ignition[978]: Ignition finished successfully
Apr 23 23:15:23.988987 ignition[1018]: Ignition 2.22.0
Apr 23 23:15:23.988992 ignition[1018]: Stage: fetch
Apr 23 23:15:23.989231 ignition[1018]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:23.989238 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:23.989325 ignition[1018]: parsed url from cmdline: ""
Apr 23 23:15:23.989328 ignition[1018]: no config URL provided
Apr 23 23:15:23.989331 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign"
Apr 23 23:15:23.989339 ignition[1018]: no config at "/usr/lib/ignition/user.ign"
Apr 23 23:15:23.989354 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 23 23:15:24.081719 ignition[1018]: GET result: OK
Apr 23 23:15:24.082757 ignition[1018]: config has been read from IMDS userdata
Apr 23 23:15:24.082797 ignition[1018]: parsing config with SHA512: b87193687b8edffbe70529d049e930474497b3b810cc308f74a76dccc6a19d18c975609d0ccefce37fe2697f3627901de4897e5c03de802081b3ca857befb3d5
Apr 23 23:15:24.086214 unknown[1018]: fetched base config from "system"
Apr 23 23:15:24.086485 ignition[1018]: fetch: fetch complete
Apr 23 23:15:24.086219 unknown[1018]: fetched base config from "system"
Apr 23 23:15:24.086489 ignition[1018]: fetch: fetch passed
Apr 23 23:15:24.086222 unknown[1018]: fetched user config from "azure"
Apr 23 23:15:24.086538 ignition[1018]: Ignition finished successfully
Apr 23 23:15:24.091743 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 23 23:15:24.098278 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 23 23:15:24.133329 ignition[1025]: Ignition 2.22.0
Apr 23 23:15:24.133346 ignition[1025]: Stage: kargs
Apr 23 23:15:24.137441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 23 23:15:24.133516 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:24.144469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 23 23:15:24.133524 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:24.134030 ignition[1025]: kargs: kargs passed
Apr 23 23:15:24.134073 ignition[1025]: Ignition finished successfully
Apr 23 23:15:24.181207 ignition[1031]: Ignition 2.22.0
Apr 23 23:15:24.181224 ignition[1031]: Stage: disks
Apr 23 23:15:24.185106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 23 23:15:24.181407 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:24.192255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 23 23:15:24.181413 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:24.200464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 23 23:15:24.182037 ignition[1031]: disks: disks passed
Apr 23 23:15:24.209718 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 23 23:15:24.182078 ignition[1031]: Ignition finished successfully
Apr 23 23:15:24.218425 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 23 23:15:24.227406 systemd[1]: Reached target basic.target - Basic System.
Apr 23 23:15:24.237477 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 23 23:15:24.320087 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Apr 23 23:15:24.328636 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 23 23:15:24.335993 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 23 23:15:24.415890 systemd-networkd[1009]: eth0: Gained IPv6LL
Apr 23 23:15:24.625746 kernel: EXT4-fs (sda9): mounted filesystem 753efcb9-de86-4e47-981f-2dbd4690452d r/w with ordered data mode. Quota mode: none.
Apr 23 23:15:24.626805 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 23 23:15:24.633185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 23 23:15:24.655623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 23 23:15:24.671783 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 23 23:15:24.681455 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 23 23:15:24.693028 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 23 23:15:24.693066 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 23 23:15:24.709938 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 23 23:15:24.723422 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 23 23:15:24.745698 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1053)
Apr 23 23:15:24.757025 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0
Apr 23 23:15:24.757062 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 23 23:15:24.766789 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:15:24.766845 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:15:24.768428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 23 23:15:25.242937 coreos-metadata[1055]: Apr 23 23:15:25.242 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 23 23:15:25.249634 coreos-metadata[1055]: Apr 23 23:15:25.246 INFO Fetch successful
Apr 23 23:15:25.249634 coreos-metadata[1055]: Apr 23 23:15:25.246 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 23 23:15:25.261996 coreos-metadata[1055]: Apr 23 23:15:25.254 INFO Fetch successful
Apr 23 23:15:25.269903 coreos-metadata[1055]: Apr 23 23:15:25.269 INFO wrote hostname ci-4459.2.4-n-357a044314 to /sysroot/etc/hostname
Apr 23 23:15:25.277409 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 23 23:15:25.482048 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Apr 23 23:15:25.504319 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Apr 23 23:15:25.511006 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Apr 23 23:15:25.518374 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 23 23:15:26.557152 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 23 23:15:26.562645 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 23 23:15:26.579260 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 23 23:15:26.590034 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 23 23:15:26.599114 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0
Apr 23 23:15:26.622111 ignition[1173]: INFO : Ignition 2.22.0
Apr 23 23:15:26.622111 ignition[1173]: INFO : Stage: mount
Apr 23 23:15:26.622111 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:26.622111 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:26.647754 ignition[1173]: INFO : mount: mount passed
Apr 23 23:15:26.647754 ignition[1173]: INFO : Ignition finished successfully
Apr 23 23:15:26.625031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 23 23:15:26.630140 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 23 23:15:26.638703 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 23 23:15:26.667959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 23 23:15:26.699698 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185)
Apr 23 23:15:26.709688 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0
Apr 23 23:15:26.709727 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 23 23:15:26.719229 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:15:26.719265 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:15:26.720990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 23 23:15:26.751011 ignition[1202]: INFO : Ignition 2.22.0
Apr 23 23:15:26.754181 ignition[1202]: INFO : Stage: files
Apr 23 23:15:26.754181 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:26.754181 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:26.765749 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping
Apr 23 23:15:26.783491 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 23 23:15:26.783491 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 23 23:15:26.853332 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 23 23:15:26.859368 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 23 23:15:26.859368 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 23 23:15:26.853636 unknown[1202]: wrote ssh authorized keys file for user: core
Apr 23 23:15:26.884545 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 23 23:15:26.892440 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 23 23:15:26.943926 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 23 23:15:27.362878 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 23 23:15:27.371067 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 23 23:15:27.834904 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 23 23:15:29.464094 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 23 23:15:29.464094 ignition[1202]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 23 23:15:30.118794 ignition[1202]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 23 23:15:30.129437 ignition[1202]: INFO : files: files passed
Apr 23 23:15:30.129437 ignition[1202]: INFO : Ignition finished successfully
Apr 23 23:15:30.129617 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 23 23:15:30.144875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 23 23:15:30.169261 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 23 23:15:30.193367 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 23 23:15:30.198220 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 23 23:15:30.233836 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:15:30.233836 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:15:30.229926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 23 23:15:30.272075 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:15:30.240305 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 23 23:15:30.253537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 23 23:15:30.305534 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 23 23:15:30.305629 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 23 23:15:30.315844 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 23 23:15:30.325928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 23 23:15:30.334957 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 23 23:15:30.335740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 23 23:15:30.369196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 23 23:15:30.376884 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 23 23:15:30.400266 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 23 23:15:30.405875 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:15:30.415314 systemd[1]: Stopped target timers.target - Timer Units.
Apr 23 23:15:30.425005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 23 23:15:30.425119 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 23 23:15:30.438361 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 23 23:15:30.443062 systemd[1]: Stopped target basic.target - Basic System.
Apr 23 23:15:30.452475 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 23 23:15:30.461502 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 23 23:15:30.470414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 23 23:15:30.479242 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 23 23:15:30.489844 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 23 23:15:30.498286 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 23 23:15:30.509157 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 23 23:15:30.518263 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 23 23:15:30.527598 systemd[1]: Stopped target swap.target - Swaps.
Apr 23 23:15:30.535673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 23 23:15:30.535788 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 23 23:15:30.547527 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 23 23:15:30.552700 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 23 23:15:30.561625 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 23 23:15:30.566335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 23 23:15:30.572421 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 23 23:15:30.572524 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 23 23:15:30.587719 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 23 23:15:30.587804 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 23 23:15:30.593699 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 23 23:15:30.593770 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 23 23:15:30.602779 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 23 23:15:30.674480 ignition[1256]: INFO : Ignition 2.22.0
Apr 23 23:15:30.674480 ignition[1256]: INFO : Stage: umount
Apr 23 23:15:30.674480 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:15:30.674480 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 23 23:15:30.674480 ignition[1256]: INFO : umount: umount passed
Apr 23 23:15:30.674480 ignition[1256]: INFO : Ignition finished successfully
Apr 23 23:15:30.602843 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 23 23:15:30.619848 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 23 23:15:30.634070 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 23 23:15:30.634195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 23 23:15:30.650212 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 23 23:15:30.663181 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 23 23:15:30.663348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 23 23:15:30.670016 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 23 23:15:30.670125 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 23 23:15:30.685434 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 23 23:15:30.686194 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 23 23:15:30.686526 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 23 23:15:30.694083 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 23 23:15:30.694169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 23 23:15:30.702960 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 23 23:15:30.703000 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 23 23:15:30.709307 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 23 23:15:30.709348 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 23 23:15:30.714378 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 23 23:15:30.714417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 23 23:15:30.724725 systemd[1]: Stopped target network.target - Network. Apr 23 23:15:30.732163 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 23 23:15:30.732214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 23 23:15:30.742250 systemd[1]: Stopped target paths.target - Path Units. Apr 23 23:15:30.751509 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 23 23:15:30.760696 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:15:30.770937 systemd[1]: Stopped target slices.target - Slice Units. Apr 23 23:15:30.778488 systemd[1]: Stopped target sockets.target - Socket Units. Apr 23 23:15:30.786767 systemd[1]: iscsid.socket: Deactivated successfully. Apr 23 23:15:30.786817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:15:30.796245 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 23 23:15:30.796274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Apr 23 23:15:30.805056 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 23 23:15:30.805106 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 23 23:15:30.813853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 23 23:15:30.813879 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 23 23:15:30.822572 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 23 23:15:30.830738 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 23 23:15:30.839721 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 23 23:15:30.839798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 23 23:15:31.058475 kernel: hv_netvsc 002248b3-9b62-0022-48b3-9b62002248b3 eth0: Data path switched from VF: enP32372s1 Apr 23 23:15:30.848941 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 23 23:15:30.849224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 23 23:15:30.860860 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 23 23:15:30.860985 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 23 23:15:30.875341 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 23 23:15:30.875513 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 23 23:15:30.875613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 23 23:15:30.889413 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 23 23:15:30.890113 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 23 23:15:30.899608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 23 23:15:30.899649 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:15:30.909261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 23 23:15:30.922477 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 23 23:15:30.922539 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:15:30.933596 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 23 23:15:30.933640 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:15:30.951922 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 23 23:15:30.951965 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 23 23:15:30.956980 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 23 23:15:30.957026 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:15:30.969746 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:15:30.975659 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 23 23:15:30.975742 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:15:30.996375 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 23 23:15:30.996566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:15:31.005590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 23 23:15:31.005625 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 23 23:15:31.014919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 23 23:15:31.014949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:15:31.024269 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 23 23:15:31.024314 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:15:31.038368 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Apr 23 23:15:31.038423 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 23 23:15:31.058568 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 23 23:15:31.058623 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:15:31.069459 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 23 23:15:31.086189 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 23 23:15:31.086253 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:15:31.096602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 23 23:15:31.310490 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Apr 23 23:15:31.096647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:15:31.108754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:15:31.108806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:15:31.119563 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 23 23:15:31.119608 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 23 23:15:31.119633 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:15:31.119947 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 23 23:15:31.120038 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 23 23:15:31.127696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 23 23:15:31.127769 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 23 23:15:31.138573 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Apr 23 23:15:31.147459 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 23 23:15:31.179573 systemd[1]: Switching root. Apr 23 23:15:31.361636 systemd-journald[225]: Journal stopped Apr 23 23:15:35.841875 kernel: SELinux: policy capability network_peer_controls=1 Apr 23 23:15:35.841899 kernel: SELinux: policy capability open_perms=1 Apr 23 23:15:35.841907 kernel: SELinux: policy capability extended_socket_class=1 Apr 23 23:15:35.841912 kernel: SELinux: policy capability always_check_network=0 Apr 23 23:15:35.841917 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 23 23:15:35.841925 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 23 23:15:35.841931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 23 23:15:35.841936 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 23 23:15:35.841941 kernel: SELinux: policy capability userspace_initial_context=0 Apr 23 23:15:35.841947 kernel: audit: type=1403 audit(1776986132.119:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 23 23:15:35.841954 systemd[1]: Successfully loaded SELinux policy in 198.397ms. Apr 23 23:15:35.841962 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.349ms. Apr 23 23:15:35.841969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:15:35.841975 systemd[1]: Detected virtualization microsoft. Apr 23 23:15:35.841981 systemd[1]: Detected architecture arm64. Apr 23 23:15:35.841987 systemd[1]: Detected first boot. Apr 23 23:15:35.841996 systemd[1]: Hostname set to . Apr 23 23:15:35.842002 systemd[1]: Initializing machine ID from random generator. Apr 23 23:15:35.842008 zram_generator::config[1299]: No configuration found. 
Apr 23 23:15:35.842014 kernel: NET: Registered PF_VSOCK protocol family Apr 23 23:15:35.842020 systemd[1]: Populated /etc with preset unit settings. Apr 23 23:15:35.842026 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 23 23:15:35.842032 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 23 23:15:35.842039 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 23 23:15:35.842045 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 23 23:15:35.842051 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 23 23:15:35.842057 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 23 23:15:35.842063 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 23 23:15:35.842069 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 23 23:15:35.842075 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 23 23:15:35.842082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 23 23:15:35.842088 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 23 23:15:35.842094 systemd[1]: Created slice user.slice - User and Session Slice. Apr 23 23:15:35.842100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:15:35.842106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:15:35.842112 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 23 23:15:35.842118 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 23 23:15:35.842125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Apr 23 23:15:35.842132 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:15:35.842138 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 23 23:15:35.842146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 23 23:15:35.842152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:15:35.842158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 23 23:15:35.842164 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 23 23:15:35.842170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 23 23:15:35.842176 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 23 23:15:35.842183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:15:35.842190 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 23 23:15:35.842196 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:15:35.842202 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:15:35.842208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 23 23:15:35.842214 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 23 23:15:35.842222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 23 23:15:35.842228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:15:35.842234 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:15:35.842240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:15:35.842247 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 23 23:15:35.842253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Apr 23 23:15:35.842259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 23 23:15:35.842267 systemd[1]: Mounting media.mount - External Media Directory... Apr 23 23:15:35.842273 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 23 23:15:35.842279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 23 23:15:35.842285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 23 23:15:35.842292 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 23 23:15:35.842298 systemd[1]: Reached target machines.target - Containers. Apr 23 23:15:35.842304 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 23 23:15:35.842310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:15:35.842318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:15:35.842324 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 23 23:15:35.842330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:15:35.842336 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 23 23:15:35.842342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:15:35.842348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 23 23:15:35.842354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:15:35.842361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 23 23:15:35.842367 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Apr 23 23:15:35.842374 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 23 23:15:35.842380 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 23 23:15:35.842386 systemd[1]: Stopped systemd-fsck-usr.service. Apr 23 23:15:35.842393 kernel: fuse: init (API version 7.41) Apr 23 23:15:35.842399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:15:35.842405 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:15:35.842411 kernel: loop: module loaded Apr 23 23:15:35.842417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 23 23:15:35.842424 kernel: ACPI: bus type drm_connector registered Apr 23 23:15:35.842430 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:15:35.842436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 23 23:15:35.842443 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 23 23:15:35.842467 systemd-journald[1382]: Collecting audit messages is disabled. Apr 23 23:15:35.842483 systemd-journald[1382]: Journal started Apr 23 23:15:35.842498 systemd-journald[1382]: Runtime Journal (/run/log/journal/2d53706eb36a41409ebe77e379fb701f) is 8M, max 78.3M, 70.3M free. Apr 23 23:15:35.061592 systemd[1]: Queued start job for default target multi-user.target. Apr 23 23:15:35.066165 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 23 23:15:35.066531 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 23 23:15:35.066806 systemd[1]: systemd-journald.service: Consumed 2.489s CPU time. 
Apr 23 23:15:35.855089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:15:35.866033 systemd[1]: verity-setup.service: Deactivated successfully. Apr 23 23:15:35.866081 systemd[1]: Stopped verity-setup.service. Apr 23 23:15:35.876737 systemd[1]: Started systemd-journald.service - Journal Service. Apr 23 23:15:35.881799 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 23 23:15:35.886095 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 23 23:15:35.890589 systemd[1]: Mounted media.mount - External Media Directory. Apr 23 23:15:35.895346 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 23 23:15:35.900190 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 23 23:15:35.904760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 23 23:15:35.910884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 23 23:15:35.915902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:15:35.921182 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 23 23:15:35.921315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 23 23:15:35.926544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:15:35.926667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:15:35.931457 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 23 23:15:35.931576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 23 23:15:35.936115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:15:35.936224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:15:35.941880 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Apr 23 23:15:35.942003 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 23 23:15:35.946538 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 23 23:15:35.946650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:15:35.951640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:15:35.956503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:15:35.962148 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 23 23:15:35.967489 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 23 23:15:35.973290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 23 23:15:35.988825 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:15:35.994953 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 23 23:15:36.010785 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 23 23:15:36.015546 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 23 23:15:36.015642 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 23 23:15:36.021525 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 23 23:15:36.028069 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 23 23:15:36.032498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:15:36.040523 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 23 23:15:36.046092 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Apr 23 23:15:36.051157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 23 23:15:36.052133 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 23 23:15:36.056987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 23 23:15:36.059817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:15:36.065020 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 23 23:15:36.073943 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 23 23:15:36.080571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 23 23:15:36.088060 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 23 23:15:36.097059 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 23 23:15:36.104810 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 23 23:15:36.111070 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 23 23:15:36.124703 kernel: loop0: detected capacity change from 0 to 100632 Apr 23 23:15:36.127627 systemd-journald[1382]: Time spent on flushing to /var/log/journal/2d53706eb36a41409ebe77e379fb701f is 37.657ms for 935 entries. Apr 23 23:15:36.127627 systemd-journald[1382]: System Journal (/var/log/journal/2d53706eb36a41409ebe77e379fb701f) is 11.8M, max 2.6G, 2.6G free. Apr 23 23:15:36.212286 systemd-journald[1382]: Received client request to flush runtime journal. Apr 23 23:15:36.212345 systemd-journald[1382]: /var/log/journal/2d53706eb36a41409ebe77e379fb701f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Apr 23 23:15:36.212362 systemd-journald[1382]: Rotating system journal. 
Apr 23 23:15:36.178622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:15:36.214989 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 23 23:15:36.223762 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 23 23:15:36.230804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 23 23:15:36.244157 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 23 23:15:36.245810 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 23 23:15:36.353056 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Apr 23 23:15:36.353071 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Apr 23 23:15:36.355899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:15:36.565705 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 23 23:15:36.601710 kernel: loop1: detected capacity change from 0 to 119840 Apr 23 23:15:36.681098 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 23 23:15:36.688113 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:15:36.715123 systemd-udevd[1460]: Using default interface naming scheme 'v255'. Apr 23 23:15:36.969222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:15:36.981823 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 23 23:15:36.984698 kernel: loop2: detected capacity change from 0 to 209336 Apr 23 23:15:37.036813 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 23 23:15:37.090703 kernel: loop3: detected capacity change from 0 to 27936 Apr 23 23:15:37.095765 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Apr 23 23:15:37.110697 kernel: mousedev: PS/2 mouse device common for all mice Apr 23 23:15:37.153358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 23 23:15:37.153613 kernel: hv_vmbus: registering driver hv_balloon Apr 23 23:15:37.157559 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 23 23:15:37.171444 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 23 23:15:37.171520 kernel: hv_balloon: Memory hot add disabled on ARM64 Apr 23 23:15:37.198123 kernel: hv_vmbus: registering driver hyperv_fb Apr 23 23:15:37.198216 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 23 23:15:37.204965 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 23 23:15:37.214035 kernel: Console: switching to colour dummy device 80x25 Apr 23 23:15:37.219704 kernel: Console: switching to colour frame buffer device 128x48 Apr 23 23:15:37.302778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:15:37.323381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:15:37.324497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:15:37.324709 kernel: MACsec IEEE 802.1AE Apr 23 23:15:37.330781 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:15:37.333962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:15:37.335168 systemd-networkd[1464]: lo: Link UP Apr 23 23:15:37.335175 systemd-networkd[1464]: lo: Gained carrier Apr 23 23:15:37.337444 systemd-networkd[1464]: Enumeration completed Apr 23 23:15:37.338882 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 23 23:15:37.338959 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:15:37.340060 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 23 23:15:37.347543 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 23 23:15:37.356600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 23 23:15:37.393698 kernel: mlx5_core 7e74:00:02.0 enP32372s1: Link up Apr 23 23:15:37.423289 kernel: hv_netvsc 002248b3-9b62-0022-48b3-9b62002248b3 eth0: Data path switched to VF: enP32372s1 Apr 23 23:15:37.422790 systemd-networkd[1464]: enP32372s1: Link UP Apr 23 23:15:37.422900 systemd-networkd[1464]: eth0: Link UP Apr 23 23:15:37.422903 systemd-networkd[1464]: eth0: Gained carrier Apr 23 23:15:37.422921 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:15:37.427015 systemd-networkd[1464]: enP32372s1: Gained carrier Apr 23 23:15:37.430352 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 23 23:15:37.437811 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 23 23:15:37.449114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 23 23:15:37.455995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 23 23:15:37.477699 kernel: loop4: detected capacity change from 0 to 100632 Apr 23 23:15:37.491703 kernel: loop5: detected capacity change from 0 to 119840 Apr 23 23:15:37.503699 kernel: loop6: detected capacity change from 0 to 209336 Apr 23 23:15:37.508722 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 23 23:15:37.522888 kernel: loop7: detected capacity change from 0 to 27936 Apr 23 23:15:37.529977 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 23 23:15:37.530733 (sd-merge)[1605]: Merged extensions into '/usr'. Apr 23 23:15:37.534135 systemd[1]: Reload requested from client PID 1438 ('systemd-sysext') (unit systemd-sysext.service)... Apr 23 23:15:37.534154 systemd[1]: Reloading... Apr 23 23:15:37.584701 zram_generator::config[1633]: No configuration found. Apr 23 23:15:37.754304 systemd[1]: Reloading finished in 219 ms. Apr 23 23:15:37.768847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 23 23:15:37.778592 systemd[1]: Starting ensure-sysext.service... Apr 23 23:15:37.783799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:15:37.797610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:15:37.803266 systemd[1]: Reload requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)... Apr 23 23:15:37.803279 systemd[1]: Reloading... Apr 23 23:15:37.804230 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 23 23:15:37.804289 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 23 23:15:37.804487 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 23 23:15:37.805140 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 23 23:15:37.805730 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 23 23:15:37.806224 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. 
Apr 23 23:15:37.806264 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. Apr 23 23:15:37.826107 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Apr 23 23:15:37.826117 systemd-tmpfiles[1694]: Skipping /boot Apr 23 23:15:37.833011 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Apr 23 23:15:37.833120 systemd-tmpfiles[1694]: Skipping /boot Apr 23 23:15:37.851753 zram_generator::config[1725]: No configuration found. Apr 23 23:15:38.017237 systemd[1]: Reloading finished in 213 ms. Apr 23 23:15:38.027317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:15:38.054869 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 23 23:15:38.105886 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 23 23:15:38.111546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:15:38.117890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:15:38.124183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:15:38.135733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:15:38.140509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:15:38.140609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:15:38.142875 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 23 23:15:38.150928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 23 23:15:38.159441 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 23 23:15:38.166060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:15:38.166235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:15:38.171490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:15:38.174826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:15:38.182291 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 23 23:15:38.182432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:15:38.191778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:15:38.194607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:15:38.204419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:15:38.213107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:15:38.219667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:15:38.220157 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:15:38.223512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:15:38.224824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:15:38.232469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:15:38.233593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:15:38.241588 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 23 23:15:38.242227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:15:38.252079 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 23 23:15:38.262500 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 23 23:15:38.271363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:15:38.274872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:15:38.285663 systemd-resolved[1797]: Positive Trust Anchors: Apr 23 23:15:38.287294 systemd-resolved[1797]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 23 23:15:38.287318 systemd-resolved[1797]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 23 23:15:38.287395 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 23 23:15:38.292472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:15:38.302472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:15:38.308010 systemd-resolved[1797]: Using system hostname 'ci-4459.2.4-n-357a044314'. Apr 23 23:15:38.309063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 23 23:15:38.309105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:15:38.309145 systemd[1]: Reached target time-set.target - System Time Set. Apr 23 23:15:38.314072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 23 23:15:38.319535 systemd[1]: Finished ensure-sysext.service. Apr 23 23:15:38.320453 augenrules[1828]: No rules Apr 23 23:15:38.323454 systemd[1]: audit-rules.service: Deactivated successfully. Apr 23 23:15:38.323624 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 23 23:15:38.327985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:15:38.328118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:15:38.333515 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 23 23:15:38.333654 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 23 23:15:38.338043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:15:38.338161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:15:38.344286 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 23 23:15:38.344433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:15:38.354051 systemd[1]: Reached target network.target - Network. Apr 23 23:15:38.358372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:15:38.363675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 23 23:15:38.363789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 23 23:15:38.495787 systemd-networkd[1464]: eth0: Gained IPv6LL Apr 23 23:15:38.500522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 23 23:15:38.507324 systemd[1]: Reached target network-online.target - Network is Online. Apr 23 23:15:38.900794 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 23 23:15:38.906411 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 23 23:15:41.660391 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 23 23:15:41.680353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 23 23:15:41.686795 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 23 23:15:41.700306 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 23 23:15:41.705863 systemd[1]: Reached target sysinit.target - System Initialization. Apr 23 23:15:41.710746 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 23 23:15:41.716385 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 23 23:15:41.722548 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 23 23:15:41.727408 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 23 23:15:41.733447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 23 23:15:41.739335 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 23 23:15:41.739363 systemd[1]: Reached target paths.target - Path Units. 
Apr 23 23:15:41.743148 systemd[1]: Reached target timers.target - Timer Units. Apr 23 23:15:41.748053 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 23 23:15:41.754564 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 23 23:15:41.760624 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 23 23:15:41.766191 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 23 23:15:41.772439 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 23 23:15:41.778525 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 23 23:15:41.783334 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 23 23:15:41.789306 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 23 23:15:41.794362 systemd[1]: Reached target sockets.target - Socket Units. Apr 23 23:15:41.798514 systemd[1]: Reached target basic.target - Basic System. Apr 23 23:15:41.803551 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 23 23:15:41.803580 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 23 23:15:41.821341 systemd[1]: Starting chronyd.service - NTP client/server... Apr 23 23:15:41.832784 systemd[1]: Starting containerd.service - containerd container runtime... Apr 23 23:15:41.840599 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 23 23:15:41.848831 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 23 23:15:41.856627 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 23 23:15:41.867796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 23 23:15:41.880879 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 23 23:15:41.885219 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 23 23:15:41.896841 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 23 23:15:41.897317 jq[1854]: false Apr 23 23:15:41.902593 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 23 23:15:41.903808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:15:41.916814 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 23 23:15:41.921958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 23 23:15:41.928801 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 23 23:15:41.934261 KVP[1856]: KVP starting; pid is:1856 Apr 23 23:15:41.938251 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 23 23:15:41.937878 chronyd[1846]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Apr 23 23:15:41.944846 KVP[1856]: KVP LIC Version: 3.1 Apr 23 23:15:41.949372 kernel: hv_utils: KVP IC version 4.0 Apr 23 23:15:41.951697 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 23 23:15:41.957883 chronyd[1846]: Timezone right/UTC failed leap second check, ignoring Apr 23 23:15:41.958238 chronyd[1846]: Loaded seccomp filter (level 2) Apr 23 23:15:41.958351 extend-filesystems[1855]: Found /dev/sda6 Apr 23 23:15:41.963813 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 23 23:15:41.971813 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 23 23:15:41.976224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 23 23:15:41.976995 systemd[1]: Starting update-engine.service - Update Engine... Apr 23 23:15:41.983551 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 23 23:15:41.988774 extend-filesystems[1855]: Found /dev/sda9 Apr 23 23:15:41.995897 extend-filesystems[1855]: Checking size of /dev/sda9 Apr 23 23:15:41.989639 systemd[1]: Started chronyd.service - NTP client/server. Apr 23 23:15:42.005232 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 23 23:15:42.008939 jq[1877]: true Apr 23 23:15:42.016868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 23 23:15:42.017037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 23 23:15:42.018306 systemd[1]: motdgen.service: Deactivated successfully. Apr 23 23:15:42.018739 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 23 23:15:42.030285 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 23 23:15:42.030486 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 23 23:15:42.040547 extend-filesystems[1855]: Old size kept for /dev/sda9 Apr 23 23:15:42.050581 update_engine[1875]: I20260423 23:15:42.048094 1875 main.cc:92] Flatcar Update Engine starting Apr 23 23:15:42.042063 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 23 23:15:42.042742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 23 23:15:42.061825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 23 23:15:42.071027 jq[1891]: true Apr 23 23:15:42.076299 systemd-logind[1874]: New seat seat0. 
Apr 23 23:15:42.082071 systemd-logind[1874]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Apr 23 23:15:42.082252 systemd[1]: Started systemd-logind.service - User Login Management. Apr 23 23:15:42.091458 (ntainerd)[1896]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 23 23:15:42.120173 tar[1887]: linux-arm64/LICENSE Apr 23 23:15:42.120418 tar[1887]: linux-arm64/helm Apr 23 23:15:42.175473 dbus-daemon[1849]: [system] SELinux support is enabled Apr 23 23:15:42.175669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 23 23:15:42.183608 update_engine[1875]: I20260423 23:15:42.182212 1875 update_check_scheduler.cc:74] Next update check in 10m54s Apr 23 23:15:42.185325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 23 23:15:42.185348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 23 23:15:42.188420 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 23 23:15:42.194706 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 23 23:15:42.194730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 23 23:15:42.203308 systemd[1]: Started update-engine.service - Update Engine. Apr 23 23:15:42.208115 sshd_keygen[1884]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 23 23:15:42.216592 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 23 23:15:42.234881 bash[1938]: Updated "/home/core/.ssh/authorized_keys" Apr 23 23:15:42.237529 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 23 23:15:42.249664 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 23 23:15:42.262593 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 23 23:15:42.270407 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 23 23:15:42.272741 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 23 23:15:42.306986 systemd[1]: issuegen.service: Deactivated successfully. Apr 23 23:15:42.307157 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 23 23:15:42.330463 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 23 23:15:42.349081 coreos-metadata[1848]: Apr 23 23:15:42.349 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 23 23:15:42.352601 coreos-metadata[1848]: Apr 23 23:15:42.352 INFO Fetch successful Apr 23 23:15:42.352887 coreos-metadata[1848]: Apr 23 23:15:42.352 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 23 23:15:42.357395 coreos-metadata[1848]: Apr 23 23:15:42.357 INFO Fetch successful Apr 23 23:15:42.357395 coreos-metadata[1848]: Apr 23 23:15:42.357 INFO Fetching http://168.63.129.16/machine/12e605e1-d8a6-41e3-ae31-51d83dc69fe7/f5d22eb4%2D4da9%2D435e%2D8a89%2D05381a758b45.%5Fci%2D4459.2.4%2Dn%2D357a044314?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 23 23:15:42.362415 coreos-metadata[1848]: Apr 23 23:15:42.360 INFO Fetch successful Apr 23 23:15:42.362415 coreos-metadata[1848]: Apr 23 23:15:42.360 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 23 23:15:42.365109 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Apr 23 23:15:42.373564 coreos-metadata[1848]: Apr 23 23:15:42.373 INFO Fetch successful Apr 23 23:15:42.386109 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 23 23:15:42.401962 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 23 23:15:42.414452 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 23 23:15:42.425577 systemd[1]: Reached target getty.target - Login Prompts. Apr 23 23:15:42.445208 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 23 23:15:42.452270 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 23 23:15:42.458108 locksmithd[1953]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 23 23:15:42.581044 tar[1887]: linux-arm64/README.md Apr 23 23:15:42.595646 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 23 23:15:42.869574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:15:42.883977 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:15:42.974690 containerd[1896]: time="2026-04-23T23:15:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 23 23:15:42.975448 containerd[1896]: time="2026-04-23T23:15:42.975295748Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 23 23:15:42.981167 containerd[1896]: time="2026-04-23T23:15:42.981137452Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.824µs" Apr 23 23:15:42.981262 containerd[1896]: time="2026-04-23T23:15:42.981249156Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 23 23:15:42.981316 containerd[1896]: time="2026-04-23T23:15:42.981305780Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 23 23:15:42.981476 containerd[1896]: time="2026-04-23T23:15:42.981461292Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 23 23:15:42.981537 containerd[1896]: time="2026-04-23T23:15:42.981526636Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 23 23:15:42.981592 containerd[1896]: time="2026-04-23T23:15:42.981583532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 23 23:15:42.981695 containerd[1896]: time="2026-04-23T23:15:42.981668076Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 23 23:15:42.981756 
containerd[1896]: time="2026-04-23T23:15:42.981742492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982003 containerd[1896]: time="2026-04-23T23:15:42.981984508Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982058 containerd[1896]: time="2026-04-23T23:15:42.982049276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982106 containerd[1896]: time="2026-04-23T23:15:42.982093036Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982144 containerd[1896]: time="2026-04-23T23:15:42.982134996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982268 containerd[1896]: time="2026-04-23T23:15:42.982255892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982495 containerd[1896]: time="2026-04-23T23:15:42.982477652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982574 containerd[1896]: time="2026-04-23T23:15:42.982562268Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 23 23:15:42.982609 containerd[1896]: time="2026-04-23T23:15:42.982601660Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 23 23:15:42.982667 containerd[1896]: 
time="2026-04-23T23:15:42.982658292Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 23 23:15:42.982902 containerd[1896]: time="2026-04-23T23:15:42.982885284Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 23 23:15:42.983013 containerd[1896]: time="2026-04-23T23:15:42.983000740Z" level=info msg="metadata content store policy set" policy=shared Apr 23 23:15:42.998652 containerd[1896]: time="2026-04-23T23:15:42.998622436Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 23 23:15:42.998845 containerd[1896]: time="2026-04-23T23:15:42.998793996Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 23 23:15:42.998845 containerd[1896]: time="2026-04-23T23:15:42.998811572Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 23 23:15:42.998845 containerd[1896]: time="2026-04-23T23:15:42.998820572Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 23 23:15:42.998845 containerd[1896]: time="2026-04-23T23:15:42.998828836Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.998836396Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.998979092Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.998992884Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999001732Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999010428Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999016436Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999025556Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999135652Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999150700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999160468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999167844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999174300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999181036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999190372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 23 23:15:42.999297 containerd[1896]: time="2026-04-23T23:15:42.999197220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Apr 23 23:15:42.999502 containerd[1896]: time="2026-04-23T23:15:42.999205044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 23 23:15:42.999502 containerd[1896]: time="2026-04-23T23:15:42.999211444Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 23 23:15:42.999502 containerd[1896]: time="2026-04-23T23:15:42.999218028Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 23 23:15:42.999502 containerd[1896]: time="2026-04-23T23:15:42.999263356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 23 23:15:42.999502 containerd[1896]: time="2026-04-23T23:15:42.999274052Z" level=info msg="Start snapshots syncer" Apr 23 23:15:42.999591 containerd[1896]: time="2026-04-23T23:15:42.999579092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 23 23:15:42.999902 containerd[1896]: time="2026-04-23T23:15:42.999872372Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 23 23:15:43.000039 containerd[1896]: time="2026-04-23T23:15:43.000025404Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 23 23:15:43.000128 containerd[1896]: time="2026-04-23T23:15:43.000115716Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 23 23:15:43.000330 containerd[1896]: time="2026-04-23T23:15:43.000311956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 23 23:15:43.000399 containerd[1896]: time="2026-04-23T23:15:43.000389228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 23 23:15:43.000432 containerd[1896]: time="2026-04-23T23:15:43.000425436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 23 23:15:43.000464 containerd[1896]: time="2026-04-23T23:15:43.000456404Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 23 23:15:43.000504 containerd[1896]: time="2026-04-23T23:15:43.000495356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 23 23:15:43.000545 containerd[1896]: time="2026-04-23T23:15:43.000536996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 23 23:15:43.000583 containerd[1896]: time="2026-04-23T23:15:43.000575652Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 23 23:15:43.000629 containerd[1896]: time="2026-04-23T23:15:43.000620628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 23 23:15:43.000669 containerd[1896]: time="2026-04-23T23:15:43.000660428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 23 23:15:43.000724 containerd[1896]: time="2026-04-23T23:15:43.000713236Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 23 23:15:43.000854 containerd[1896]: time="2026-04-23T23:15:43.000799692Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 23 23:15:43.000854 containerd[1896]: time="2026-04-23T23:15:43.000818164Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 23 23:15:43.000854 containerd[1896]: time="2026-04-23T23:15:43.000825036Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 23 23:15:43.000854 containerd[1896]: time="2026-04-23T23:15:43.000831452Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 23 23:15:43.000854 containerd[1896]: time="2026-04-23T23:15:43.000836076Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.000842220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.000988660Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.001003908Z" level=info msg="runtime interface created" Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.001007988Z" level=info msg="created NRI interface" Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.001013132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.001021412Z" level=info msg="Connect containerd service" Apr 23 23:15:43.001056 containerd[1896]: time="2026-04-23T23:15:43.001039468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 23 23:15:43.002172 
containerd[1896]: time="2026-04-23T23:15:43.001851396Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 23 23:15:43.260060 kubelet[2033]: E0423 23:15:43.259926 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:15:43.262105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:15:43.262214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:15:43.263765 systemd[1]: kubelet.service: Consumed 527ms CPU time, 256.2M memory peak. Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393655020Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393733668Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393757316Z" level=info msg="Start subscribing containerd event" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393790020Z" level=info msg="Start recovering state" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393865532Z" level=info msg="Start event monitor" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393874804Z" level=info msg="Start cni network conf syncer for default" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393879108Z" level=info msg="Start streaming server" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393884828Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393889484Z" level=info msg="runtime interface starting up..." Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393893620Z" level=info msg="starting plugins..." Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.393903204Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 23 23:15:43.394114 containerd[1896]: time="2026-04-23T23:15:43.394004676Z" level=info msg="containerd successfully booted in 0.419888s" Apr 23 23:15:43.394131 systemd[1]: Started containerd.service - containerd container runtime. Apr 23 23:15:43.399970 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 23 23:15:43.408765 systemd[1]: Startup finished in 1.689s (kernel) + 13.592s (initrd) + 11.486s (userspace) = 26.768s. Apr 23 23:15:43.710016 login[2011]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:15:43.711834 login[2012]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:15:43.718926 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 23 23:15:43.719902 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 23 23:15:43.725603 systemd-logind[1874]: New session 2 of user core. Apr 23 23:15:43.728405 systemd-logind[1874]: New session 1 of user core. Apr 23 23:15:43.749132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 23 23:15:43.751965 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 23 23:15:43.766898 (systemd)[2060]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 23 23:15:43.768976 systemd-logind[1874]: New session c1 of user core. Apr 23 23:15:43.889566 systemd[2060]: Queued start job for default target default.target. Apr 23 23:15:43.896464 systemd[2060]: Created slice app.slice - User Application Slice. Apr 23 23:15:43.896705 systemd[2060]: Reached target paths.target - Paths. Apr 23 23:15:43.896752 systemd[2060]: Reached target timers.target - Timers. Apr 23 23:15:43.897877 systemd[2060]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 23 23:15:43.905919 systemd[2060]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 23 23:15:43.906087 systemd[2060]: Reached target sockets.target - Sockets. Apr 23 23:15:43.906184 systemd[2060]: Reached target basic.target - Basic System. Apr 23 23:15:43.906281 systemd[2060]: Reached target default.target - Main User Target. Apr 23 23:15:43.906302 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 23 23:15:43.906498 systemd[2060]: Startup finished in 132ms. Apr 23 23:15:43.907286 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 23 23:15:43.907794 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 23 23:15:44.387005 waagent[2007]: 2026-04-23T23:15:44.382773Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Apr 23 23:15:44.387316 waagent[2007]: 2026-04-23T23:15:44.387162Z INFO Daemon Daemon OS: flatcar 4459.2.4 Apr 23 23:15:44.390509 waagent[2007]: 2026-04-23T23:15:44.390474Z INFO Daemon Daemon Python: 3.11.13 Apr 23 23:15:44.395701 waagent[2007]: 2026-04-23T23:15:44.394751Z INFO Daemon Daemon Run daemon Apr 23 23:15:44.397855 waagent[2007]: 2026-04-23T23:15:44.397814Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Apr 23 23:15:44.404147 waagent[2007]: 2026-04-23T23:15:44.404111Z INFO Daemon Daemon Using waagent for provisioning Apr 23 23:15:44.407988 waagent[2007]: 2026-04-23T23:15:44.407950Z INFO Daemon Daemon Activate resource disk Apr 23 23:15:44.411960 waagent[2007]: 2026-04-23T23:15:44.411927Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 23 23:15:44.420325 waagent[2007]: 2026-04-23T23:15:44.420280Z INFO Daemon Daemon Found device: None Apr 23 23:15:44.423822 waagent[2007]: 2026-04-23T23:15:44.423721Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 23 23:15:44.430092 waagent[2007]: 2026-04-23T23:15:44.430057Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 23 23:15:44.438868 waagent[2007]: 2026-04-23T23:15:44.438826Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 23 23:15:44.443486 waagent[2007]: 2026-04-23T23:15:44.443454Z INFO Daemon Daemon Running default provisioning handler Apr 23 23:15:44.452846 waagent[2007]: 2026-04-23T23:15:44.452798Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 23 23:15:44.462893 waagent[2007]: 2026-04-23T23:15:44.462852Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 23 23:15:44.470308 waagent[2007]: 2026-04-23T23:15:44.470273Z INFO Daemon Daemon cloud-init is enabled: False Apr 23 23:15:44.474274 waagent[2007]: 2026-04-23T23:15:44.474238Z INFO Daemon Daemon Copying ovf-env.xml Apr 23 23:15:44.551637 waagent[2007]: 2026-04-23T23:15:44.551553Z INFO Daemon Daemon Successfully mounted dvd Apr 23 23:15:44.584931 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 23 23:15:44.586667 waagent[2007]: 2026-04-23T23:15:44.586571Z INFO Daemon Daemon Detect protocol endpoint Apr 23 23:15:44.590907 waagent[2007]: 2026-04-23T23:15:44.590860Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 23 23:15:44.595991 waagent[2007]: 2026-04-23T23:15:44.595956Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 23 23:15:44.601706 waagent[2007]: 2026-04-23T23:15:44.601664Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 23 23:15:44.606251 waagent[2007]: 2026-04-23T23:15:44.606217Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 23 23:15:44.611191 waagent[2007]: 2026-04-23T23:15:44.611159Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 23 23:15:44.653078 waagent[2007]: 2026-04-23T23:15:44.653002Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 23 23:15:44.658194 waagent[2007]: 2026-04-23T23:15:44.658170Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 23 23:15:44.662237 waagent[2007]: 2026-04-23T23:15:44.662208Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 23 23:15:44.773586 waagent[2007]: 2026-04-23T23:15:44.773497Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 23 23:15:44.778757 waagent[2007]: 2026-04-23T23:15:44.778710Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 23 23:15:44.786365 waagent[2007]: 2026-04-23T23:15:44.786328Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 23 23:15:44.805918 waagent[2007]: 2026-04-23T23:15:44.805884Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Apr 23 23:15:44.810818 waagent[2007]: 2026-04-23T23:15:44.810786Z INFO Daemon Apr 23 23:15:44.813005 waagent[2007]: 2026-04-23T23:15:44.812975Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4e8574d2-83ff-4cf0-a1b4-3813f2c708e5 eTag: 3013227259352177607 source: Fabric] Apr 23 23:15:44.821996 waagent[2007]: 2026-04-23T23:15:44.821961Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 23 23:15:44.826959 waagent[2007]: 2026-04-23T23:15:44.826928Z INFO Daemon Apr 23 23:15:44.829122 waagent[2007]: 2026-04-23T23:15:44.829096Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 23 23:15:44.837273 waagent[2007]: 2026-04-23T23:15:44.837245Z INFO Daemon Daemon Downloading artifacts profile blob Apr 23 23:15:44.901636 waagent[2007]: 2026-04-23T23:15:44.901560Z INFO Daemon Downloaded certificate {'thumbprint': '1E7B25954F84085A2883DA0F6E023939C00B41F2', 'hasPrivateKey': True} Apr 23 23:15:44.909402 waagent[2007]: 2026-04-23T23:15:44.909332Z INFO Daemon Fetch goal state completed Apr 23 23:15:44.918717 waagent[2007]: 2026-04-23T23:15:44.918672Z INFO Daemon Daemon Starting provisioning Apr 23 23:15:44.922365 waagent[2007]: 2026-04-23T23:15:44.922333Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 23 23:15:44.925777 waagent[2007]: 2026-04-23T23:15:44.925750Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-357a044314] Apr 23 23:15:44.932228 waagent[2007]: 2026-04-23T23:15:44.932182Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-357a044314] Apr 23 23:15:44.936899 waagent[2007]: 2026-04-23T23:15:44.936861Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 23 23:15:44.941537 waagent[2007]: 2026-04-23T23:15:44.941505Z INFO Daemon Daemon Primary interface is [eth0] Apr 23 23:15:44.951392 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:15:44.951401 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:15:44.951456 systemd-networkd[1464]: eth0: DHCP lease lost Apr 23 23:15:44.952591 waagent[2007]: 2026-04-23T23:15:44.952542Z INFO Daemon Daemon Create user account if not exists Apr 23 23:15:44.956672 waagent[2007]: 2026-04-23T23:15:44.956638Z INFO Daemon Daemon User core already exists, skip useradd Apr 23 23:15:44.960749 waagent[2007]: 2026-04-23T23:15:44.960719Z INFO Daemon Daemon Configure sudoer Apr 23 23:15:44.964778 waagent[2007]: 2026-04-23T23:15:44.964743Z INFO Daemon Daemon Configure sshd Apr 23 23:15:44.968292 waagent[2007]: 2026-04-23T23:15:44.968249Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 23 23:15:44.977411 waagent[2007]: 2026-04-23T23:15:44.977377Z INFO Daemon Daemon Deploy ssh public key. 
Apr 23 23:15:44.997717 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 23 23:15:46.052956 waagent[2007]: 2026-04-23T23:15:46.052898Z INFO Daemon Daemon Provisioning complete Apr 23 23:15:46.067434 waagent[2007]: 2026-04-23T23:15:46.067395Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 23 23:15:46.073042 waagent[2007]: 2026-04-23T23:15:46.073004Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 23 23:15:46.081508 waagent[2007]: 2026-04-23T23:15:46.081476Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Apr 23 23:15:46.181712 waagent[2110]: 2026-04-23T23:15:46.180900Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Apr 23 23:15:46.181712 waagent[2110]: 2026-04-23T23:15:46.181028Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Apr 23 23:15:46.181712 waagent[2110]: 2026-04-23T23:15:46.181065Z INFO ExtHandler ExtHandler Python: 3.11.13 Apr 23 23:15:46.181712 waagent[2110]: 2026-04-23T23:15:46.181098Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Apr 23 23:15:46.217141 waagent[2110]: 2026-04-23T23:15:46.217078Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Apr 23 23:15:46.217434 waagent[2110]: 2026-04-23T23:15:46.217405Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 23 23:15:46.217566 waagent[2110]: 2026-04-23T23:15:46.217542Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 23 23:15:46.222902 waagent[2110]: 2026-04-23T23:15:46.222856Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 23 23:15:46.227713 waagent[2110]: 2026-04-23T23:15:46.227664Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 23 
23:15:46.228206 waagent[2110]: 2026-04-23T23:15:46.228175Z INFO ExtHandler Apr 23 23:15:46.228330 waagent[2110]: 2026-04-23T23:15:46.228306Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1f4955a2-3421-4284-8655-e6c796931d54 eTag: 3013227259352177607 source: Fabric] Apr 23 23:15:46.228623 waagent[2110]: 2026-04-23T23:15:46.228596Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 23 23:15:46.229165 waagent[2110]: 2026-04-23T23:15:46.229134Z INFO ExtHandler Apr 23 23:15:46.229287 waagent[2110]: 2026-04-23T23:15:46.229266Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 23 23:15:46.233411 waagent[2110]: 2026-04-23T23:15:46.233381Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 23 23:15:46.284924 waagent[2110]: 2026-04-23T23:15:46.284857Z INFO ExtHandler Downloaded certificate {'thumbprint': '1E7B25954F84085A2883DA0F6E023939C00B41F2', 'hasPrivateKey': True} Apr 23 23:15:46.285577 waagent[2110]: 2026-04-23T23:15:46.285539Z INFO ExtHandler Fetch goal state completed Apr 23 23:15:46.297663 waagent[2110]: 2026-04-23T23:15:46.297622Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Apr 23 23:15:46.301289 waagent[2110]: 2026-04-23T23:15:46.301245Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2110 Apr 23 23:15:46.301537 waagent[2110]: 2026-04-23T23:15:46.301479Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 23 23:15:46.301886 waagent[2110]: 2026-04-23T23:15:46.301855Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Apr 23 23:15:46.303125 waagent[2110]: 2026-04-23T23:15:46.303053Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Apr 23 23:15:46.303523 waagent[2110]: 
2026-04-23T23:15:46.303486Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Apr 23 23:15:46.303743 waagent[2110]: 2026-04-23T23:15:46.303716Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Apr 23 23:15:46.304237 waagent[2110]: 2026-04-23T23:15:46.304203Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 23 23:15:46.364725 waagent[2110]: 2026-04-23T23:15:46.364671Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 23 23:15:46.365046 waagent[2110]: 2026-04-23T23:15:46.365015Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 23 23:15:46.369619 waagent[2110]: 2026-04-23T23:15:46.369592Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 23 23:15:46.374010 systemd[1]: Reload requested from client PID 2125 ('systemctl') (unit waagent.service)... Apr 23 23:15:46.374214 systemd[1]: Reloading... Apr 23 23:15:46.444725 zram_generator::config[2168]: No configuration found. Apr 23 23:15:46.600334 systemd[1]: Reloading finished in 225 ms. Apr 23 23:15:46.613706 waagent[2110]: 2026-04-23T23:15:46.611779Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 23 23:15:46.613706 waagent[2110]: 2026-04-23T23:15:46.611923Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 23 23:15:47.165262 waagent[2110]: 2026-04-23T23:15:47.165193Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 23 23:15:47.165538 waagent[2110]: 2026-04-23T23:15:47.165506Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Apr 23 23:15:47.166196 waagent[2110]: 2026-04-23T23:15:47.166153Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 23 23:15:47.166475 waagent[2110]: 2026-04-23T23:15:47.166437Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 23 23:15:47.167426 waagent[2110]: 2026-04-23T23:15:47.166818Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 23 23:15:47.167426 waagent[2110]: 2026-04-23T23:15:47.166879Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 23 23:15:47.167426 waagent[2110]: 2026-04-23T23:15:47.166984Z INFO EnvHandler ExtHandler Configure routes Apr 23 23:15:47.167426 waagent[2110]: 2026-04-23T23:15:47.167022Z INFO EnvHandler ExtHandler Gateway:None Apr 23 23:15:47.167426 waagent[2110]: 2026-04-23T23:15:47.167046Z INFO EnvHandler ExtHandler Routes:None Apr 23 23:15:47.167533 waagent[2110]: 2026-04-23T23:15:47.166662Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 23 23:15:47.167660 waagent[2110]: 2026-04-23T23:15:47.167634Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 23 23:15:47.167916 waagent[2110]: 2026-04-23T23:15:47.167886Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 23 23:15:47.168033 waagent[2110]: 2026-04-23T23:15:47.168007Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 23 23:15:47.168226 waagent[2110]: 2026-04-23T23:15:47.168202Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 23 23:15:47.168226 waagent[2110]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 23 23:15:47.168226 waagent[2110]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 23 23:15:47.168226 waagent[2110]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 23 23:15:47.168226 waagent[2110]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 23 23:15:47.168226 waagent[2110]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 23 23:15:47.168226 waagent[2110]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 23 23:15:47.168740 waagent[2110]: 2026-04-23T23:15:47.168710Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 23 23:15:47.169239 waagent[2110]: 2026-04-23T23:15:47.169198Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 23 23:15:47.169342 waagent[2110]: 2026-04-23T23:15:47.169309Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Apr 23 23:15:47.170581 waagent[2110]: 2026-04-23T23:15:47.170536Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 23 23:15:47.176105 waagent[2110]: 2026-04-23T23:15:47.175886Z INFO ExtHandler ExtHandler Apr 23 23:15:47.176105 waagent[2110]: 2026-04-23T23:15:47.175946Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9cd94147-3787-4a7f-b330-9ad18fb3a8c7 correlation 98662d35-e80c-4d8a-8c80-768ef2d4d5fb created: 2026-04-23T23:14:47.281357Z] Apr 23 23:15:47.177315 waagent[2110]: 2026-04-23T23:15:47.177269Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 23 23:15:47.178419 waagent[2110]: 2026-04-23T23:15:47.178380Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 23 23:15:47.207887 waagent[2110]: 2026-04-23T23:15:47.207845Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Apr 23 23:15:47.207887 waagent[2110]: Try `iptables -h' or 'iptables --help' for more information.) 
Apr 23 23:15:47.208579 waagent[2110]: 2026-04-23T23:15:47.208550Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 486823B0-9350-4162-B9B2-1F8FD88F7A99;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Apr 23 23:15:47.235245 waagent[2110]: 2026-04-23T23:15:47.235189Z INFO MonitorHandler ExtHandler Network interfaces: Apr 23 23:15:47.235245 waagent[2110]: Executing ['ip', '-a', '-o', 'link']: Apr 23 23:15:47.235245 waagent[2110]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 23 23:15:47.235245 waagent[2110]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b3:9b:62 brd ff:ff:ff:ff:ff:ff Apr 23 23:15:47.235245 waagent[2110]: 3: enP32372s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b3:9b:62 brd ff:ff:ff:ff:ff:ff\ altname enP32372p0s2 Apr 23 23:15:47.235245 waagent[2110]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 23 23:15:47.235245 waagent[2110]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 23 23:15:47.235245 waagent[2110]: 2: eth0 inet 10.0.0.29/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 23 23:15:47.235245 waagent[2110]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 23 23:15:47.235245 waagent[2110]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 23 23:15:47.235245 waagent[2110]: 2: eth0 inet6 fe80::222:48ff:feb3:9b62/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 23 23:15:47.276759 waagent[2110]: 2026-04-23T23:15:47.276706Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Apr 23 23:15:47.276759 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 23 
23:15:47.276759 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.276759 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 23 23:15:47.276759 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.276759 waagent[2110]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 23 23:15:47.276759 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.276759 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 23 23:15:47.276759 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 23 23:15:47.276759 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 23 23:15:47.279748 waagent[2110]: 2026-04-23T23:15:47.279461Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 23 23:15:47.279748 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 23 23:15:47.279748 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.279748 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 23 23:15:47.279748 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.279748 waagent[2110]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Apr 23 23:15:47.279748 waagent[2110]: pkts bytes target prot opt in out source destination Apr 23 23:15:47.279748 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 23 23:15:47.279748 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 23 23:15:47.279748 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 23 23:15:47.279748 waagent[2110]: 2026-04-23T23:15:47.279694Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 23 23:15:53.512873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 23 23:15:53.514529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:15:53.625867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:15:53.631992 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:15:53.714359 kubelet[2259]: E0423 23:15:53.714292 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:15:53.717254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:15:53.717367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:15:53.717670 systemd[1]: kubelet.service: Consumed 164ms CPU time, 104.5M memory peak. Apr 23 23:16:03.794854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 23 23:16:03.796121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:16:04.125487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:16:04.135059 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:16:04.162822 kubelet[2273]: E0423 23:16:04.162778 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:16:04.165124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:16:04.165237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:16:04.165777 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.4M memory peak. Apr 23 23:16:05.454850 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 23 23:16:05.456050 systemd[1]: Started sshd@0-10.0.0.29:22-50.85.169.122:42104.service - OpenSSH per-connection server daemon (50.85.169.122:42104). Apr 23 23:16:05.769237 chronyd[1846]: Selected source PHC0 Apr 23 23:16:06.384934 sshd[2282]: Accepted publickey for core from 50.85.169.122 port 42104 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:16:06.386027 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:16:06.390275 systemd-logind[1874]: New session 3 of user core. Apr 23 23:16:06.396897 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 23 23:16:06.992208 systemd[1]: Started sshd@1-10.0.0.29:22-50.85.169.122:42112.service - OpenSSH per-connection server daemon (50.85.169.122:42112). 
Apr 23 23:16:07.762715 sshd[2288]: Accepted publickey for core from 50.85.169.122 port 42112 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:16:07.763725 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:16:07.767162 systemd-logind[1874]: New session 4 of user core. Apr 23 23:16:07.776805 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 23 23:16:08.208210 sshd[2291]: Connection closed by 50.85.169.122 port 42112 Apr 23 23:16:08.208808 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Apr 23 23:16:08.212355 systemd[1]: sshd@1-10.0.0.29:22-50.85.169.122:42112.service: Deactivated successfully. Apr 23 23:16:08.214086 systemd[1]: session-4.scope: Deactivated successfully. Apr 23 23:16:08.215326 systemd-logind[1874]: Session 4 logged out. Waiting for processes to exit. Apr 23 23:16:08.216337 systemd-logind[1874]: Removed session 4. Apr 23 23:16:08.355827 systemd[1]: Started sshd@2-10.0.0.29:22-50.85.169.122:42118.service - OpenSSH per-connection server daemon (50.85.169.122:42118). Apr 23 23:16:09.106708 sshd[2297]: Accepted publickey for core from 50.85.169.122 port 42118 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:16:09.107751 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:16:09.111307 systemd-logind[1874]: New session 5 of user core. Apr 23 23:16:09.117979 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 23 23:16:09.535524 sshd[2300]: Connection closed by 50.85.169.122 port 42118 Apr 23 23:16:09.536104 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Apr 23 23:16:09.540003 systemd[1]: sshd@2-10.0.0.29:22-50.85.169.122:42118.service: Deactivated successfully. Apr 23 23:16:09.541402 systemd[1]: session-5.scope: Deactivated successfully. Apr 23 23:16:09.542162 systemd-logind[1874]: Session 5 logged out. 
Waiting for processes to exit. Apr 23 23:16:09.543136 systemd-logind[1874]: Removed session 5. Apr 23 23:16:09.693231 systemd[1]: Started sshd@3-10.0.0.29:22-50.85.169.122:50570.service - OpenSSH per-connection server daemon (50.85.169.122:50570). Apr 23 23:16:10.440751 sshd[2306]: Accepted publickey for core from 50.85.169.122 port 50570 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:16:10.441906 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:16:10.445351 systemd-logind[1874]: New session 6 of user core. Apr 23 23:16:10.455998 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 23 23:16:10.872016 sshd[2309]: Connection closed by 50.85.169.122 port 50570 Apr 23 23:16:10.872565 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Apr 23 23:16:10.875997 systemd[1]: sshd@3-10.0.0.29:22-50.85.169.122:50570.service: Deactivated successfully. Apr 23 23:16:10.877633 systemd[1]: session-6.scope: Deactivated successfully. Apr 23 23:16:10.879223 systemd-logind[1874]: Session 6 logged out. Waiting for processes to exit. Apr 23 23:16:10.880456 systemd-logind[1874]: Removed session 6. Apr 23 23:16:11.026181 systemd[1]: Started sshd@4-10.0.0.29:22-50.85.169.122:50582.service - OpenSSH per-connection server daemon (50.85.169.122:50582). Apr 23 23:16:11.775712 sshd[2315]: Accepted publickey for core from 50.85.169.122 port 50582 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:16:11.776443 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:16:11.780289 systemd-logind[1874]: New session 7 of user core. Apr 23 23:16:11.786802 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 23 23:16:12.195555 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 23 23:16:12.195808 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 23 23:16:12.225245 sudo[2319]: pam_unix(sudo:session): session closed for user root
Apr 23 23:16:12.368693 sshd[2318]: Connection closed by 50.85.169.122 port 50582
Apr 23 23:16:12.369406 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
Apr 23 23:16:12.373249 systemd[1]: sshd@4-10.0.0.29:22-50.85.169.122:50582.service: Deactivated successfully.
Apr 23 23:16:12.374575 systemd[1]: session-7.scope: Deactivated successfully.
Apr 23 23:16:12.375809 systemd-logind[1874]: Session 7 logged out. Waiting for processes to exit.
Apr 23 23:16:12.376775 systemd-logind[1874]: Removed session 7.
Apr 23 23:16:12.517492 systemd[1]: Started sshd@5-10.0.0.29:22-50.85.169.122:50592.service - OpenSSH per-connection server daemon (50.85.169.122:50592).
Apr 23 23:16:13.262997 sshd[2325]: Accepted publickey for core from 50.85.169.122 port 50592 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:16:13.264110 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:16:13.268003 systemd-logind[1874]: New session 8 of user core.
Apr 23 23:16:13.273804 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 23 23:16:13.551387 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 23 23:16:13.551601 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 23 23:16:13.554393 sudo[2330]: pam_unix(sudo:session): session closed for user root
Apr 23 23:16:13.557998 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 23 23:16:13.558191 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 23 23:16:13.565611 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 23 23:16:13.591264 augenrules[2352]: No rules
Apr 23 23:16:13.592375 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 23 23:16:13.592535 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 23 23:16:13.595086 sudo[2329]: pam_unix(sudo:session): session closed for user root
Apr 23 23:16:13.738066 sshd[2328]: Connection closed by 50.85.169.122 port 50592
Apr 23 23:16:13.737970 sshd-session[2325]: pam_unix(sshd:session): session closed for user core
Apr 23 23:16:13.741169 systemd[1]: sshd@5-10.0.0.29:22-50.85.169.122:50592.service: Deactivated successfully.
Apr 23 23:16:13.742962 systemd[1]: session-8.scope: Deactivated successfully.
Apr 23 23:16:13.744811 systemd-logind[1874]: Session 8 logged out. Waiting for processes to exit.
Apr 23 23:16:13.745599 systemd-logind[1874]: Removed session 8.
Apr 23 23:16:13.896055 systemd[1]: Started sshd@6-10.0.0.29:22-50.85.169.122:50596.service - OpenSSH per-connection server daemon (50.85.169.122:50596).
Apr 23 23:16:14.292098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 23 23:16:14.294093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:16:14.702770 sshd[2361]: Accepted publickey for core from 50.85.169.122 port 50596 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:16:14.703400 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:16:14.707907 systemd-logind[1874]: New session 9 of user core.
Apr 23 23:16:14.712818 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 23 23:16:14.727473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:16:14.735950 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:16:14.760707 kubelet[2373]: E0423 23:16:14.760623 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:16:14.762905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:16:14.763126 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:16:14.763702 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.1M memory peak.
Apr 23 23:16:14.949962 sudo[2379]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 23 23:16:14.950165 sudo[2379]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 23 23:16:16.477590 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 23 23:16:16.487976 (dockerd)[2396]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 23 23:16:17.655449 dockerd[2396]: time="2026-04-23T23:16:17.655385911Z" level=info msg="Starting up"
Apr 23 23:16:17.656106 dockerd[2396]: time="2026-04-23T23:16:17.656079494Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 23 23:16:17.664089 dockerd[2396]: time="2026-04-23T23:16:17.664055428Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 23 23:16:17.775574 systemd[1]: var-lib-docker-metacopy\x2dcheck2565877617-merged.mount: Deactivated successfully.
Apr 23 23:16:17.792483 dockerd[2396]: time="2026-04-23T23:16:17.792443740Z" level=info msg="Loading containers: start."
Apr 23 23:16:17.832711 kernel: Initializing XFRM netlink socket
Apr 23 23:16:18.170492 systemd-networkd[1464]: docker0: Link UP
Apr 23 23:16:18.186508 dockerd[2396]: time="2026-04-23T23:16:18.186418917Z" level=info msg="Loading containers: done."
Apr 23 23:16:18.196553 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck225781177-merged.mount: Deactivated successfully.
Apr 23 23:16:18.206722 dockerd[2396]: time="2026-04-23T23:16:18.206487920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 23 23:16:18.206722 dockerd[2396]: time="2026-04-23T23:16:18.206575235Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 23 23:16:18.206722 dockerd[2396]: time="2026-04-23T23:16:18.206655629Z" level=info msg="Initializing buildkit"
Apr 23 23:16:18.251602 dockerd[2396]: time="2026-04-23T23:16:18.251552768Z" level=info msg="Completed buildkit initialization"
Apr 23 23:16:18.257161 dockerd[2396]: time="2026-04-23T23:16:18.257115494Z" level=info msg="Daemon has completed initialization"
Apr 23 23:16:18.257253 dockerd[2396]: time="2026-04-23T23:16:18.257170544Z" level=info msg="API listen on /run/docker.sock"
Apr 23 23:16:18.257509 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 23 23:16:18.588868 containerd[1896]: time="2026-04-23T23:16:18.588830275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 23 23:16:19.380614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299422771.mount: Deactivated successfully.
Apr 23 23:16:20.850669 containerd[1896]: time="2026-04-23T23:16:20.850064735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:20.855541 containerd[1896]: time="2026-04-23T23:16:20.855513354Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787"
Apr 23 23:16:20.859517 containerd[1896]: time="2026-04-23T23:16:20.859493557Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:20.865948 containerd[1896]: time="2026-04-23T23:16:20.865920096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:20.866387 containerd[1896]: time="2026-04-23T23:16:20.866359134Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.277494706s"
Apr 23 23:16:20.866434 containerd[1896]: time="2026-04-23T23:16:20.866391623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 23 23:16:20.867111 containerd[1896]: time="2026-04-23T23:16:20.866931385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 23 23:16:22.678086 containerd[1896]: time="2026-04-23T23:16:22.678036341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:22.680783 containerd[1896]: time="2026-04-23T23:16:22.680736582Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774"
Apr 23 23:16:22.685695 containerd[1896]: time="2026-04-23T23:16:22.685128827Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:22.689097 containerd[1896]: time="2026-04-23T23:16:22.689057475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:22.689742 containerd[1896]: time="2026-04-23T23:16:22.689498934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.82254326s"
Apr 23 23:16:22.689742 containerd[1896]: time="2026-04-23T23:16:22.689529416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 23 23:16:22.690313 containerd[1896]: time="2026-04-23T23:16:22.690288298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 23 23:16:24.182500 containerd[1896]: time="2026-04-23T23:16:24.182437307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:24.185981 containerd[1896]: time="2026-04-23T23:16:24.185950681Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358"
Apr 23 23:16:24.188639 containerd[1896]: time="2026-04-23T23:16:24.188612240Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:24.195197 containerd[1896]: time="2026-04-23T23:16:24.195169629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:24.196174 containerd[1896]: time="2026-04-23T23:16:24.196149769Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.505832918s"
Apr 23 23:16:24.196200 containerd[1896]: time="2026-04-23T23:16:24.196180691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 23 23:16:24.196776 containerd[1896]: time="2026-04-23T23:16:24.196757549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 23 23:16:24.791924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 23 23:16:24.794137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:16:24.897886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:16:24.906925 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:16:25.003362 kubelet[2678]: E0423 23:16:25.002558 2678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:16:25.006164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:16:25.006275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:16:25.006748 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.6M memory peak.
Apr 23 23:16:25.267663 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 23 23:16:25.577366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014664215.mount: Deactivated successfully.
Apr 23 23:16:25.845450 containerd[1896]: time="2026-04-23T23:16:25.844749681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:25.848826 containerd[1896]: time="2026-04-23T23:16:25.848718059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508"
Apr 23 23:16:25.852146 containerd[1896]: time="2026-04-23T23:16:25.852120267Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:25.856346 containerd[1896]: time="2026-04-23T23:16:25.856319679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:25.856914 containerd[1896]: time="2026-04-23T23:16:25.856712841Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.659931828s"
Apr 23 23:16:25.856914 containerd[1896]: time="2026-04-23T23:16:25.856839583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 23 23:16:25.857520 containerd[1896]: time="2026-04-23T23:16:25.857371446Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 23 23:16:26.551848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710659244.mount: Deactivated successfully.
Apr 23 23:16:27.647512 update_engine[1875]: I20260423 23:16:27.647435 1875 update_attempter.cc:509] Updating boot flags...
Apr 23 23:16:28.172940 containerd[1896]: time="2026-04-23T23:16:28.172877097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:28.176687 containerd[1896]: time="2026-04-23T23:16:28.176649050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 23 23:16:28.179382 containerd[1896]: time="2026-04-23T23:16:28.179355395Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:28.184111 containerd[1896]: time="2026-04-23T23:16:28.184079782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:28.184902 containerd[1896]: time="2026-04-23T23:16:28.184696650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.327284034s"
Apr 23 23:16:28.184902 containerd[1896]: time="2026-04-23T23:16:28.184723171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 23 23:16:28.185270 containerd[1896]: time="2026-04-23T23:16:28.185103276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 23 23:16:28.780641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028707626.mount: Deactivated successfully.
Apr 23 23:16:28.801952 containerd[1896]: time="2026-04-23T23:16:28.801751149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 23 23:16:28.810430 containerd[1896]: time="2026-04-23T23:16:28.810372609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 23 23:16:28.813718 containerd[1896]: time="2026-04-23T23:16:28.813658220Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 23 23:16:28.817323 containerd[1896]: time="2026-04-23T23:16:28.817281243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 23 23:16:28.817791 containerd[1896]: time="2026-04-23T23:16:28.817567741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 632.444887ms"
Apr 23 23:16:28.817791 containerd[1896]: time="2026-04-23T23:16:28.817597502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 23 23:16:28.818018 containerd[1896]: time="2026-04-23T23:16:28.817985379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 23 23:16:29.504826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722694494.mount: Deactivated successfully.
Apr 23 23:16:30.869724 containerd[1896]: time="2026-04-23T23:16:30.869410218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:30.872901 containerd[1896]: time="2026-04-23T23:16:30.872873164Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366"
Apr 23 23:16:30.875959 containerd[1896]: time="2026-04-23T23:16:30.875931977Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:30.881167 containerd[1896]: time="2026-04-23T23:16:30.881135309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:16:30.881843 containerd[1896]: time="2026-04-23T23:16:30.881723360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.063710412s"
Apr 23 23:16:30.881843 containerd[1896]: time="2026-04-23T23:16:30.881758953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 23 23:16:33.217733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:16:33.218178 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.6M memory peak.
Apr 23 23:16:33.219966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:16:33.243365 systemd[1]: Reload requested from client PID 2903 ('systemctl') (unit session-9.scope)...
Apr 23 23:16:33.243500 systemd[1]: Reloading...
Apr 23 23:16:33.335969 zram_generator::config[2959]: No configuration found.
Apr 23 23:16:33.480821 systemd[1]: Reloading finished in 236 ms.
Apr 23 23:16:33.517660 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 23 23:16:33.517763 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 23 23:16:33.517991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:16:33.518053 systemd[1]: kubelet.service: Consumed 69ms CPU time, 93.9M memory peak.
Apr 23 23:16:33.519425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:16:33.727852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:16:33.737913 (kubelet)[3014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 23 23:16:33.873250 kubelet[3014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 23:16:33.874705 kubelet[3014]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 23:16:33.874705 kubelet[3014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 23:16:33.874705 kubelet[3014]: I0423 23:16:33.873643 3014 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 23:16:34.100116 kubelet[3014]: I0423 23:16:34.100079 3014 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 23 23:16:34.100116 kubelet[3014]: I0423 23:16:34.100108 3014 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 23:16:34.100323 kubelet[3014]: I0423 23:16:34.100304 3014 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 23 23:16:34.120886 kubelet[3014]: E0423 23:16:34.120727 3014 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 23 23:16:34.121965 kubelet[3014]: I0423 23:16:34.121940 3014 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 23 23:16:34.127868 kubelet[3014]: I0423 23:16:34.127843 3014 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 23:16:34.130918 kubelet[3014]: I0423 23:16:34.130896 3014 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 23 23:16:34.131916 kubelet[3014]: I0423 23:16:34.131877 3014 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 23:16:34.132033 kubelet[3014]: I0423 23:16:34.131917 3014 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-357a044314","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 23:16:34.132112 kubelet[3014]: I0423 23:16:34.132034 3014 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 23:16:34.132112 kubelet[3014]: I0423 23:16:34.132042 3014 container_manager_linux.go:303] "Creating device plugin manager"
Apr 23 23:16:34.132773 kubelet[3014]: I0423 23:16:34.132755 3014 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 23:16:34.135187 kubelet[3014]: I0423 23:16:34.135168 3014 kubelet.go:480] "Attempting to sync node with API server"
Apr 23 23:16:34.135215 kubelet[3014]: I0423 23:16:34.135208 3014 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 23 23:16:34.135236 kubelet[3014]: I0423 23:16:34.135230 3014 kubelet.go:386] "Adding apiserver pod source"
Apr 23 23:16:34.136714 kubelet[3014]: I0423 23:16:34.136408 3014 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 23 23:16:34.140242 kubelet[3014]: E0423 23:16:34.140213 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-357a044314&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 23:16:34.142426 kubelet[3014]: E0423 23:16:34.141830 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 23:16:34.142426 kubelet[3014]: I0423 23:16:34.141947 3014 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 23 23:16:34.142426 kubelet[3014]: I0423 23:16:34.142327 3014 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 23 23:16:34.142426 kubelet[3014]: W0423 23:16:34.142371 3014 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 23 23:16:34.144494 kubelet[3014]: I0423 23:16:34.144475 3014 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 23 23:16:34.144630 kubelet[3014]: I0423 23:16:34.144620 3014 server.go:1289] "Started kubelet"
Apr 23 23:16:34.145349 kubelet[3014]: I0423 23:16:34.145306 3014 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 23 23:16:34.147267 kubelet[3014]: I0423 23:16:34.146286 3014 server.go:317] "Adding debug handlers to kubelet server"
Apr 23 23:16:34.148889 kubelet[3014]: I0423 23:16:34.148843 3014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 23 23:16:34.149214 kubelet[3014]: I0423 23:16:34.149200 3014 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 23 23:16:34.150672 kubelet[3014]: E0423 23:16:34.149846 3014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-357a044314.18a91f8758c31ab7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-357a044314,UID:ci-4459.2.4-n-357a044314,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-357a044314,},FirstTimestamp:2026-04-23 23:16:34.144598711 +0000 UTC m=+0.403318083,LastTimestamp:2026-04-23 23:16:34.144598711 +0000 UTC m=+0.403318083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-357a044314,}"
Apr 23 23:16:34.153584 kubelet[3014]: I0423 23:16:34.153557 3014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 23 23:16:34.156112 kubelet[3014]: E0423 23:16:34.153794 3014 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 23 23:16:34.156112 kubelet[3014]: I0423 23:16:34.153896 3014 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 23 23:16:34.156112 kubelet[3014]: E0423 23:16:34.154916 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-357a044314\" not found"
Apr 23 23:16:34.156112 kubelet[3014]: I0423 23:16:34.154942 3014 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 23 23:16:34.156112 kubelet[3014]: I0423 23:16:34.155099 3014 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 23 23:16:34.156112 kubelet[3014]: I0423 23:16:34.155141 3014 reconciler.go:26] "Reconciler: start to sync state"
Apr 23 23:16:34.156112 kubelet[3014]: E0423 23:16:34.155403 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 23:16:34.156112 kubelet[3014]: E0423 23:16:34.155572 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-357a044314?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="200ms"
Apr 23 23:16:34.156475 kubelet[3014]: I0423 23:16:34.156458 3014 factory.go:223] Registration of the systemd container factory successfully
Apr 23 23:16:34.156612 kubelet[3014]: I0423 23:16:34.156597 3014 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 23 23:16:34.158025 kubelet[3014]: I0423 23:16:34.158011 3014 factory.go:223] Registration of the containerd container factory successfully
Apr 23 23:16:34.183763 kubelet[3014]: I0423 23:16:34.183744 3014 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 23 23:16:34.183946 kubelet[3014]: I0423 23:16:34.183937 3014 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 23 23:16:34.184007 kubelet[3014]: I0423 23:16:34.184000 3014 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 23:16:34.255472 kubelet[3014]: E0423 23:16:34.255428 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-357a044314\" not found"
Apr 23 23:16:34.258518 kubelet[3014]: I0423 23:16:34.258503 3014 policy_none.go:49] "None policy: Start"
Apr 23 23:16:34.258662 kubelet[3014]: I0423 23:16:34.258597 3014 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 23 23:16:34.258662 kubelet[3014]: I0423 23:16:34.258614 3014 state_mem.go:35] "Initializing new in-memory state store"
Apr 23 23:16:34.268934 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 23 23:16:34.277220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 23 23:16:34.280347 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 23 23:16:34.288712 kubelet[3014]: E0423 23:16:34.288673 3014 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:16:34.288871 kubelet[3014]: I0423 23:16:34.288851 3014 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:16:34.288893 kubelet[3014]: I0423 23:16:34.288870 3014 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:16:34.289848 kubelet[3014]: I0423 23:16:34.289331 3014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:16:34.292321 kubelet[3014]: E0423 23:16:34.291876 3014 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:16:34.292321 kubelet[3014]: E0423 23:16:34.291937 3014 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-357a044314\" not found" Apr 23 23:16:34.294001 kubelet[3014]: I0423 23:16:34.293969 3014 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 23:16:34.295343 kubelet[3014]: I0423 23:16:34.295312 3014 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 23:16:34.295343 kubelet[3014]: I0423 23:16:34.295343 3014 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 23:16:34.295417 kubelet[3014]: I0423 23:16:34.295360 3014 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 23 23:16:34.295417 kubelet[3014]: I0423 23:16:34.295364 3014 kubelet.go:2436] "Starting kubelet main sync loop" Apr 23 23:16:34.295417 kubelet[3014]: E0423 23:16:34.295397 3014 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 23:16:34.296192 kubelet[3014]: E0423 23:16:34.296156 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 23:16:34.356635 kubelet[3014]: E0423 23:16:34.356504 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-357a044314?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="400ms" Apr 23 23:16:34.390828 kubelet[3014]: I0423 23:16:34.390759 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.391164 kubelet[3014]: E0423 23:16:34.391134 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.407896 systemd[1]: Created slice kubepods-burstable-poda5ea112e66090ffabe54a050ff14dedb.slice - libcontainer container kubepods-burstable-poda5ea112e66090ffabe54a050ff14dedb.slice. 
Apr 23 23:16:34.418589 kubelet[3014]: E0423 23:16:34.418562 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.421751 systemd[1]: Created slice kubepods-burstable-pod0b8e35ede8507a6e85301a9c05532123.slice - libcontainer container kubepods-burstable-pod0b8e35ede8507a6e85301a9c05532123.slice. Apr 23 23:16:34.424069 kubelet[3014]: E0423 23:16:34.424048 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.425549 systemd[1]: Created slice kubepods-burstable-pod6ca18e2b052f1671b0f07d80c6ebf7e4.slice - libcontainer container kubepods-burstable-pod6ca18e2b052f1671b0f07d80c6ebf7e4.slice. Apr 23 23:16:34.427246 kubelet[3014]: E0423 23:16:34.427224 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456791 kubelet[3014]: I0423 23:16:34.456749 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456857 kubelet[3014]: I0423 23:16:34.456801 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 
23:16:34.456857 kubelet[3014]: I0423 23:16:34.456814 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456857 kubelet[3014]: I0423 23:16:34.456829 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456857 kubelet[3014]: I0423 23:16:34.456838 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456857 kubelet[3014]: I0423 23:16:34.456846 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456964 kubelet[3014]: I0423 23:16:34.456856 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-kubeconfig\") pod 
\"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456964 kubelet[3014]: I0423 23:16:34.456876 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.456964 kubelet[3014]: I0423 23:16:34.456888 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ca18e2b052f1671b0f07d80c6ebf7e4-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-357a044314\" (UID: \"6ca18e2b052f1671b0f07d80c6ebf7e4\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:34.593338 kubelet[3014]: I0423 23:16:34.593029 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.593338 kubelet[3014]: E0423 23:16:34.593312 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.719753 containerd[1896]: time="2026-04-23T23:16:34.719629874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-357a044314,Uid:a5ea112e66090ffabe54a050ff14dedb,Namespace:kube-system,Attempt:0,}" Apr 23 23:16:34.725189 containerd[1896]: time="2026-04-23T23:16:34.725156288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-357a044314,Uid:0b8e35ede8507a6e85301a9c05532123,Namespace:kube-system,Attempt:0,}" Apr 23 23:16:34.727873 
containerd[1896]: time="2026-04-23T23:16:34.727844817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-357a044314,Uid:6ca18e2b052f1671b0f07d80c6ebf7e4,Namespace:kube-system,Attempt:0,}" Apr 23 23:16:34.757552 kubelet[3014]: E0423 23:16:34.757517 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-357a044314?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="800ms" Apr 23 23:16:34.782349 containerd[1896]: time="2026-04-23T23:16:34.782308046Z" level=info msg="connecting to shim 55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb" address="unix:///run/containerd/s/cd8e94a959d6c497c7d18335fdb12d59cc85071e6d7dcc6def854746ea246db8" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:34.797409 containerd[1896]: time="2026-04-23T23:16:34.797286252Z" level=info msg="connecting to shim 8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7" address="unix:///run/containerd/s/0a83a0941dd0f0bbeef9d829108b0dcc908e4ca5c915509e50ecd5a66a5844b2" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:34.807895 systemd[1]: Started cri-containerd-55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb.scope - libcontainer container 55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb. Apr 23 23:16:34.820473 containerd[1896]: time="2026-04-23T23:16:34.820440224Z" level=info msg="connecting to shim cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1" address="unix:///run/containerd/s/30aeb890f636ae5f548dee48b3b84a7d67beb8243cd3bfda08b062aa3dc472c7" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:34.831831 systemd[1]: Started cri-containerd-8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7.scope - libcontainer container 8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7. 
Apr 23 23:16:34.856816 systemd[1]: Started cri-containerd-cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1.scope - libcontainer container cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1. Apr 23 23:16:34.864728 containerd[1896]: time="2026-04-23T23:16:34.864694012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-357a044314,Uid:a5ea112e66090ffabe54a050ff14dedb,Namespace:kube-system,Attempt:0,} returns sandbox id \"55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb\"" Apr 23 23:16:34.877675 containerd[1896]: time="2026-04-23T23:16:34.877384158Z" level=info msg="CreateContainer within sandbox \"55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 23 23:16:34.889322 containerd[1896]: time="2026-04-23T23:16:34.889290343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-357a044314,Uid:0b8e35ede8507a6e85301a9c05532123,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7\"" Apr 23 23:16:34.899575 containerd[1896]: time="2026-04-23T23:16:34.899548386Z" level=info msg="CreateContainer within sandbox \"8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 23 23:16:34.903461 containerd[1896]: time="2026-04-23T23:16:34.903427522Z" level=info msg="Container ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:34.916277 containerd[1896]: time="2026-04-23T23:16:34.916244504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-357a044314,Uid:6ca18e2b052f1671b0f07d80c6ebf7e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1\"" Apr 23 
23:16:34.923155 containerd[1896]: time="2026-04-23T23:16:34.923112003Z" level=info msg="Container 348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:34.924502 containerd[1896]: time="2026-04-23T23:16:34.924473392Z" level=info msg="CreateContainer within sandbox \"cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 23 23:16:34.935401 containerd[1896]: time="2026-04-23T23:16:34.935365983Z" level=info msg="CreateContainer within sandbox \"55ca32c7e5e05e4e9bad29ebe25ac570e0dc20793c7f71d3080ac3158444eceb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb\"" Apr 23 23:16:34.936089 containerd[1896]: time="2026-04-23T23:16:34.936062462Z" level=info msg="StartContainer for \"ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb\"" Apr 23 23:16:34.937210 containerd[1896]: time="2026-04-23T23:16:34.937138074Z" level=info msg="connecting to shim ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb" address="unix:///run/containerd/s/cd8e94a959d6c497c7d18335fdb12d59cc85071e6d7dcc6def854746ea246db8" protocol=ttrpc version=3 Apr 23 23:16:34.955956 systemd[1]: Started cri-containerd-ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb.scope - libcontainer container ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb. 
Apr 23 23:16:34.972241 containerd[1896]: time="2026-04-23T23:16:34.972139220Z" level=info msg="CreateContainer within sandbox \"8d0b770e834ca0f6d8452e94c76b4936eda001a57f1c0a7b4aea20ccb30377f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c\"" Apr 23 23:16:34.973471 containerd[1896]: time="2026-04-23T23:16:34.973165134Z" level=info msg="StartContainer for \"348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c\"" Apr 23 23:16:34.976062 containerd[1896]: time="2026-04-23T23:16:34.976033581Z" level=info msg="connecting to shim 348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c" address="unix:///run/containerd/s/0a83a0941dd0f0bbeef9d829108b0dcc908e4ca5c915509e50ecd5a66a5844b2" protocol=ttrpc version=3 Apr 23 23:16:34.978835 containerd[1896]: time="2026-04-23T23:16:34.978799992Z" level=info msg="Container 4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:34.995029 kubelet[3014]: I0423 23:16:34.994996 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.995370 kubelet[3014]: E0423 23:16:34.995340 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:34.996922 systemd[1]: Started cri-containerd-348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c.scope - libcontainer container 348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c. 
Apr 23 23:16:35.000538 containerd[1896]: time="2026-04-23T23:16:35.000399401Z" level=info msg="CreateContainer within sandbox \"cc78647b169670fa2c96c7dbbcd61d735f924b222916c6b5a481432305dc47f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d\"" Apr 23 23:16:35.001056 containerd[1896]: time="2026-04-23T23:16:35.000961299Z" level=info msg="StartContainer for \"4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d\"" Apr 23 23:16:35.003026 containerd[1896]: time="2026-04-23T23:16:35.002793448Z" level=info msg="connecting to shim 4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d" address="unix:///run/containerd/s/30aeb890f636ae5f548dee48b3b84a7d67beb8243cd3bfda08b062aa3dc472c7" protocol=ttrpc version=3 Apr 23 23:16:35.019651 containerd[1896]: time="2026-04-23T23:16:35.019539112Z" level=info msg="StartContainer for \"ade9e16d38346e46d5d694ed5324178e33420db20ca2adb7a605f9f359c747bb\" returns successfully" Apr 23 23:16:35.030956 systemd[1]: Started cri-containerd-4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d.scope - libcontainer container 4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d. 
Apr 23 23:16:35.044407 containerd[1896]: time="2026-04-23T23:16:35.044370267Z" level=info msg="StartContainer for \"348e38199a1f26cac5237dae8771cefa504577f3fb8e6c0c0f2b26270842e34c\" returns successfully" Apr 23 23:16:35.084074 containerd[1896]: time="2026-04-23T23:16:35.084034312Z" level=info msg="StartContainer for \"4caad2cd6f686f15c3c295d143089cddf4b811e9746f71af3adbad4a059ac40d\" returns successfully" Apr 23 23:16:35.305000 kubelet[3014]: E0423 23:16:35.304925 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.307998 kubelet[3014]: E0423 23:16:35.307951 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.309025 kubelet[3014]: E0423 23:16:35.308974 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.797411 kubelet[3014]: I0423 23:16:35.797385 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.914715 kubelet[3014]: E0423 23:16:35.914666 3014 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-357a044314\" not found" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.980344 kubelet[3014]: I0423 23:16:35.980131 3014 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:35.980344 kubelet[3014]: E0423 23:16:35.980169 3014 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.4-n-357a044314\": node \"ci-4459.2.4-n-357a044314\" not found" Apr 23 23:16:36.055948 kubelet[3014]: I0423 23:16:36.055834 3014 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.074367 kubelet[3014]: E0423 23:16:36.074247 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-357a044314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.074367 kubelet[3014]: I0423 23:16:36.074284 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.076070 kubelet[3014]: E0423 23:16:36.076035 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-357a044314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.076070 kubelet[3014]: I0423 23:16:36.076061 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.077476 kubelet[3014]: E0423 23:16:36.077453 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-357a044314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.140645 kubelet[3014]: I0423 23:16:36.140593 3014 apiserver.go:52] "Watching apiserver" Apr 23 23:16:36.155259 kubelet[3014]: I0423 23:16:36.155223 3014 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 23:16:36.309854 kubelet[3014]: I0423 23:16:36.309748 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.309854 kubelet[3014]: I0423 23:16:36.309766 3014 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.312089 kubelet[3014]: E0423 23:16:36.311837 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-357a044314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:36.312296 kubelet[3014]: E0423 23:16:36.312280 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-357a044314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:37.887227 kubelet[3014]: I0423 23:16:37.886971 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:37.895700 kubelet[3014]: I0423 23:16:37.894893 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:38.297712 systemd[1]: Reload requested from client PID 3295 ('systemctl') (unit session-9.scope)... Apr 23 23:16:38.297725 systemd[1]: Reloading... Apr 23 23:16:38.363773 zram_generator::config[3345]: No configuration found. Apr 23 23:16:38.531192 systemd[1]: Reloading finished in 233 ms. Apr 23 23:16:38.551168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:16:38.565981 systemd[1]: kubelet.service: Deactivated successfully. Apr 23 23:16:38.566160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:16:38.566198 systemd[1]: kubelet.service: Consumed 567ms CPU time, 125.7M memory peak. Apr 23 23:16:38.568879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:16:38.695692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:16:38.707987 (kubelet)[3406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:16:38.812722 kubelet[3406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:16:38.813910 kubelet[3406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 23 23:16:38.813910 kubelet[3406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:16:38.814018 kubelet[3406]: I0423 23:16:38.813241 3406 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:16:38.819711 kubelet[3406]: I0423 23:16:38.819490 3406 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 23 23:16:38.819711 kubelet[3406]: I0423 23:16:38.819512 3406 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:16:38.819711 kubelet[3406]: I0423 23:16:38.819651 3406 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:16:38.820755 kubelet[3406]: I0423 23:16:38.820741 3406 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 23 23:16:38.822872 kubelet[3406]: I0423 23:16:38.822439 3406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:16:38.826745 kubelet[3406]: I0423 23:16:38.826722 3406 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Apr 23 23:16:38.831196 kubelet[3406]: I0423 23:16:38.831158 3406 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 23 23:16:38.831500 kubelet[3406]: I0423 23:16:38.831451 3406 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:16:38.831689 kubelet[3406]: I0423 23:16:38.831475 3406 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-357a044314","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Apr 23 23:16:38.831912 kubelet[3406]: I0423 23:16:38.831778 3406 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 23:16:38.831912 kubelet[3406]: I0423 23:16:38.831793 3406 container_manager_linux.go:303] "Creating device plugin manager" Apr 23 23:16:38.832176 kubelet[3406]: I0423 23:16:38.831835 3406 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:16:38.832370 kubelet[3406]: I0423 23:16:38.832315 3406 kubelet.go:480] "Attempting to sync node with API server" Apr 23 23:16:38.832370 kubelet[3406]: I0423 23:16:38.832330 3406 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:16:38.832370 kubelet[3406]: I0423 23:16:38.832352 3406 kubelet.go:386] "Adding apiserver pod source" Apr 23 23:16:38.832535 kubelet[3406]: I0423 23:16:38.832524 3406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:16:38.836212 kubelet[3406]: I0423 23:16:38.835247 3406 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:16:38.836212 kubelet[3406]: I0423 23:16:38.835619 3406 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:16:38.841360 kubelet[3406]: I0423 23:16:38.841343 3406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 23:16:38.841469 kubelet[3406]: I0423 23:16:38.841460 3406 server.go:1289] "Started kubelet" Apr 23 23:16:38.843305 kubelet[3406]: I0423 23:16:38.843291 3406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:16:38.848918 kubelet[3406]: I0423 23:16:38.848894 3406 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 23 23:16:38.849475 kubelet[3406]: I0423 23:16:38.849437 3406 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:16:38.849997 kubelet[3406]: I0423 23:16:38.849976 3406 server.go:317] "Adding debug handlers to kubelet server" Apr 23 23:16:38.851404 kubelet[3406]: I0423 23:16:38.851388 3406 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 23:16:38.851644 kubelet[3406]: E0423 23:16:38.851625 3406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-357a044314\" not found" Apr 23 23:16:38.854793 kubelet[3406]: I0423 23:16:38.854749 3406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:16:38.855380 kubelet[3406]: I0423 23:16:38.854923 3406 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:16:38.855380 kubelet[3406]: I0423 23:16:38.855070 3406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:16:38.857720 kubelet[3406]: I0423 23:16:38.857704 3406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 23:16:38.857895 kubelet[3406]: I0423 23:16:38.857885 3406 reconciler.go:26] "Reconciler: start to sync state" Apr 23 23:16:38.860048 kubelet[3406]: I0423 23:16:38.860029 3406 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 23:16:38.860129 kubelet[3406]: I0423 23:16:38.860121 3406 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 23:16:38.860187 kubelet[3406]: I0423 23:16:38.860181 3406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 23 23:16:38.860225 kubelet[3406]: I0423 23:16:38.860218 3406 kubelet.go:2436] "Starting kubelet main sync loop" Apr 23 23:16:38.860431 kubelet[3406]: E0423 23:16:38.860279 3406 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:16:38.863975 kubelet[3406]: I0423 23:16:38.863951 3406 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:16:38.864044 kubelet[3406]: I0423 23:16:38.864024 3406 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 23:16:38.868946 kubelet[3406]: I0423 23:16:38.868925 3406 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:16:38.874068 kubelet[3406]: E0423 23:16:38.873664 3406 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:16:38.923078 kubelet[3406]: I0423 23:16:38.923029 3406 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:16:38.923078 kubelet[3406]: I0423 23:16:38.923046 3406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:16:38.923078 kubelet[3406]: I0423 23:16:38.923066 3406 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:16:38.923238 kubelet[3406]: I0423 23:16:38.923170 3406 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 23 23:16:38.923238 kubelet[3406]: I0423 23:16:38.923177 3406 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 23 23:16:38.923238 kubelet[3406]: I0423 23:16:38.923196 3406 policy_none.go:49] "None policy: Start" Apr 23 23:16:38.923238 kubelet[3406]: I0423 23:16:38.923203 3406 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 23:16:38.923238 kubelet[3406]: I0423 23:16:38.923210 3406 state_mem.go:35] "Initializing new in-memory state store" Apr 23 23:16:38.923307 kubelet[3406]: I0423 23:16:38.923266 3406 state_mem.go:75] "Updated machine memory state" Apr 23 23:16:38.928831 kubelet[3406]: E0423 23:16:38.928808 3406 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:16:38.928979 kubelet[3406]: I0423 23:16:38.928963 3406 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:16:38.929015 kubelet[3406]: I0423 23:16:38.928979 3406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:16:38.929810 kubelet[3406]: I0423 23:16:38.929792 3406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:16:38.931858 kubelet[3406]: E0423 23:16:38.930321 3406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 23 23:16:38.962357 kubelet[3406]: I0423 23:16:38.962058 3406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:38.962576 kubelet[3406]: I0423 23:16:38.962081 3406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:38.962803 kubelet[3406]: I0423 23:16:38.962789 3406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:38.971903 kubelet[3406]: I0423 23:16:38.971879 3406 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:38.978095 kubelet[3406]: I0423 23:16:38.978065 3406 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:38.978171 kubelet[3406]: E0423 23:16:38.978112 3406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-357a044314\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:38.978484 kubelet[3406]: I0423 23:16:38.978464 3406 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:39.031926 kubelet[3406]: I0423 23:16:39.031664 3406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:39.048615 kubelet[3406]: I0423 23:16:39.048583 3406 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-357a044314" Apr 23 23:16:39.048763 kubelet[3406]: I0423 23:16:39.048672 3406 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4459.2.4-n-357a044314" Apr 23 23:16:39.059751 kubelet[3406]: I0423 23:16:39.058757 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.059751 kubelet[3406]: I0423 23:16:39.058793 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.059751 kubelet[3406]: I0423 23:16:39.058810 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5ea112e66090ffabe54a050ff14dedb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-357a044314\" (UID: \"a5ea112e66090ffabe54a050ff14dedb\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.059751 kubelet[3406]: I0423 23:16:39.058829 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.059751 kubelet[3406]: I0423 23:16:39.058840 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.060162 kubelet[3406]: I0423 23:16:39.058851 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.060162 kubelet[3406]: I0423 23:16:39.058864 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.060162 kubelet[3406]: I0423 23:16:39.058961 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b8e35ede8507a6e85301a9c05532123-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-357a044314\" (UID: \"0b8e35ede8507a6e85301a9c05532123\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.159986 kubelet[3406]: I0423 23:16:39.159856 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ca18e2b052f1671b0f07d80c6ebf7e4-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-357a044314\" (UID: \"6ca18e2b052f1671b0f07d80c6ebf7e4\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.842921 kubelet[3406]: I0423 23:16:39.842877 3406 apiserver.go:52] "Watching 
apiserver" Apr 23 23:16:39.859190 kubelet[3406]: I0423 23:16:39.858576 3406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 23:16:39.901416 kubelet[3406]: I0423 23:16:39.901325 3406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.902191 kubelet[3406]: I0423 23:16:39.902114 3406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.920718 kubelet[3406]: I0423 23:16:39.920469 3406 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:39.920718 kubelet[3406]: I0423 23:16:39.920505 3406 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 23:16:39.920718 kubelet[3406]: E0423 23:16:39.920541 3406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-357a044314\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.920967 kubelet[3406]: E0423 23:16:39.920747 3406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-357a044314\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" Apr 23 23:16:39.921143 kubelet[3406]: I0423 23:16:39.921100 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-357a044314" podStartSLOduration=1.921052276 podStartE2EDuration="1.921052276s" podCreationTimestamp="2026-04-23 23:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:16:39.920925271 +0000 UTC m=+1.208692131" 
watchObservedRunningTime="2026-04-23 23:16:39.921052276 +0000 UTC m=+1.208819136" Apr 23 23:16:39.942773 kubelet[3406]: I0423 23:16:39.942722 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-357a044314" podStartSLOduration=2.942707015 podStartE2EDuration="2.942707015s" podCreationTimestamp="2026-04-23 23:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:16:39.931266859 +0000 UTC m=+1.219033719" watchObservedRunningTime="2026-04-23 23:16:39.942707015 +0000 UTC m=+1.230473875" Apr 23 23:16:39.943662 kubelet[3406]: I0423 23:16:39.943593 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-357a044314" podStartSLOduration=1.942995313 podStartE2EDuration="1.942995313s" podCreationTimestamp="2026-04-23 23:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:16:39.942997577 +0000 UTC m=+1.230764437" watchObservedRunningTime="2026-04-23 23:16:39.942995313 +0000 UTC m=+1.230762181" Apr 23 23:16:43.581131 kubelet[3406]: I0423 23:16:43.580980 3406 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 23 23:16:43.582203 containerd[1896]: time="2026-04-23T23:16:43.581859014Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 23 23:16:43.582422 kubelet[3406]: I0423 23:16:43.582007 3406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 23 23:16:43.708448 systemd[1]: Created slice kubepods-besteffort-pode52f78f7_0605_451b_85a2_ae449682b8b6.slice - libcontainer container kubepods-besteffort-pode52f78f7_0605_451b_85a2_ae449682b8b6.slice. 
Apr 23 23:16:43.782565 kubelet[3406]: I0423 23:16:43.782516 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdskp\" (UniqueName: \"kubernetes.io/projected/e52f78f7-0605-451b-85a2-ae449682b8b6-kube-api-access-qdskp\") pod \"kube-proxy-ztfwh\" (UID: \"e52f78f7-0605-451b-85a2-ae449682b8b6\") " pod="kube-system/kube-proxy-ztfwh" Apr 23 23:16:43.782565 kubelet[3406]: I0423 23:16:43.782563 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e52f78f7-0605-451b-85a2-ae449682b8b6-kube-proxy\") pod \"kube-proxy-ztfwh\" (UID: \"e52f78f7-0605-451b-85a2-ae449682b8b6\") " pod="kube-system/kube-proxy-ztfwh" Apr 23 23:16:43.782565 kubelet[3406]: I0423 23:16:43.782580 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e52f78f7-0605-451b-85a2-ae449682b8b6-lib-modules\") pod \"kube-proxy-ztfwh\" (UID: \"e52f78f7-0605-451b-85a2-ae449682b8b6\") " pod="kube-system/kube-proxy-ztfwh" Apr 23 23:16:43.782774 kubelet[3406]: I0423 23:16:43.782592 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e52f78f7-0605-451b-85a2-ae449682b8b6-xtables-lock\") pod \"kube-proxy-ztfwh\" (UID: \"e52f78f7-0605-451b-85a2-ae449682b8b6\") " pod="kube-system/kube-proxy-ztfwh" Apr 23 23:16:43.887172 kubelet[3406]: E0423 23:16:43.886942 3406 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 23 23:16:43.887172 kubelet[3406]: E0423 23:16:43.886972 3406 projected.go:194] Error preparing data for projected volume kube-api-access-qdskp for pod kube-system/kube-proxy-ztfwh: configmap "kube-root-ca.crt" not found Apr 23 23:16:43.887172 kubelet[3406]: E0423 23:16:43.887020 3406 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e52f78f7-0605-451b-85a2-ae449682b8b6-kube-api-access-qdskp podName:e52f78f7-0605-451b-85a2-ae449682b8b6 nodeName:}" failed. No retries permitted until 2026-04-23 23:16:44.387001573 +0000 UTC m=+5.674768433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qdskp" (UniqueName: "kubernetes.io/projected/e52f78f7-0605-451b-85a2-ae449682b8b6-kube-api-access-qdskp") pod "kube-proxy-ztfwh" (UID: "e52f78f7-0605-451b-85a2-ae449682b8b6") : configmap "kube-root-ca.crt" not found Apr 23 23:16:44.617133 containerd[1896]: time="2026-04-23T23:16:44.616832768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztfwh,Uid:e52f78f7-0605-451b-85a2-ae449682b8b6,Namespace:kube-system,Attempt:0,}" Apr 23 23:16:44.653697 containerd[1896]: time="2026-04-23T23:16:44.653642476Z" level=info msg="connecting to shim 726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42" address="unix:///run/containerd/s/70dd4f2d9e112a947b90fc0263c35ea345974f06dcd66367ec1d699d69ba697e" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:44.674823 systemd[1]: Started cri-containerd-726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42.scope - libcontainer container 726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42. 
Apr 23 23:16:44.696107 containerd[1896]: time="2026-04-23T23:16:44.696072450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztfwh,Uid:e52f78f7-0605-451b-85a2-ae449682b8b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42\"" Apr 23 23:16:44.705609 containerd[1896]: time="2026-04-23T23:16:44.705576192Z" level=info msg="CreateContainer within sandbox \"726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 23 23:16:44.730899 containerd[1896]: time="2026-04-23T23:16:44.730167829Z" level=info msg="Container 4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:44.732915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033994393.mount: Deactivated successfully. Apr 23 23:16:44.749622 containerd[1896]: time="2026-04-23T23:16:44.749564006Z" level=info msg="CreateContainer within sandbox \"726ed6cb292ac2e79a2f98536d632506d76192a229da15f7a85a0959d29a4d42\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298\"" Apr 23 23:16:44.750553 containerd[1896]: time="2026-04-23T23:16:44.750509336Z" level=info msg="StartContainer for \"4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298\"" Apr 23 23:16:44.751559 containerd[1896]: time="2026-04-23T23:16:44.751488364Z" level=info msg="connecting to shim 4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298" address="unix:///run/containerd/s/70dd4f2d9e112a947b90fc0263c35ea345974f06dcd66367ec1d699d69ba697e" protocol=ttrpc version=3 Apr 23 23:16:44.768859 systemd[1]: Started cri-containerd-4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298.scope - libcontainer container 4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298. 
Apr 23 23:16:44.830134 containerd[1896]: time="2026-04-23T23:16:44.829922697Z" level=info msg="StartContainer for \"4b64361db73eb4ba8675e97ce97760ffbab2d1c51aec788feab08c15ac975298\" returns successfully" Apr 23 23:16:44.887401 systemd[1]: Created slice kubepods-besteffort-podc8b19a84_f1ea_49f6_a626_c1799c69a4b9.slice - libcontainer container kubepods-besteffort-podc8b19a84_f1ea_49f6_a626_c1799c69a4b9.slice. Apr 23 23:16:44.888316 kubelet[3406]: I0423 23:16:44.888121 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8b19a84-f1ea-49f6-a626-c1799c69a4b9-var-lib-calico\") pod \"tigera-operator-8458958b4d-szfwq\" (UID: \"c8b19a84-f1ea-49f6-a626-c1799c69a4b9\") " pod="tigera-operator/tigera-operator-8458958b4d-szfwq" Apr 23 23:16:44.888316 kubelet[3406]: I0423 23:16:44.888153 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8bh\" (UniqueName: \"kubernetes.io/projected/c8b19a84-f1ea-49f6-a626-c1799c69a4b9-kube-api-access-4p8bh\") pod \"tigera-operator-8458958b4d-szfwq\" (UID: \"c8b19a84-f1ea-49f6-a626-c1799c69a4b9\") " pod="tigera-operator/tigera-operator-8458958b4d-szfwq" Apr 23 23:16:45.191826 containerd[1896]: time="2026-04-23T23:16:45.191721430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-szfwq,Uid:c8b19a84-f1ea-49f6-a626-c1799c69a4b9,Namespace:tigera-operator,Attempt:0,}" Apr 23 23:16:45.234959 containerd[1896]: time="2026-04-23T23:16:45.234892615Z" level=info msg="connecting to shim 89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2" address="unix:///run/containerd/s/3966ba8e0145f03b18e3ec2e6f323a7996024589102bfd92a81222f71b07aad4" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:45.258823 systemd[1]: Started cri-containerd-89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2.scope - libcontainer container 
89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2. Apr 23 23:16:45.287003 containerd[1896]: time="2026-04-23T23:16:45.286966352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-szfwq,Uid:c8b19a84-f1ea-49f6-a626-c1799c69a4b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2\"" Apr 23 23:16:45.289438 containerd[1896]: time="2026-04-23T23:16:45.288367978Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 23 23:16:47.643645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432976319.mount: Deactivated successfully. Apr 23 23:16:49.547399 kubelet[3406]: I0423 23:16:49.546907 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztfwh" podStartSLOduration=6.546892869 podStartE2EDuration="6.546892869s" podCreationTimestamp="2026-04-23 23:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:16:44.93077646 +0000 UTC m=+6.218543392" watchObservedRunningTime="2026-04-23 23:16:49.546892869 +0000 UTC m=+10.834659729" Apr 23 23:16:49.567130 containerd[1896]: time="2026-04-23T23:16:49.567089269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:49.572475 containerd[1896]: time="2026-04-23T23:16:49.572436667Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=24868969" Apr 23 23:16:49.575564 containerd[1896]: time="2026-04-23T23:16:49.575533717Z" level=info msg="ImageCreate event name:\"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:49.579799 containerd[1896]: time="2026-04-23T23:16:49.579767121Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:49.580223 containerd[1896]: time="2026-04-23T23:16:49.580198321Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"24864964\" in 4.290685797s" Apr 23 23:16:49.580246 containerd[1896]: time="2026-04-23T23:16:49.580227546Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\"" Apr 23 23:16:49.589016 containerd[1896]: time="2026-04-23T23:16:49.588662473Z" level=info msg="CreateContainer within sandbox \"89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 23 23:16:49.605668 containerd[1896]: time="2026-04-23T23:16:49.605633026Z" level=info msg="Container 5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:49.608178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183091055.mount: Deactivated successfully. 
Apr 23 23:16:49.621006 containerd[1896]: time="2026-04-23T23:16:49.620969663Z" level=info msg="CreateContainer within sandbox \"89ef1013b0073fa399559bc35a20c889e4102a93cf802582284830e712d2dbe2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9\"" Apr 23 23:16:49.622532 containerd[1896]: time="2026-04-23T23:16:49.621603167Z" level=info msg="StartContainer for \"5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9\"" Apr 23 23:16:49.623429 containerd[1896]: time="2026-04-23T23:16:49.623297253Z" level=info msg="connecting to shim 5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9" address="unix:///run/containerd/s/3966ba8e0145f03b18e3ec2e6f323a7996024589102bfd92a81222f71b07aad4" protocol=ttrpc version=3 Apr 23 23:16:49.642866 systemd[1]: Started cri-containerd-5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9.scope - libcontainer container 5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9. 
Apr 23 23:16:49.670227 containerd[1896]: time="2026-04-23T23:16:49.670193630Z" level=info msg="StartContainer for \"5b1b9e794cfe95837249897cfcf6d0ee69ace10af00a051287a8a08abdc59ee9\" returns successfully" Apr 23 23:16:49.999227 kubelet[3406]: I0423 23:16:49.999064 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8458958b4d-szfwq" podStartSLOduration=1.706165993 podStartE2EDuration="5.999047622s" podCreationTimestamp="2026-04-23 23:16:44 +0000 UTC" firstStartedPulling="2026-04-23 23:16:45.288053463 +0000 UTC m=+6.575820315" lastFinishedPulling="2026-04-23 23:16:49.580935084 +0000 UTC m=+10.868701944" observedRunningTime="2026-04-23 23:16:49.93538394 +0000 UTC m=+11.223150800" watchObservedRunningTime="2026-04-23 23:16:49.999047622 +0000 UTC m=+11.286814482" Apr 23 23:16:54.830475 sudo[2379]: pam_unix(sudo:session): session closed for user root Apr 23 23:16:54.976154 sshd[2370]: Connection closed by 50.85.169.122 port 50596 Apr 23 23:16:54.978945 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Apr 23 23:16:54.983591 systemd[1]: sshd@6-10.0.0.29:22-50.85.169.122:50596.service: Deactivated successfully. Apr 23 23:16:54.985636 systemd[1]: session-9.scope: Deactivated successfully. Apr 23 23:16:54.988245 systemd[1]: session-9.scope: Consumed 3.448s CPU time, 223.2M memory peak. Apr 23 23:16:54.989731 systemd-logind[1874]: Session 9 logged out. Waiting for processes to exit. Apr 23 23:16:54.992954 systemd-logind[1874]: Removed session 9. Apr 23 23:16:57.047369 systemd[1]: Created slice kubepods-besteffort-pod198dd700_78ee_45b0_bd1b_061f79c800f2.slice - libcontainer container kubepods-besteffort-pod198dd700_78ee_45b0_bd1b_061f79c800f2.slice. 
Apr 23 23:16:57.148819 kubelet[3406]: I0423 23:16:57.148235 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blczg\" (UniqueName: \"kubernetes.io/projected/198dd700-78ee-45b0-bd1b-061f79c800f2-kube-api-access-blczg\") pod \"calico-typha-578567f84d-ff89g\" (UID: \"198dd700-78ee-45b0-bd1b-061f79c800f2\") " pod="calico-system/calico-typha-578567f84d-ff89g" Apr 23 23:16:57.148819 kubelet[3406]: I0423 23:16:57.148288 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198dd700-78ee-45b0-bd1b-061f79c800f2-tigera-ca-bundle\") pod \"calico-typha-578567f84d-ff89g\" (UID: \"198dd700-78ee-45b0-bd1b-061f79c800f2\") " pod="calico-system/calico-typha-578567f84d-ff89g" Apr 23 23:16:57.149515 kubelet[3406]: I0423 23:16:57.149285 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/198dd700-78ee-45b0-bd1b-061f79c800f2-typha-certs\") pod \"calico-typha-578567f84d-ff89g\" (UID: \"198dd700-78ee-45b0-bd1b-061f79c800f2\") " pod="calico-system/calico-typha-578567f84d-ff89g" Apr 23 23:16:57.165517 systemd[1]: Created slice kubepods-besteffort-podf99622ee_0ca3_4ed1_8331_7485fe9cc1da.slice - libcontainer container kubepods-besteffort-podf99622ee_0ca3_4ed1_8331_7485fe9cc1da.slice. 
Apr 23 23:16:57.264745 kubelet[3406]: E0423 23:16:57.264706 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:16:57.351609 kubelet[3406]: I0423 23:16:57.350660 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-bpffs\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351609 kubelet[3406]: I0423 23:16:57.350725 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-nodeproc\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351609 kubelet[3406]: I0423 23:16:57.350738 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-policysync\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351609 kubelet[3406]: I0423 23:16:57.350752 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-cni-net-dir\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351609 kubelet[3406]: I0423 23:16:57.350764 3406 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-flexvol-driver-host\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351812 kubelet[3406]: I0423 23:16:57.350775 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-xtables-lock\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351812 kubelet[3406]: I0423 23:16:57.350786 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-cni-bin-dir\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351812 kubelet[3406]: I0423 23:16:57.350799 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-var-lib-calico\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351812 kubelet[3406]: I0423 23:16:57.350810 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-var-run-calico\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351812 kubelet[3406]: I0423 23:16:57.350819 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qrk64\" (UniqueName: \"kubernetes.io/projected/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-kube-api-access-qrk64\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351890 kubelet[3406]: I0423 23:16:57.350829 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-sys-fs\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351890 kubelet[3406]: I0423 23:16:57.350839 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-lib-modules\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351890 kubelet[3406]: I0423 23:16:57.350847 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-node-certs\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351890 kubelet[3406]: I0423 23:16:57.350856 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-cni-log-dir\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351890 kubelet[3406]: I0423 23:16:57.350865 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f99622ee-0ca3-4ed1-8331-7485fe9cc1da-tigera-ca-bundle\") pod \"calico-node-b7n26\" (UID: \"f99622ee-0ca3-4ed1-8331-7485fe9cc1da\") " pod="calico-system/calico-node-b7n26" Apr 23 23:16:57.351968 containerd[1896]: time="2026-04-23T23:16:57.351795669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578567f84d-ff89g,Uid:198dd700-78ee-45b0-bd1b-061f79c800f2,Namespace:calico-system,Attempt:0,}" Apr 23 23:16:57.400622 containerd[1896]: time="2026-04-23T23:16:57.400560100Z" level=info msg="connecting to shim b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698" address="unix:///run/containerd/s/b2fc8b7c8d7fbbc2030e978f1f4404d062534c36c6a58fd09d80dfcbecf238cb" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:57.423846 systemd[1]: Started cri-containerd-b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698.scope - libcontainer container b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698. Apr 23 23:16:57.452800 kubelet[3406]: I0423 23:16:57.451508 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dc579ba9-ac2c-4e73-9991-4e4cc7b4cece-socket-dir\") pod \"csi-node-driver-jd28b\" (UID: \"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece\") " pod="calico-system/csi-node-driver-jd28b" Apr 23 23:16:57.452800 kubelet[3406]: I0423 23:16:57.451672 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc579ba9-ac2c-4e73-9991-4e4cc7b4cece-kubelet-dir\") pod \"csi-node-driver-jd28b\" (UID: \"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece\") " pod="calico-system/csi-node-driver-jd28b" Apr 23 23:16:57.452800 kubelet[3406]: I0423 23:16:57.451716 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/dc579ba9-ac2c-4e73-9991-4e4cc7b4cece-registration-dir\") pod \"csi-node-driver-jd28b\" (UID: \"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece\") " pod="calico-system/csi-node-driver-jd28b" Apr 23 23:16:57.452800 kubelet[3406]: I0423 23:16:57.451733 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dc579ba9-ac2c-4e73-9991-4e4cc7b4cece-varrun\") pod \"csi-node-driver-jd28b\" (UID: \"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece\") " pod="calico-system/csi-node-driver-jd28b" Apr 23 23:16:57.452800 kubelet[3406]: I0423 23:16:57.451918 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr98x\" (UniqueName: \"kubernetes.io/projected/dc579ba9-ac2c-4e73-9991-4e4cc7b4cece-kube-api-access-rr98x\") pod \"csi-node-driver-jd28b\" (UID: \"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece\") " pod="calico-system/csi-node-driver-jd28b" Apr 23 23:16:57.456635 kubelet[3406]: E0423 23:16:57.456605 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.456635 kubelet[3406]: W0423 23:16:57.456627 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.456917 kubelet[3406]: E0423 23:16:57.456901 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.457234 kubelet[3406]: E0423 23:16:57.457219 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.457234 kubelet[3406]: W0423 23:16:57.457232 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.457278 kubelet[3406]: E0423 23:16:57.457242 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.457824 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.458690 kubelet[3406]: W0423 23:16:57.457839 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.457849 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.458290 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.458690 kubelet[3406]: W0423 23:16:57.458301 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.458310 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.458599 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.458690 kubelet[3406]: W0423 23:16:57.458605 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.458690 kubelet[3406]: E0423 23:16:57.458613 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.459109 kubelet[3406]: E0423 23:16:57.459092 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.459109 kubelet[3406]: W0423 23:16:57.459107 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.459638 kubelet[3406]: E0423 23:16:57.459116 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.461173 kubelet[3406]: E0423 23:16:57.461150 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.461173 kubelet[3406]: W0423 23:16:57.461166 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.461173 kubelet[3406]: E0423 23:16:57.461177 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.463700 containerd[1896]: time="2026-04-23T23:16:57.461907404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578567f84d-ff89g,Uid:198dd700-78ee-45b0-bd1b-061f79c800f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698\"" Apr 23 23:16:57.464765 kubelet[3406]: E0423 23:16:57.464551 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.467345 kubelet[3406]: W0423 23:16:57.467318 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.467345 kubelet[3406]: E0423 23:16:57.467343 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.467953 kubelet[3406]: E0423 23:16:57.467896 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.467953 kubelet[3406]: W0423 23:16:57.467908 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.467953 kubelet[3406]: E0423 23:16:57.467924 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.468709 kubelet[3406]: E0423 23:16:57.468241 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.468709 kubelet[3406]: W0423 23:16:57.468248 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.468709 kubelet[3406]: E0423 23:16:57.468256 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.469071 containerd[1896]: time="2026-04-23T23:16:57.468886393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469097 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.469816 kubelet[3406]: W0423 23:16:57.469114 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469124 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469307 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.469816 kubelet[3406]: W0423 23:16:57.469314 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469323 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469453 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.469816 kubelet[3406]: W0423 23:16:57.469459 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469465 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.469816 kubelet[3406]: E0423 23:16:57.469626 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.469983 kubelet[3406]: W0423 23:16:57.469633 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.469983 kubelet[3406]: E0423 23:16:57.469650 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.470797 kubelet[3406]: E0423 23:16:57.470781 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.470907 kubelet[3406]: W0423 23:16:57.470895 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.470962 kubelet[3406]: E0423 23:16:57.470952 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.471417 kubelet[3406]: E0423 23:16:57.471364 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.471417 kubelet[3406]: W0423 23:16:57.471381 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.471417 kubelet[3406]: E0423 23:16:57.471393 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.474854 kubelet[3406]: E0423 23:16:57.474834 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.474854 kubelet[3406]: W0423 23:16:57.474850 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.474947 kubelet[3406]: E0423 23:16:57.474860 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.553414 kubelet[3406]: E0423 23:16:57.553380 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.553414 kubelet[3406]: W0423 23:16:57.553404 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.553414 kubelet[3406]: E0423 23:16:57.553424 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.553709 kubelet[3406]: E0423 23:16:57.553571 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.553709 kubelet[3406]: W0423 23:16:57.553578 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.553709 kubelet[3406]: E0423 23:16:57.553584 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.553777 kubelet[3406]: E0423 23:16:57.553763 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.553777 kubelet[3406]: W0423 23:16:57.553771 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.553943 kubelet[3406]: E0423 23:16:57.553777 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.554044 kubelet[3406]: E0423 23:16:57.554028 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.554177 kubelet[3406]: W0423 23:16:57.554072 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.554177 kubelet[3406]: E0423 23:16:57.554087 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.554386 kubelet[3406]: E0423 23:16:57.554373 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.554727 kubelet[3406]: W0423 23:16:57.554586 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.554727 kubelet[3406]: E0423 23:16:57.554606 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.554965 kubelet[3406]: E0423 23:16:57.554925 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.554965 kubelet[3406]: W0423 23:16:57.554938 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.555112 kubelet[3406]: E0423 23:16:57.554951 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.555293 kubelet[3406]: E0423 23:16:57.555282 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.555364 kubelet[3406]: W0423 23:16:57.555355 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.555427 kubelet[3406]: E0423 23:16:57.555402 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.555692 kubelet[3406]: E0423 23:16:57.555630 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.555692 kubelet[3406]: W0423 23:16:57.555647 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.555692 kubelet[3406]: E0423 23:16:57.555659 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.555855 kubelet[3406]: E0423 23:16:57.555792 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.555855 kubelet[3406]: W0423 23:16:57.555804 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.555855 kubelet[3406]: E0423 23:16:57.555825 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.556011 kubelet[3406]: E0423 23:16:57.555933 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.556011 kubelet[3406]: W0423 23:16:57.555944 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.556011 kubelet[3406]: E0423 23:16:57.555952 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.556369 kubelet[3406]: E0423 23:16:57.556345 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.557008 kubelet[3406]: W0423 23:16:57.556983 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.557008 kubelet[3406]: E0423 23:16:57.557009 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.557178 kubelet[3406]: E0423 23:16:57.557168 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.557178 kubelet[3406]: W0423 23:16:57.557175 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.557279 kubelet[3406]: E0423 23:16:57.557181 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.557295 kubelet[3406]: E0423 23:16:57.557288 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.557358 kubelet[3406]: W0423 23:16:57.557296 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.557358 kubelet[3406]: E0423 23:16:57.557304 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.557824 kubelet[3406]: E0423 23:16:57.557804 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.557824 kubelet[3406]: W0423 23:16:57.557821 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.558004 kubelet[3406]: E0423 23:16:57.557832 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.558004 kubelet[3406]: E0423 23:16:57.557966 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.558004 kubelet[3406]: W0423 23:16:57.557972 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.558004 kubelet[3406]: E0423 23:16:57.557977 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.558309 kubelet[3406]: E0423 23:16:57.558273 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.558309 kubelet[3406]: W0423 23:16:57.558285 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.558309 kubelet[3406]: E0423 23:16:57.558295 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.558513 kubelet[3406]: E0423 23:16:57.558430 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.558513 kubelet[3406]: W0423 23:16:57.558506 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.558963 kubelet[3406]: E0423 23:16:57.558521 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.559228 kubelet[3406]: E0423 23:16:57.559155 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.559228 kubelet[3406]: W0423 23:16:57.559183 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.559228 kubelet[3406]: E0423 23:16:57.559200 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.559772 kubelet[3406]: E0423 23:16:57.559746 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.559772 kubelet[3406]: W0423 23:16:57.559765 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.559772 kubelet[3406]: E0423 23:16:57.559775 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.560773 kubelet[3406]: E0423 23:16:57.560757 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.560773 kubelet[3406]: W0423 23:16:57.560770 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.560910 kubelet[3406]: E0423 23:16:57.560783 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.560992 kubelet[3406]: E0423 23:16:57.560928 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.560992 kubelet[3406]: W0423 23:16:57.560934 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.560992 kubelet[3406]: E0423 23:16:57.560940 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.561147 kubelet[3406]: E0423 23:16:57.561033 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.561147 kubelet[3406]: W0423 23:16:57.561038 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.561147 kubelet[3406]: E0423 23:16:57.561043 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:16:57.561147 kubelet[3406]: E0423 23:16:57.561139 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:57.561147 kubelet[3406]: W0423 23:16:57.561143 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:57.561147 kubelet[3406]: E0423 23:16:57.561148 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:57.769832 containerd[1896]: time="2026-04-23T23:16:57.769142604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7n26,Uid:f99622ee-0ca3-4ed1-8331-7485fe9cc1da,Namespace:calico-system,Attempt:0,}" Apr 23 23:16:57.816248 containerd[1896]: time="2026-04-23T23:16:57.815984389Z" level=info msg="connecting to shim 3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24" address="unix:///run/containerd/s/8cdfe13ca70b697ae96cc27c2b268c024f800dc37c09ee888d2f8ec0c2d5d240" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:16:57.835844 systemd[1]: Started cri-containerd-3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24.scope - libcontainer container 3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24. Apr 23 23:16:57.862152 containerd[1896]: time="2026-04-23T23:16:57.862105917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7n26,Uid:f99622ee-0ca3-4ed1-8331-7485fe9cc1da,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\"" Apr 23 23:16:58.861438 kubelet[3406]: E0423 23:16:58.860983 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:16:59.058384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726996249.mount: Deactivated successfully. 
Apr 23 23:16:59.756283 containerd[1896]: time="2026-04-23T23:16:59.755675588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:59.758609 containerd[1896]: time="2026-04-23T23:16:59.758578981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32841445" Apr 23 23:16:59.761729 containerd[1896]: time="2026-04-23T23:16:59.761701238Z" level=info msg="ImageCreate event name:\"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:59.766938 containerd[1896]: time="2026-04-23T23:16:59.766897251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:16:59.768265 containerd[1896]: time="2026-04-23T23:16:59.768225163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"32841299\" in 2.299304433s" Apr 23 23:16:59.768265 containerd[1896]: time="2026-04-23T23:16:59.768261340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\"" Apr 23 23:16:59.769287 containerd[1896]: time="2026-04-23T23:16:59.769264369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 23 23:16:59.786715 containerd[1896]: time="2026-04-23T23:16:59.786555843Z" level=info msg="CreateContainer within sandbox \"b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 23 23:16:59.807925 containerd[1896]: time="2026-04-23T23:16:59.807168254Z" level=info msg="Container b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:16:59.807766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4210944659.mount: Deactivated successfully. Apr 23 23:16:59.826076 containerd[1896]: time="2026-04-23T23:16:59.826028778Z" level=info msg="CreateContainer within sandbox \"b38ddd972576028d035ddf337902cc3b1b833e8b3a5056e1ac85440038657698\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4\"" Apr 23 23:16:59.828723 containerd[1896]: time="2026-04-23T23:16:59.827833443Z" level=info msg="StartContainer for \"b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4\"" Apr 23 23:16:59.828723 containerd[1896]: time="2026-04-23T23:16:59.828630888Z" level=info msg="connecting to shim b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4" address="unix:///run/containerd/s/b2fc8b7c8d7fbbc2030e978f1f4404d062534c36c6a58fd09d80dfcbecf238cb" protocol=ttrpc version=3 Apr 23 23:16:59.846834 systemd[1]: Started cri-containerd-b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4.scope - libcontainer container b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4. 
Apr 23 23:16:59.886857 containerd[1896]: time="2026-04-23T23:16:59.886815925Z" level=info msg="StartContainer for \"b2701128a9c2a402f45446a2f839e77640c1efdb5a21194fb5640f6181b5bdd4\" returns successfully" Apr 23 23:16:59.970032 kubelet[3406]: E0423 23:16:59.969717 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:16:59.970032 kubelet[3406]: W0423 23:16:59.969742 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:16:59.970032 kubelet[3406]: E0423 23:16:59.969763 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:16:59.970525 kubelet[3406]: I0423 23:16:59.970471 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-578567f84d-ff89g" podStartSLOduration=0.669422005 podStartE2EDuration="2.970460781s" podCreationTimestamp="2026-04-23 23:16:57 +0000 UTC" firstStartedPulling="2026-04-23 23:16:57.468047746 +0000 UTC m=+18.755814606" lastFinishedPulling="2026-04-23 23:16:59.769086522 +0000 UTC m=+21.056853382" observedRunningTime="2026-04-23 23:16:59.969156398 +0000 UTC m=+21.256923306" watchObservedRunningTime="2026-04-23 23:16:59.970460781 +0000 UTC m=+21.258227649" Apr 23 23:17:00.861491 kubelet[3406]: E0423 23:17:00.861170 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:00.951187 kubelet[3406]: I0423 23:17:00.951154 3406 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:17:00.988842 kubelet[3406]: E0423 23:17:00.988827 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.988867 kubelet[3406]: W0423 23:17:00.988838 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.988867 kubelet[3406]: E0423 23:17:00.988856 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.988974 kubelet[3406]: E0423 23:17:00.988962 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.988974 kubelet[3406]: W0423 23:17:00.988971 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.989006 kubelet[3406]: E0423 23:17:00.988978 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.989097 kubelet[3406]: E0423 23:17:00.989086 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.989097 kubelet[3406]: W0423 23:17:00.989095 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.989137 kubelet[3406]: E0423 23:17:00.989102 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.989248 kubelet[3406]: E0423 23:17:00.989213 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.989400 kubelet[3406]: W0423 23:17:00.989222 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.989400 kubelet[3406]: E0423 23:17:00.989330 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.989644 kubelet[3406]: E0423 23:17:00.989624 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.989664 kubelet[3406]: W0423 23:17:00.989651 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.989664 kubelet[3406]: E0423 23:17:00.989661 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.989975 kubelet[3406]: E0423 23:17:00.989949 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.989975 kubelet[3406]: W0423 23:17:00.989967 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.989975 kubelet[3406]: E0423 23:17:00.989976 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.990443 kubelet[3406]: E0423 23:17:00.990405 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.990531 kubelet[3406]: W0423 23:17:00.990497 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.990531 kubelet[3406]: E0423 23:17:00.990519 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.991365 kubelet[3406]: E0423 23:17:00.991239 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.991365 kubelet[3406]: W0423 23:17:00.991253 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.991365 kubelet[3406]: E0423 23:17:00.991263 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.991593 kubelet[3406]: E0423 23:17:00.991527 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.991593 kubelet[3406]: W0423 23:17:00.991537 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.991593 kubelet[3406]: E0423 23:17:00.991548 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.991900 kubelet[3406]: E0423 23:17:00.991812 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.991900 kubelet[3406]: W0423 23:17:00.991821 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.991900 kubelet[3406]: E0423 23:17:00.991830 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.992180 kubelet[3406]: E0423 23:17:00.992113 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992180 kubelet[3406]: W0423 23:17:00.992123 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.992180 kubelet[3406]: E0423 23:17:00.992132 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.992460 kubelet[3406]: E0423 23:17:00.992438 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992460 kubelet[3406]: W0423 23:17:00.992458 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.992519 kubelet[3406]: E0423 23:17:00.992468 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.992658 kubelet[3406]: E0423 23:17:00.992585 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992658 kubelet[3406]: W0423 23:17:00.992590 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.992658 kubelet[3406]: E0423 23:17:00.992597 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.992748 kubelet[3406]: E0423 23:17:00.992728 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992748 kubelet[3406]: W0423 23:17:00.992737 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.992748 kubelet[3406]: E0423 23:17:00.992743 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.992748 kubelet[3406]: E0423 23:17:00.992836 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992748 kubelet[3406]: W0423 23:17:00.992841 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.992748 kubelet[3406]: E0423 23:17:00.992845 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.992748 kubelet[3406]: E0423 23:17:00.992923 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.992748 kubelet[3406]: W0423 23:17:00.992926 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.993112 kubelet[3406]: E0423 23:17:00.992932 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.993112 kubelet[3406]: E0423 23:17:00.993006 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.993112 kubelet[3406]: W0423 23:17:00.993010 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.993112 kubelet[3406]: E0423 23:17:00.993014 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.993112 kubelet[3406]: E0423 23:17:00.993084 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.993112 kubelet[3406]: W0423 23:17:00.993088 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.993112 kubelet[3406]: E0423 23:17:00.993092 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.993441 kubelet[3406]: E0423 23:17:00.993186 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.993441 kubelet[3406]: W0423 23:17:00.993190 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.993441 kubelet[3406]: E0423 23:17:00.993195 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.993717 kubelet[3406]: E0423 23:17:00.993674 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.993767 kubelet[3406]: W0423 23:17:00.993758 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.993822 kubelet[3406]: E0423 23:17:00.993809 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.994056 kubelet[3406]: E0423 23:17:00.994024 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.994056 kubelet[3406]: W0423 23:17:00.994035 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.994056 kubelet[3406]: E0423 23:17:00.994044 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.994318 kubelet[3406]: E0423 23:17:00.994306 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.994457 kubelet[3406]: W0423 23:17:00.994377 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.994457 kubelet[3406]: E0423 23:17:00.994391 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.994895 kubelet[3406]: E0423 23:17:00.994746 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.994895 kubelet[3406]: W0423 23:17:00.994758 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.994895 kubelet[3406]: E0423 23:17:00.994768 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:00.995279 kubelet[3406]: E0423 23:17:00.995181 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.995279 kubelet[3406]: W0423 23:17:00.995205 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.995279 kubelet[3406]: E0423 23:17:00.995221 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:17:00.995743 kubelet[3406]: E0423 23:17:00.995675 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:17:00.995743 kubelet[3406]: W0423 23:17:00.995705 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:17:00.995743 kubelet[3406]: E0423 23:17:00.995716 3406 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:17:01.120355 containerd[1896]: time="2026-04-23T23:17:01.119107989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:01.122370 containerd[1896]: time="2026-04-23T23:17:01.122333754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4404646" Apr 23 23:17:01.126006 containerd[1896]: time="2026-04-23T23:17:01.125966021Z" level=info msg="ImageCreate event name:\"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:01.130536 containerd[1896]: time="2026-04-23T23:17:01.130494385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:01.130956 containerd[1896]: time="2026-04-23T23:17:01.130795788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"6980245\" in 1.361504979s" Apr 23 23:17:01.130956 containerd[1896]: time="2026-04-23T23:17:01.130822461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\"" Apr 23 23:17:01.139559 containerd[1896]: time="2026-04-23T23:17:01.139499512Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 23 23:17:01.159621 containerd[1896]: time="2026-04-23T23:17:01.158848077Z" level=info msg="Container 952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:01.161409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201192881.mount: Deactivated successfully. Apr 23 23:17:01.177767 containerd[1896]: time="2026-04-23T23:17:01.177715825Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141\"" Apr 23 23:17:01.179170 containerd[1896]: time="2026-04-23T23:17:01.179025585Z" level=info msg="StartContainer for \"952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141\"" Apr 23 23:17:01.182294 containerd[1896]: time="2026-04-23T23:17:01.182223980Z" level=info msg="connecting to shim 952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141" address="unix:///run/containerd/s/8cdfe13ca70b697ae96cc27c2b268c024f800dc37c09ee888d2f8ec0c2d5d240" protocol=ttrpc version=3 Apr 23 23:17:01.197826 systemd[1]: Started cri-containerd-952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141.scope - libcontainer container 952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141. Apr 23 23:17:01.246650 containerd[1896]: time="2026-04-23T23:17:01.245997980Z" level=info msg="StartContainer for \"952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141\" returns successfully" Apr 23 23:17:01.254346 systemd[1]: cri-containerd-952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141.scope: Deactivated successfully. 
Apr 23 23:17:01.258063 containerd[1896]: time="2026-04-23T23:17:01.258017768Z" level=info msg="received container exit event container_id:\"952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141\" id:\"952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141\" pid:4079 exited_at:{seconds:1776986221 nanos:257575936}" Apr 23 23:17:01.276513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-952ff26c7bb2d33d0e1bd596624e840d2210eb3a8f4df89fd94356bd2481c141-rootfs.mount: Deactivated successfully. Apr 23 23:17:02.861091 kubelet[3406]: E0423 23:17:02.860976 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:02.960281 containerd[1896]: time="2026-04-23T23:17:02.960241002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 23 23:17:04.861694 kubelet[3406]: E0423 23:17:04.861631 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:06.424915 kubelet[3406]: I0423 23:17:06.424607 3406 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:17:06.861620 kubelet[3406]: E0423 23:17:06.861357 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:08.504576 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount28743398.mount: Deactivated successfully. Apr 23 23:17:08.783381 containerd[1896]: time="2026-04-23T23:17:08.782802009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:08.786822 containerd[1896]: time="2026-04-23T23:17:08.786774334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=153029581" Apr 23 23:17:08.789987 containerd[1896]: time="2026-04-23T23:17:08.789809818Z" level=info msg="ImageCreate event name:\"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:08.793588 containerd[1896]: time="2026-04-23T23:17:08.793557879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:08.794093 containerd[1896]: time="2026-04-23T23:17:08.793832417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"153029443\" in 5.83355559s" Apr 23 23:17:08.794093 containerd[1896]: time="2026-04-23T23:17:08.793860786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\"" Apr 23 23:17:08.802405 containerd[1896]: time="2026-04-23T23:17:08.802353087Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 23 
23:17:08.830377 containerd[1896]: time="2026-04-23T23:17:08.830326312Z" level=info msg="Container a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:08.850557 containerd[1896]: time="2026-04-23T23:17:08.850507461Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87\"" Apr 23 23:17:08.852612 containerd[1896]: time="2026-04-23T23:17:08.852321373Z" level=info msg="StartContainer for \"a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87\"" Apr 23 23:17:08.853596 containerd[1896]: time="2026-04-23T23:17:08.853572337Z" level=info msg="connecting to shim a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87" address="unix:///run/containerd/s/8cdfe13ca70b697ae96cc27c2b268c024f800dc37c09ee888d2f8ec0c2d5d240" protocol=ttrpc version=3 Apr 23 23:17:08.863355 kubelet[3406]: E0423 23:17:08.863316 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:08.878830 systemd[1]: Started cri-containerd-a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87.scope - libcontainer container a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87. Apr 23 23:17:08.930317 containerd[1896]: time="2026-04-23T23:17:08.930276836Z" level=info msg="StartContainer for \"a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87\" returns successfully" Apr 23 23:17:08.960161 systemd[1]: cri-containerd-a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87.scope: Deactivated successfully. 
Apr 23 23:17:08.962190 containerd[1896]: time="2026-04-23T23:17:08.962137615Z" level=info msg="received container exit event container_id:\"a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87\" id:\"a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87\" pid:4137 exited_at:{seconds:1776986228 nanos:961798539}" Apr 23 23:17:08.994936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a02f639505beaced9fb81ae99b0f88f05632231ab8301492823defa4ff91af87-rootfs.mount: Deactivated successfully. Apr 23 23:17:10.861696 kubelet[3406]: E0423 23:17:10.861640 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:11.001315 containerd[1896]: time="2026-04-23T23:17:11.001271996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 23 23:17:12.862966 kubelet[3406]: E0423 23:17:12.861847 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:14.139014 containerd[1896]: time="2026-04-23T23:17:14.138954349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:14.141759 containerd[1896]: time="2026-04-23T23:17:14.141729302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=62266008" Apr 23 23:17:14.145555 containerd[1896]: time="2026-04-23T23:17:14.145522540Z" level=info msg="ImageCreate event 
name:\"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:14.150150 containerd[1896]: time="2026-04-23T23:17:14.150113406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:14.150576 containerd[1896]: time="2026-04-23T23:17:14.150552534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"64841647\" in 3.149243808s" Apr 23 23:17:14.150604 containerd[1896]: time="2026-04-23T23:17:14.150583743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\"" Apr 23 23:17:14.159778 containerd[1896]: time="2026-04-23T23:17:14.159256281Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 23 23:17:14.178707 containerd[1896]: time="2026-04-23T23:17:14.177366135Z" level=info msg="Container 2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:14.196418 containerd[1896]: time="2026-04-23T23:17:14.196368509Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a\"" Apr 23 23:17:14.198662 containerd[1896]: time="2026-04-23T23:17:14.197371385Z" 
level=info msg="StartContainer for \"2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a\"" Apr 23 23:17:14.199620 containerd[1896]: time="2026-04-23T23:17:14.199574790Z" level=info msg="connecting to shim 2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a" address="unix:///run/containerd/s/8cdfe13ca70b697ae96cc27c2b268c024f800dc37c09ee888d2f8ec0c2d5d240" protocol=ttrpc version=3 Apr 23 23:17:14.221845 systemd[1]: Started cri-containerd-2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a.scope - libcontainer container 2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a. Apr 23 23:17:14.277894 containerd[1896]: time="2026-04-23T23:17:14.277810286Z" level=info msg="StartContainer for \"2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a\" returns successfully" Apr 23 23:17:14.861318 kubelet[3406]: E0423 23:17:14.860959 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:16.195891 systemd[1]: cri-containerd-2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a.scope: Deactivated successfully. Apr 23 23:17:16.196184 systemd[1]: cri-containerd-2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a.scope: Consumed 331ms CPU time, 183.5M memory peak, 165.6M written to disk. 
Apr 23 23:17:16.200470 containerd[1896]: time="2026-04-23T23:17:16.200427066Z" level=info msg="received container exit event container_id:\"2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a\" id:\"2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a\" pid:4196 exited_at:{seconds:1776986236 nanos:196442670}" Apr 23 23:17:16.224152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d7e353ad3c6e25d59bc84b06e5881a766221d0047d39fa4a86b06fae183a71a-rootfs.mount: Deactivated successfully. Apr 23 23:17:16.228542 kubelet[3406]: I0423 23:17:16.227984 3406 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 23 23:17:16.282547 systemd[1]: Created slice kubepods-burstable-pod18471ba6_9ba9_4c30_a917_4310db7d988d.slice - libcontainer container kubepods-burstable-pod18471ba6_9ba9_4c30_a917_4310db7d988d.slice. Apr 23 23:17:16.285589 kubelet[3406]: I0423 23:17:16.285556 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18471ba6-9ba9-4c30-a917-4310db7d988d-config-volume\") pod \"coredns-674b8bbfcf-6bwz4\" (UID: \"18471ba6-9ba9-4c30-a917-4310db7d988d\") " pod="kube-system/coredns-674b8bbfcf-6bwz4" Apr 23 23:17:16.285589 kubelet[3406]: I0423 23:17:16.285589 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjn6\" (UniqueName: \"kubernetes.io/projected/18471ba6-9ba9-4c30-a917-4310db7d988d-kube-api-access-wrjn6\") pod \"coredns-674b8bbfcf-6bwz4\" (UID: \"18471ba6-9ba9-4c30-a917-4310db7d988d\") " pod="kube-system/coredns-674b8bbfcf-6bwz4" Apr 23 23:17:16.285722 kubelet[3406]: I0423 23:17:16.285604 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxngc\" (UniqueName: \"kubernetes.io/projected/8665c021-af05-4797-988c-fc76f1cfacc5-kube-api-access-bxngc\") pod 
\"coredns-674b8bbfcf-p88vb\" (UID: \"8665c021-af05-4797-988c-fc76f1cfacc5\") " pod="kube-system/coredns-674b8bbfcf-p88vb" Apr 23 23:17:16.285722 kubelet[3406]: I0423 23:17:16.285620 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8665c021-af05-4797-988c-fc76f1cfacc5-config-volume\") pod \"coredns-674b8bbfcf-p88vb\" (UID: \"8665c021-af05-4797-988c-fc76f1cfacc5\") " pod="kube-system/coredns-674b8bbfcf-p88vb" Apr 23 23:17:16.295278 systemd[1]: Created slice kubepods-burstable-pod8665c021_af05_4797_988c_fc76f1cfacc5.slice - libcontainer container kubepods-burstable-pod8665c021_af05_4797_988c_fc76f1cfacc5.slice. Apr 23 23:17:16.302864 systemd[1]: Created slice kubepods-besteffort-podd38ee980_eebf_48ac_9c4c_b3424746be66.slice - libcontainer container kubepods-besteffort-podd38ee980_eebf_48ac_9c4c_b3424746be66.slice. Apr 23 23:17:16.310273 systemd[1]: Created slice kubepods-besteffort-pod023d6ef6_97f0_4853_ac4d_ffc0ba1b54d1.slice - libcontainer container kubepods-besteffort-pod023d6ef6_97f0_4853_ac4d_ffc0ba1b54d1.slice. Apr 23 23:17:16.317833 systemd[1]: Created slice kubepods-besteffort-pode4976756_45b2_48c8_ace8_10ebe7175081.slice - libcontainer container kubepods-besteffort-pode4976756_45b2_48c8_ace8_10ebe7175081.slice. Apr 23 23:17:16.322048 systemd[1]: Created slice kubepods-besteffort-pod58058497_bc8e_403e_b752_b973a96c5354.slice - libcontainer container kubepods-besteffort-pod58058497_bc8e_403e_b752_b973a96c5354.slice. Apr 23 23:17:16.329757 systemd[1]: Created slice kubepods-besteffort-pod07bc0a08_4eeb_4eef_b101_e431b1d35421.slice - libcontainer container kubepods-besteffort-pod07bc0a08_4eeb_4eef_b101_e431b1d35421.slice. 
Apr 23 23:17:16.385891 kubelet[3406]: I0423 23:17:16.385833 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58058497-bc8e-403e-b752-b973a96c5354-whisker-backend-key-pair\") pod \"whisker-6bf7ccf6d9-ww62z\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.385891 kubelet[3406]: I0423 23:17:16.385869 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d38ee980-eebf-48ac-9c4c-b3424746be66-goldmane-key-pair\") pod \"goldmane-57885fdd4c-ws8th\" (UID: \"d38ee980-eebf-48ac-9c4c-b3424746be66\") " pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.385891 kubelet[3406]: I0423 23:17:16.385889 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1-calico-apiserver-certs\") pod \"calico-apiserver-769c4fbb8-zqlhl\" (UID: \"023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1\") " pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" Apr 23 23:17:16.386128 kubelet[3406]: I0423 23:17:16.385918 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95trr\" (UniqueName: \"kubernetes.io/projected/07bc0a08-4eeb-4eef-b101-e431b1d35421-kube-api-access-95trr\") pod \"calico-apiserver-769c4fbb8-vjrjh\" (UID: \"07bc0a08-4eeb-4eef-b101-e431b1d35421\") " pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" Apr 23 23:17:16.386128 kubelet[3406]: I0423 23:17:16.385935 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-whisker-ca-bundle\") pod \"whisker-6bf7ccf6d9-ww62z\" 
(UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.386128 kubelet[3406]: I0423 23:17:16.385950 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07bc0a08-4eeb-4eef-b101-e431b1d35421-calico-apiserver-certs\") pod \"calico-apiserver-769c4fbb8-vjrjh\" (UID: \"07bc0a08-4eeb-4eef-b101-e431b1d35421\") " pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" Apr 23 23:17:16.386128 kubelet[3406]: I0423 23:17:16.385959 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdgm\" (UniqueName: \"kubernetes.io/projected/58058497-bc8e-403e-b752-b973a96c5354-kube-api-access-stdgm\") pod \"whisker-6bf7ccf6d9-ww62z\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.386128 kubelet[3406]: I0423 23:17:16.385993 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zk56\" (UniqueName: \"kubernetes.io/projected/023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1-kube-api-access-8zk56\") pod \"calico-apiserver-769c4fbb8-zqlhl\" (UID: \"023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1\") " pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" Apr 23 23:17:16.386222 kubelet[3406]: I0423 23:17:16.386011 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38ee980-eebf-48ac-9c4c-b3424746be66-config\") pod \"goldmane-57885fdd4c-ws8th\" (UID: \"d38ee980-eebf-48ac-9c4c-b3424746be66\") " pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.386222 kubelet[3406]: I0423 23:17:16.386022 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89sd\" (UniqueName: 
\"kubernetes.io/projected/d38ee980-eebf-48ac-9c4c-b3424746be66-kube-api-access-d89sd\") pod \"goldmane-57885fdd4c-ws8th\" (UID: \"d38ee980-eebf-48ac-9c4c-b3424746be66\") " pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.386222 kubelet[3406]: I0423 23:17:16.386033 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-nginx-config\") pod \"whisker-6bf7ccf6d9-ww62z\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.386222 kubelet[3406]: I0423 23:17:16.386045 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cr2\" (UniqueName: \"kubernetes.io/projected/e4976756-45b2-48c8-ace8-10ebe7175081-kube-api-access-x6cr2\") pod \"calico-kube-controllers-544dbfb45c-sgmsk\" (UID: \"e4976756-45b2-48c8-ace8-10ebe7175081\") " pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" Apr 23 23:17:16.386222 kubelet[3406]: I0423 23:17:16.386055 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d38ee980-eebf-48ac-9c4c-b3424746be66-goldmane-ca-bundle\") pod \"goldmane-57885fdd4c-ws8th\" (UID: \"d38ee980-eebf-48ac-9c4c-b3424746be66\") " pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.386297 kubelet[3406]: I0423 23:17:16.386074 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4976756-45b2-48c8-ace8-10ebe7175081-tigera-ca-bundle\") pod \"calico-kube-controllers-544dbfb45c-sgmsk\" (UID: \"e4976756-45b2-48c8-ace8-10ebe7175081\") " pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" Apr 23 23:17:16.589743 containerd[1896]: time="2026-04-23T23:17:16.589394928Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bwz4,Uid:18471ba6-9ba9-4c30-a917-4310db7d988d,Namespace:kube-system,Attempt:0,}" Apr 23 23:17:16.601264 containerd[1896]: time="2026-04-23T23:17:16.601218833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p88vb,Uid:8665c021-af05-4797-988c-fc76f1cfacc5,Namespace:kube-system,Attempt:0,}" Apr 23 23:17:16.614972 containerd[1896]: time="2026-04-23T23:17:16.614913524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-ws8th,Uid:d38ee980-eebf-48ac-9c4c-b3424746be66,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.616697 containerd[1896]: time="2026-04-23T23:17:16.616647089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-zqlhl,Uid:023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.623284 containerd[1896]: time="2026-04-23T23:17:16.623111669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544dbfb45c-sgmsk,Uid:e4976756-45b2-48c8-ace8-10ebe7175081,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.627426 containerd[1896]: time="2026-04-23T23:17:16.627400788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf7ccf6d9-ww62z,Uid:58058497-bc8e-403e-b752-b973a96c5354,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.633891 containerd[1896]: time="2026-04-23T23:17:16.633863840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-vjrjh,Uid:07bc0a08-4eeb-4eef-b101-e431b1d35421,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.749856 containerd[1896]: time="2026-04-23T23:17:16.749740071Z" level=error msg="Failed to destroy network for sandbox \"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 23 23:17:16.750433 containerd[1896]: time="2026-04-23T23:17:16.750363388Z" level=error msg="Failed to destroy network for sandbox \"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.751275 containerd[1896]: time="2026-04-23T23:17:16.751252492Z" level=error msg="Failed to destroy network for sandbox \"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.754539 containerd[1896]: time="2026-04-23T23:17:16.754115921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bwz4,Uid:18471ba6-9ba9-4c30-a917-4310db7d988d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.755366 kubelet[3406]: E0423 23:17:16.755257 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.755366 kubelet[3406]: E0423 23:17:16.755335 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6bwz4" Apr 23 23:17:16.755366 kubelet[3406]: E0423 23:17:16.755352 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6bwz4" Apr 23 23:17:16.755617 kubelet[3406]: E0423 23:17:16.755404 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6bwz4_kube-system(18471ba6-9ba9-4c30-a917-4310db7d988d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6bwz4_kube-system(18471ba6-9ba9-4c30-a917-4310db7d988d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9535a2fb3e41f0fa9fe7b605bea91714565d708d6b1c73d9d743062b2c920a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6bwz4" podUID="18471ba6-9ba9-4c30-a917-4310db7d988d" Apr 23 23:17:16.758798 containerd[1896]: time="2026-04-23T23:17:16.758762605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p88vb,Uid:8665c021-af05-4797-988c-fc76f1cfacc5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.759866 kubelet[3406]: E0423 23:17:16.759177 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.759866 kubelet[3406]: E0423 23:17:16.759851 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p88vb" Apr 23 23:17:16.759954 kubelet[3406]: E0423 23:17:16.759873 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p88vb" Apr 23 23:17:16.759954 kubelet[3406]: E0423 23:17:16.759911 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p88vb_kube-system(8665c021-af05-4797-988c-fc76f1cfacc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p88vb_kube-system(8665c021-af05-4797-988c-fc76f1cfacc5)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"ea3a9be132637c30aeea6459264fc09cdf9cbc91c29147ecda5143a7ba64b6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p88vb" podUID="8665c021-af05-4797-988c-fc76f1cfacc5" Apr 23 23:17:16.763934 containerd[1896]: time="2026-04-23T23:17:16.763831367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-ws8th,Uid:d38ee980-eebf-48ac-9c4c-b3424746be66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.764116 kubelet[3406]: E0423 23:17:16.764022 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.764116 kubelet[3406]: E0423 23:17:16.764112 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.764185 kubelet[3406]: E0423 23:17:16.764128 3406 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-ws8th" Apr 23 23:17:16.764185 kubelet[3406]: E0423 23:17:16.764172 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-57885fdd4c-ws8th_calico-system(d38ee980-eebf-48ac-9c4c-b3424746be66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-57885fdd4c-ws8th_calico-system(d38ee980-eebf-48ac-9c4c-b3424746be66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f33c8aa61c40b598e357c411168440beee30af3202923679800b09b9346b98c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-57885fdd4c-ws8th" podUID="d38ee980-eebf-48ac-9c4c-b3424746be66" Apr 23 23:17:16.789096 containerd[1896]: time="2026-04-23T23:17:16.788963446Z" level=error msg="Failed to destroy network for sandbox \"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.795204 containerd[1896]: time="2026-04-23T23:17:16.795083574Z" level=error msg="Failed to destroy network for sandbox \"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 23 23:17:16.795509 containerd[1896]: time="2026-04-23T23:17:16.795475731Z" level=error msg="Failed to destroy network for sandbox \"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.795674 containerd[1896]: time="2026-04-23T23:17:16.795647770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-zqlhl,Uid:023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.796901 kubelet[3406]: E0423 23:17:16.796845 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.796983 kubelet[3406]: E0423 23:17:16.796920 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" Apr 23 23:17:16.796983 kubelet[3406]: E0423 23:17:16.796936 
3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" Apr 23 23:17:16.797029 kubelet[3406]: E0423 23:17:16.796985 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769c4fbb8-zqlhl_calico-system(023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769c4fbb8-zqlhl_calico-system(023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43ed210cdc08a5e61d9165e97ee46bee5cddac45d82a64605dad7b3df4ca5a61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" podUID="023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1" Apr 23 23:17:16.798851 containerd[1896]: time="2026-04-23T23:17:16.798755895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544dbfb45c-sgmsk,Uid:e4976756-45b2-48c8-ace8-10ebe7175081,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.799073 kubelet[3406]: E0423 23:17:16.799025 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.799073 kubelet[3406]: E0423 23:17:16.799063 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" Apr 23 23:17:16.799073 kubelet[3406]: E0423 23:17:16.799078 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" Apr 23 23:17:16.799290 kubelet[3406]: E0423 23:17:16.799113 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-544dbfb45c-sgmsk_calico-system(e4976756-45b2-48c8-ace8-10ebe7175081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-544dbfb45c-sgmsk_calico-system(e4976756-45b2-48c8-ace8-10ebe7175081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6bfa21dccfde4280554fa62a54d4d495cfaa1647e416a7def19307dd96417f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" podUID="e4976756-45b2-48c8-ace8-10ebe7175081" Apr 23 23:17:16.801995 containerd[1896]: time="2026-04-23T23:17:16.801882213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-vjrjh,Uid:07bc0a08-4eeb-4eef-b101-e431b1d35421,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.802093 kubelet[3406]: E0423 23:17:16.802058 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.802127 kubelet[3406]: E0423 23:17:16.802108 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" Apr 23 23:17:16.802127 kubelet[3406]: E0423 23:17:16.802121 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" Apr 23 23:17:16.802920 kubelet[3406]: E0423 23:17:16.802156 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769c4fbb8-vjrjh_calico-system(07bc0a08-4eeb-4eef-b101-e431b1d35421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769c4fbb8-vjrjh_calico-system(07bc0a08-4eeb-4eef-b101-e431b1d35421)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4988737f9448d0c46ea048fab16cfd9d39d2d7481627fe797b0cbc3c9d6947a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" podUID="07bc0a08-4eeb-4eef-b101-e431b1d35421" Apr 23 23:17:16.805352 containerd[1896]: time="2026-04-23T23:17:16.805319687Z" level=error msg="Failed to destroy network for sandbox \"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.808543 containerd[1896]: time="2026-04-23T23:17:16.808437637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf7ccf6d9-ww62z,Uid:58058497-bc8e-403e-b752-b973a96c5354,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 23 23:17:16.808850 kubelet[3406]: E0423 23:17:16.808724 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.808850 kubelet[3406]: E0423 23:17:16.808762 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.808850 kubelet[3406]: E0423 23:17:16.808775 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf7ccf6d9-ww62z" Apr 23 23:17:16.808960 kubelet[3406]: E0423 23:17:16.808814 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bf7ccf6d9-ww62z_calico-system(58058497-bc8e-403e-b752-b973a96c5354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bf7ccf6d9-ww62z_calico-system(58058497-bc8e-403e-b752-b973a96c5354)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1470c369d32ea6f51143c3e6a2869a84b8a22f1f5c60176042c66082ae3b56a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bf7ccf6d9-ww62z" podUID="58058497-bc8e-403e-b752-b973a96c5354" Apr 23 23:17:16.866572 systemd[1]: Created slice kubepods-besteffort-poddc579ba9_ac2c_4e73_9991_4e4cc7b4cece.slice - libcontainer container kubepods-besteffort-poddc579ba9_ac2c_4e73_9991_4e4cc7b4cece.slice. Apr 23 23:17:16.870110 containerd[1896]: time="2026-04-23T23:17:16.869672548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd28b,Uid:dc579ba9-ac2c-4e73-9991-4e4cc7b4cece,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:16.916213 containerd[1896]: time="2026-04-23T23:17:16.916162348Z" level=error msg="Failed to destroy network for sandbox \"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.920120 containerd[1896]: time="2026-04-23T23:17:16.920057741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd28b,Uid:dc579ba9-ac2c-4e73-9991-4e4cc7b4cece,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.920416 kubelet[3406]: E0423 23:17:16.920382 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:17:16.920572 kubelet[3406]: E0423 23:17:16.920533 3406 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd28b" Apr 23 23:17:16.920753 kubelet[3406]: E0423 23:17:16.920555 3406 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd28b" Apr 23 23:17:16.920840 kubelet[3406]: E0423 23:17:16.920816 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd28b_calico-system(dc579ba9-ac2c-4e73-9991-4e4cc7b4cece)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd28b_calico-system(dc579ba9-ac2c-4e73-9991-4e4cc7b4cece)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"525f8cd2f4646e3f3ae5806412838df23c4745762a753c4e87c3f070828d49a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd28b" podUID="dc579ba9-ac2c-4e73-9991-4e4cc7b4cece" Apr 23 23:17:17.035069 containerd[1896]: time="2026-04-23T23:17:17.035020155Z" level=info 
msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 23 23:17:17.072142 containerd[1896]: time="2026-04-23T23:17:17.072089831Z" level=info msg="Container e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:17.093558 containerd[1896]: time="2026-04-23T23:17:17.093505586Z" level=info msg="CreateContainer within sandbox \"3dbd907ca20e10f976994c3e42706fd1fc9dc54c24228ff4d4998b2b3e351b24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0\"" Apr 23 23:17:17.094705 containerd[1896]: time="2026-04-23T23:17:17.094188754Z" level=info msg="StartContainer for \"e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0\"" Apr 23 23:17:17.095548 containerd[1896]: time="2026-04-23T23:17:17.095510361Z" level=info msg="connecting to shim e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0" address="unix:///run/containerd/s/8cdfe13ca70b697ae96cc27c2b268c024f800dc37c09ee888d2f8ec0c2d5d240" protocol=ttrpc version=3 Apr 23 23:17:17.113842 systemd[1]: Started cri-containerd-e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0.scope - libcontainer container e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0. 
Apr 23 23:17:17.169765 containerd[1896]: time="2026-04-23T23:17:17.169581949Z" level=info msg="StartContainer for \"e103958ed07a2364aa10ace0c7954839b18374dbd0a168e0b2e0a1cda05b87d0\" returns successfully" Apr 23 23:17:17.493650 kubelet[3406]: I0423 23:17:17.492978 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58058497-bc8e-403e-b752-b973a96c5354-whisker-backend-key-pair\") pod \"58058497-bc8e-403e-b752-b973a96c5354\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " Apr 23 23:17:17.493650 kubelet[3406]: I0423 23:17:17.493030 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stdgm\" (UniqueName: \"kubernetes.io/projected/58058497-bc8e-403e-b752-b973a96c5354-kube-api-access-stdgm\") pod \"58058497-bc8e-403e-b752-b973a96c5354\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " Apr 23 23:17:17.493650 kubelet[3406]: I0423 23:17:17.493043 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-nginx-config\") pod \"58058497-bc8e-403e-b752-b973a96c5354\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " Apr 23 23:17:17.493650 kubelet[3406]: I0423 23:17:17.493059 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-whisker-ca-bundle\") pod \"58058497-bc8e-403e-b752-b973a96c5354\" (UID: \"58058497-bc8e-403e-b752-b973a96c5354\") " Apr 23 23:17:17.493650 kubelet[3406]: I0423 23:17:17.493328 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "58058497-bc8e-403e-b752-b973a96c5354" (UID: "58058497-bc8e-403e-b752-b973a96c5354"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:17:17.496314 systemd[1]: var-lib-kubelet-pods-58058497\x2dbc8e\x2d403e\x2db752\x2db973a96c5354-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 23 23:17:17.497083 kubelet[3406]: I0423 23:17:17.497057 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "58058497-bc8e-403e-b752-b973a96c5354" (UID: "58058497-bc8e-403e-b752-b973a96c5354"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:17:17.499304 systemd[1]: var-lib-kubelet-pods-58058497\x2dbc8e\x2d403e\x2db752\x2db973a96c5354-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstdgm.mount: Deactivated successfully. Apr 23 23:17:17.500165 kubelet[3406]: I0423 23:17:17.499727 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58058497-bc8e-403e-b752-b973a96c5354-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "58058497-bc8e-403e-b752-b973a96c5354" (UID: "58058497-bc8e-403e-b752-b973a96c5354"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 23:17:17.500784 kubelet[3406]: I0423 23:17:17.500760 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58058497-bc8e-403e-b752-b973a96c5354-kube-api-access-stdgm" (OuterVolumeSpecName: "kube-api-access-stdgm") pod "58058497-bc8e-403e-b752-b973a96c5354" (UID: "58058497-bc8e-403e-b752-b973a96c5354"). InnerVolumeSpecName "kube-api-access-stdgm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:17:17.594055 kubelet[3406]: I0423 23:17:17.594012 3406 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-stdgm\" (UniqueName: \"kubernetes.io/projected/58058497-bc8e-403e-b752-b973a96c5354-kube-api-access-stdgm\") on node \"ci-4459.2.4-n-357a044314\" DevicePath \"\"" Apr 23 23:17:17.594055 kubelet[3406]: I0423 23:17:17.594045 3406 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-nginx-config\") on node \"ci-4459.2.4-n-357a044314\" DevicePath \"\"" Apr 23 23:17:17.594055 kubelet[3406]: I0423 23:17:17.594052 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58058497-bc8e-403e-b752-b973a96c5354-whisker-ca-bundle\") on node \"ci-4459.2.4-n-357a044314\" DevicePath \"\"" Apr 23 23:17:17.594055 kubelet[3406]: I0423 23:17:17.594065 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58058497-bc8e-403e-b752-b973a96c5354-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-357a044314\" DevicePath \"\"" Apr 23 23:17:18.029627 systemd[1]: Removed slice kubepods-besteffort-pod58058497_bc8e_403e_b752_b973a96c5354.slice - libcontainer container kubepods-besteffort-pod58058497_bc8e_403e_b752_b973a96c5354.slice. 
Apr 23 23:17:18.061083 kubelet[3406]: I0423 23:17:18.060981 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b7n26" podStartSLOduration=4.773067291 podStartE2EDuration="21.060967783s" podCreationTimestamp="2026-04-23 23:16:57 +0000 UTC" firstStartedPulling="2026-04-23 23:16:57.863379219 +0000 UTC m=+19.151146079" lastFinishedPulling="2026-04-23 23:17:14.151279711 +0000 UTC m=+35.439046571" observedRunningTime="2026-04-23 23:17:18.045807864 +0000 UTC m=+39.333574732" watchObservedRunningTime="2026-04-23 23:17:18.060967783 +0000 UTC m=+39.348734643" Apr 23 23:17:18.132312 systemd[1]: Created slice kubepods-besteffort-podcac6def7_9646_4b82_8404_d619b81175a5.slice - libcontainer container kubepods-besteffort-podcac6def7_9646_4b82_8404_d619b81175a5.slice. Apr 23 23:17:18.198150 kubelet[3406]: I0423 23:17:18.197540 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cac6def7-9646-4b82-8404-d619b81175a5-nginx-config\") pod \"whisker-6b9c8bbf45-wqmj2\" (UID: \"cac6def7-9646-4b82-8404-d619b81175a5\") " pod="calico-system/whisker-6b9c8bbf45-wqmj2" Apr 23 23:17:18.198150 kubelet[3406]: I0423 23:17:18.197584 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac6def7-9646-4b82-8404-d619b81175a5-whisker-ca-bundle\") pod \"whisker-6b9c8bbf45-wqmj2\" (UID: \"cac6def7-9646-4b82-8404-d619b81175a5\") " pod="calico-system/whisker-6b9c8bbf45-wqmj2" Apr 23 23:17:18.198150 kubelet[3406]: I0423 23:17:18.197666 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbcl\" (UniqueName: \"kubernetes.io/projected/cac6def7-9646-4b82-8404-d619b81175a5-kube-api-access-jvbcl\") pod \"whisker-6b9c8bbf45-wqmj2\" (UID: \"cac6def7-9646-4b82-8404-d619b81175a5\") " 
pod="calico-system/whisker-6b9c8bbf45-wqmj2" Apr 23 23:17:18.198150 kubelet[3406]: I0423 23:17:18.197727 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cac6def7-9646-4b82-8404-d619b81175a5-whisker-backend-key-pair\") pod \"whisker-6b9c8bbf45-wqmj2\" (UID: \"cac6def7-9646-4b82-8404-d619b81175a5\") " pod="calico-system/whisker-6b9c8bbf45-wqmj2" Apr 23 23:17:18.437475 containerd[1896]: time="2026-04-23T23:17:18.437127954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9c8bbf45-wqmj2,Uid:cac6def7-9646-4b82-8404-d619b81175a5,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:18.625113 systemd-networkd[1464]: cali7c574bcac0c: Link UP Apr 23 23:17:18.627881 systemd-networkd[1464]: cali7c574bcac0c: Gained carrier Apr 23 23:17:18.649322 containerd[1896]: 2026-04-23 23:17:18.465 [ERROR][4505] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:17:18.649322 containerd[1896]: 2026-04-23 23:17:18.477 [INFO][4505] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0 whisker-6b9c8bbf45- calico-system cac6def7-9646-4b82-8404-d619b81175a5 926 0 2026-04-23 23:17:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b9c8bbf45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 whisker-6b9c8bbf45-wqmj2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7c574bcac0c [] [] }} ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" 
WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-" Apr 23 23:17:18.649322 containerd[1896]: 2026-04-23 23:17:18.477 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.649322 containerd[1896]: 2026-04-23 23:17:18.509 [INFO][4517] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" HandleID="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Workload="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.515 [INFO][4517] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" HandleID="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Workload="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ef410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"whisker-6b9c8bbf45-wqmj2", "timestamp":"2026-04-23 23:17:18.509436411 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000260dc0)} Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.515 [INFO][4517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.516 [INFO][4517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.516 [INFO][4517] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314' Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.518 [INFO][4517] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.523 [INFO][4517] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.528 [INFO][4517] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.530 [INFO][4517] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649514 containerd[1896]: 2026-04-23 23:17:18.534 [INFO][4517] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.534 [INFO][4517] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.537 [INFO][4517] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.549 [INFO][4517] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" 
host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.564 [INFO][4517] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.193/26] block=192.168.58.192/26 handle="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.564 [INFO][4517] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.193/26] handle="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.564 [INFO][4517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:17:18.649646 containerd[1896]: 2026-04-23 23:17:18.564 [INFO][4517] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.193/26] IPv6=[] ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" HandleID="k8s-pod-network.c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Workload="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.651457 containerd[1896]: 2026-04-23 23:17:18.571 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0", GenerateName:"whisker-6b9c8bbf45-", Namespace:"calico-system", SelfLink:"", UID:"cac6def7-9646-4b82-8404-d619b81175a5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b9c8bbf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"whisker-6b9c8bbf45-wqmj2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c574bcac0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:18.651457 containerd[1896]: 2026-04-23 23:17:18.571 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.193/32] ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.651523 containerd[1896]: 2026-04-23 23:17:18.571 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c574bcac0c ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.651523 containerd[1896]: 2026-04-23 23:17:18.626 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" 
WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.651552 containerd[1896]: 2026-04-23 23:17:18.628 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0", GenerateName:"whisker-6b9c8bbf45-", Namespace:"calico-system", SelfLink:"", UID:"cac6def7-9646-4b82-8404-d619b81175a5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b9c8bbf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd", Pod:"whisker-6b9c8bbf45-wqmj2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c574bcac0c", MAC:"ea:62:13:02:19:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:18.651586 
containerd[1896]: 2026-04-23 23:17:18.645 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" Namespace="calico-system" Pod="whisker-6b9c8bbf45-wqmj2" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-whisker--6b9c8bbf45--wqmj2-eth0" Apr 23 23:17:18.709227 containerd[1896]: time="2026-04-23T23:17:18.708786264Z" level=info msg="connecting to shim c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd" address="unix:///run/containerd/s/b13e8afdc09dc9662fe7938cbe80c80f84d408e6450f307b2dedbedf38d092f9" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:17:18.742558 systemd[1]: Started cri-containerd-c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd.scope - libcontainer container c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd. Apr 23 23:17:18.807564 containerd[1896]: time="2026-04-23T23:17:18.807517437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9c8bbf45-wqmj2,Uid:cac6def7-9646-4b82-8404-d619b81175a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd\"" Apr 23 23:17:18.810795 containerd[1896]: time="2026-04-23T23:17:18.810759577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 23 23:17:18.864688 kubelet[3406]: I0423 23:17:18.864604 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58058497-bc8e-403e-b752-b973a96c5354" path="/var/lib/kubelet/pods/58058497-bc8e-403e-b752-b973a96c5354/volumes" Apr 23 23:17:19.405006 systemd-networkd[1464]: vxlan.calico: Link UP Apr 23 23:17:19.406387 systemd-networkd[1464]: vxlan.calico: Gained carrier Apr 23 23:17:20.063918 systemd-networkd[1464]: cali7c574bcac0c: Gained IPv6LL Apr 23 23:17:20.442510 containerd[1896]: time="2026-04-23T23:17:20.442259429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:20.445162 containerd[1896]: time="2026-04-23T23:17:20.445127636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=5896864" Apr 23 23:17:20.447914 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL Apr 23 23:17:20.449649 containerd[1896]: time="2026-04-23T23:17:20.449053401Z" level=info msg="ImageCreate event name:\"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:20.454440 containerd[1896]: time="2026-04-23T23:17:20.454401664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:20.455061 containerd[1896]: time="2026-04-23T23:17:20.455038151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"8472495\" in 1.644238837s" Apr 23 23:17:20.455119 containerd[1896]: time="2026-04-23T23:17:20.455066184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\"" Apr 23 23:17:20.469176 containerd[1896]: time="2026-04-23T23:17:20.469137433Z" level=info msg="CreateContainer within sandbox \"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 23 23:17:20.485659 containerd[1896]: time="2026-04-23T23:17:20.485066716Z" level=info msg="Container 1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1: CDI 
devices from CRI Config.CDIDevices: []" Apr 23 23:17:20.508015 containerd[1896]: time="2026-04-23T23:17:20.507946457Z" level=info msg="CreateContainer within sandbox \"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1\"" Apr 23 23:17:20.511235 containerd[1896]: time="2026-04-23T23:17:20.510946748Z" level=info msg="StartContainer for \"1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1\"" Apr 23 23:17:20.512451 containerd[1896]: time="2026-04-23T23:17:20.512425025Z" level=info msg="connecting to shim 1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1" address="unix:///run/containerd/s/b13e8afdc09dc9662fe7938cbe80c80f84d408e6450f307b2dedbedf38d092f9" protocol=ttrpc version=3 Apr 23 23:17:20.531841 systemd[1]: Started cri-containerd-1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1.scope - libcontainer container 1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1. Apr 23 23:17:20.567988 containerd[1896]: time="2026-04-23T23:17:20.567914007Z" level=info msg="StartContainer for \"1a66a431639a4064e93dac26563bf29feb0dbe6ae088cf6387efe9009ad736f1\" returns successfully" Apr 23 23:17:20.569555 containerd[1896]: time="2026-04-23T23:17:20.569349651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 23 23:17:22.337597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194994263.mount: Deactivated successfully. 
Apr 23 23:17:22.392953 containerd[1896]: time="2026-04-23T23:17:22.392894278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:22.395902 containerd[1896]: time="2026-04-23T23:17:22.395847080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=15624823" Apr 23 23:17:22.399313 containerd[1896]: time="2026-04-23T23:17:22.399044859Z" level=info msg="ImageCreate event name:\"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:22.403146 containerd[1896]: time="2026-04-23T23:17:22.403101460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:22.403628 containerd[1896]: time="2026-04-23T23:17:22.403595662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"15624653\" in 1.83421301s" Apr 23 23:17:22.403813 containerd[1896]: time="2026-04-23T23:17:22.403722098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\"" Apr 23 23:17:22.411359 containerd[1896]: time="2026-04-23T23:17:22.411320963Z" level=info msg="CreateContainer within sandbox \"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 23 23:17:22.431491 
containerd[1896]: time="2026-04-23T23:17:22.430895425Z" level=info msg="Container 281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:22.447502 containerd[1896]: time="2026-04-23T23:17:22.447422698Z" level=info msg="CreateContainer within sandbox \"c489e4863bd0b7675299b4763dab0fd02e98beea4068a2bd452a4db08f8fcefd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae\"" Apr 23 23:17:22.448294 containerd[1896]: time="2026-04-23T23:17:22.448271056Z" level=info msg="StartContainer for \"281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae\"" Apr 23 23:17:22.450343 containerd[1896]: time="2026-04-23T23:17:22.450295321Z" level=info msg="connecting to shim 281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae" address="unix:///run/containerd/s/b13e8afdc09dc9662fe7938cbe80c80f84d408e6450f307b2dedbedf38d092f9" protocol=ttrpc version=3 Apr 23 23:17:22.470825 systemd[1]: Started cri-containerd-281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae.scope - libcontainer container 281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae. 
Apr 23 23:17:22.506224 containerd[1896]: time="2026-04-23T23:17:22.506172269Z" level=info msg="StartContainer for \"281ac96640feab2f114da5de82c7e4be072369e63707923cf5f17c649e16eeae\" returns successfully" Apr 23 23:17:23.063455 kubelet[3406]: I0423 23:17:23.063374 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b9c8bbf45-wqmj2" podStartSLOduration=1.468320126 podStartE2EDuration="5.063356666s" podCreationTimestamp="2026-04-23 23:17:18 +0000 UTC" firstStartedPulling="2026-04-23 23:17:18.809441202 +0000 UTC m=+40.097208062" lastFinishedPulling="2026-04-23 23:17:22.404477741 +0000 UTC m=+43.692244602" observedRunningTime="2026-04-23 23:17:23.063175387 +0000 UTC m=+44.350942279" watchObservedRunningTime="2026-04-23 23:17:23.063356666 +0000 UTC m=+44.351123526" Apr 23 23:17:27.861412 containerd[1896]: time="2026-04-23T23:17:27.861273923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544dbfb45c-sgmsk,Uid:e4976756-45b2-48c8-ace8-10ebe7175081,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:27.958144 systemd-networkd[1464]: calidab217662ac: Link UP Apr 23 23:17:27.959074 systemd-networkd[1464]: calidab217662ac: Gained carrier Apr 23 23:17:27.976946 containerd[1896]: 2026-04-23 23:17:27.898 [INFO][4941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0 calico-kube-controllers-544dbfb45c- calico-system e4976756-45b2-48c8-ace8-10ebe7175081 872 0 2026-04-23 23:16:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:544dbfb45c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 calico-kube-controllers-544dbfb45c-sgmsk eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calidab217662ac [] [] }} ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-" Apr 23 23:17:27.976946 containerd[1896]: 2026-04-23 23:17:27.899 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.976946 containerd[1896]: 2026-04-23 23:17:27.918 [INFO][4952] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" HandleID="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Workload="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.923 [INFO][4952] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" HandleID="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Workload="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffa00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"calico-kube-controllers-544dbfb45c-sgmsk", "timestamp":"2026-04-23 23:17:27.918080007 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000386f20)} Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.923 [INFO][4952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.923 [INFO][4952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.923 [INFO][4952] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314' Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.925 [INFO][4952] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.928 [INFO][4952] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.931 [INFO][4952] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.934 [INFO][4952] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977156 containerd[1896]: 2026-04-23 23:17:27.935 [INFO][4952] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.935 [INFO][4952] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.937 [INFO][4952] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.942 [INFO][4952] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.952 [INFO][4952] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.194/26] block=192.168.58.192/26 handle="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.952 [INFO][4952] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.194/26] handle="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.952 [INFO][4952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 23 23:17:27.977291 containerd[1896]: 2026-04-23 23:17:27.952 [INFO][4952] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.194/26] IPv6=[] ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" HandleID="k8s-pod-network.49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Workload="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.977431 containerd[1896]: 2026-04-23 23:17:27.954 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0", GenerateName:"calico-kube-controllers-544dbfb45c-", Namespace:"calico-system", SelfLink:"", UID:"e4976756-45b2-48c8-ace8-10ebe7175081", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544dbfb45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"calico-kube-controllers-544dbfb45c-sgmsk", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab217662ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:27.977467 containerd[1896]: 2026-04-23 23:17:27.954 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.194/32] ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.977467 containerd[1896]: 2026-04-23 23:17:27.954 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidab217662ac ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.977467 containerd[1896]: 2026-04-23 23:17:27.959 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:27.977511 containerd[1896]: 2026-04-23 23:17:27.961 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" Pod="calico-kube-controllers-544dbfb45c-sgmsk" 
WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0", GenerateName:"calico-kube-controllers-544dbfb45c-", Namespace:"calico-system", SelfLink:"", UID:"e4976756-45b2-48c8-ace8-10ebe7175081", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544dbfb45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f", Pod:"calico-kube-controllers-544dbfb45c-sgmsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab217662ac", MAC:"52:7c:65:fa:5b:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:27.977545 containerd[1896]: 2026-04-23 23:17:27.975 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" Namespace="calico-system" 
Pod="calico-kube-controllers-544dbfb45c-sgmsk" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--kube--controllers--544dbfb45c--sgmsk-eth0" Apr 23 23:17:28.041117 containerd[1896]: time="2026-04-23T23:17:28.041079165Z" level=info msg="connecting to shim 49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f" address="unix:///run/containerd/s/94992db7e2ba0f388910592119143d36418b25c7e494d3da5917306c662af358" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:17:28.064843 systemd[1]: Started cri-containerd-49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f.scope - libcontainer container 49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f. Apr 23 23:17:28.099656 containerd[1896]: time="2026-04-23T23:17:28.099604818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544dbfb45c-sgmsk,Uid:e4976756-45b2-48c8-ace8-10ebe7175081,Namespace:calico-system,Attempt:0,} returns sandbox id \"49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f\"" Apr 23 23:17:28.101195 containerd[1896]: time="2026-04-23T23:17:28.101158934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 23 23:17:28.864466 containerd[1896]: time="2026-04-23T23:17:28.864273072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd28b,Uid:dc579ba9-ac2c-4e73-9991-4e4cc7b4cece,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:28.865124 containerd[1896]: time="2026-04-23T23:17:28.864972751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-ws8th,Uid:d38ee980-eebf-48ac-9c4c-b3424746be66,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:28.999944 systemd-networkd[1464]: cali62a00f91b2b: Link UP Apr 23 23:17:29.000745 systemd-networkd[1464]: cali62a00f91b2b: Gained carrier Apr 23 23:17:29.020131 containerd[1896]: 2026-04-23 23:17:28.915 [INFO][5016] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0 csi-node-driver- calico-system dc579ba9-ac2c-4e73-9991-4e4cc7b4cece 725 0 2026-04-23 23:16:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:74865c565 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 csi-node-driver-jd28b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali62a00f91b2b [] [] }} ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-" Apr 23 23:17:29.020131 containerd[1896]: 2026-04-23 23:17:28.916 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.020131 containerd[1896]: 2026-04-23 23:17:28.951 [INFO][5041] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" HandleID="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Workload="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.961 [INFO][5041] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" HandleID="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Workload="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x40002ffa00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"csi-node-driver-jd28b", "timestamp":"2026-04-23 23:17:28.951746517 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b5080)} Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.961 [INFO][5041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.961 [INFO][5041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.961 [INFO][5041] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314' Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.963 [INFO][5041] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.966 [INFO][5041] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.970 [INFO][5041] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.971 [INFO][5041] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020675 containerd[1896]: 2026-04-23 23:17:28.972 [INFO][5041] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.972 [INFO][5041] ipam/ipam.go 1245: 
Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.974 [INFO][5041] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72 Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.982 [INFO][5041] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.991 [INFO][5041] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.195/26] block=192.168.58.192/26 handle="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.992 [INFO][5041] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.195/26] handle="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.992 [INFO][5041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 23 23:17:29.020937 containerd[1896]: 2026-04-23 23:17:28.992 [INFO][5041] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.195/26] IPv6=[] ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" HandleID="k8s-pod-network.633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Workload="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.021038 containerd[1896]: 2026-04-23 23:17:28.996 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"csi-node-driver-jd28b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali62a00f91b2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:29.021075 containerd[1896]: 2026-04-23 23:17:28.996 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.195/32] ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.021075 containerd[1896]: 2026-04-23 23:17:28.996 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62a00f91b2b ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.021075 containerd[1896]: 2026-04-23 23:17:29.000 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.021119 containerd[1896]: 2026-04-23 23:17:29.001 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"dc579ba9-ac2c-4e73-9991-4e4cc7b4cece", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72", Pod:"csi-node-driver-jd28b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali62a00f91b2b", MAC:"a2:5c:65:3d:ea:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:29.021154 containerd[1896]: 2026-04-23 23:17:29.016 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" Namespace="calico-system" Pod="csi-node-driver-jd28b" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-csi--node--driver--jd28b-eth0" Apr 23 23:17:29.069383 containerd[1896]: time="2026-04-23T23:17:29.069337088Z" level=info msg="connecting to shim 633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72" address="unix:///run/containerd/s/060b12279431de33de0e2b75dff9dd7d345afa7835cb0f58afea09eea70f5dd1" namespace=k8s.io protocol=ttrpc 
version=3 Apr 23 23:17:29.105964 systemd[1]: Started cri-containerd-633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72.scope - libcontainer container 633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72. Apr 23 23:17:29.121050 systemd-networkd[1464]: cali9b681c23a75: Link UP Apr 23 23:17:29.122545 systemd-networkd[1464]: cali9b681c23a75: Gained carrier Apr 23 23:17:29.146011 containerd[1896]: 2026-04-23 23:17:28.917 [INFO][5024] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0 goldmane-57885fdd4c- calico-system d38ee980-eebf-48ac-9c4c-b3424746be66 866 0 2026-04-23 23:16:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:57885fdd4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 goldmane-57885fdd4c-ws8th eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9b681c23a75 [] [] }} ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-" Apr 23 23:17:29.146011 containerd[1896]: 2026-04-23 23:17:28.918 [INFO][5024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146011 containerd[1896]: 2026-04-23 23:17:28.956 [INFO][5039] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" HandleID="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" 
Workload="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:28.962 [INFO][5039] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" HandleID="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Workload="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ff2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"goldmane-57885fdd4c-ws8th", "timestamp":"2026-04-23 23:17:28.956214137 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b3080)} Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:28.962 [INFO][5039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:28.992 [INFO][5039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:28.992 [INFO][5039] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314' Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:29.066 [INFO][5039] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:29.075 [INFO][5039] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:29.085 [INFO][5039] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:29.089 [INFO][5039] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146195 containerd[1896]: 2026-04-23 23:17:29.092 [INFO][5039] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.092 [INFO][5039] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.093 [INFO][5039] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.102 [INFO][5039] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.112 [INFO][5039] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.196/26] block=192.168.58.192/26 handle="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.113 [INFO][5039] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.196/26] handle="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.113 [INFO][5039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:17:29.146507 containerd[1896]: 2026-04-23 23:17:29.113 [INFO][5039] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.196/26] IPv6=[] ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" HandleID="k8s-pod-network.28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Workload="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146742 containerd[1896]: 2026-04-23 23:17:29.116 [INFO][5024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"d38ee980-eebf-48ac-9c4c-b3424746be66", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"goldmane-57885fdd4c-ws8th", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b681c23a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:29.146787 containerd[1896]: 2026-04-23 23:17:29.116 [INFO][5024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.196/32] ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146787 containerd[1896]: 2026-04-23 23:17:29.116 [INFO][5024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b681c23a75 ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146787 containerd[1896]: 2026-04-23 23:17:29.123 [INFO][5024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.146833 containerd[1896]: 2026-04-23 23:17:29.123 [INFO][5024] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"d38ee980-eebf-48ac-9c4c-b3424746be66", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b", Pod:"goldmane-57885fdd4c-ws8th", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b681c23a75", MAC:"a6:56:06:23:12:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:29.146867 containerd[1896]: 2026-04-23 23:17:29.142 [INFO][5024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" Namespace="calico-system" Pod="goldmane-57885fdd4c-ws8th" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-goldmane--57885fdd4c--ws8th-eth0" Apr 23 23:17:29.153063 systemd-networkd[1464]: calidab217662ac: Gained IPv6LL Apr 23 23:17:29.155035 containerd[1896]: time="2026-04-23T23:17:29.154941415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd28b,Uid:dc579ba9-ac2c-4e73-9991-4e4cc7b4cece,Namespace:calico-system,Attempt:0,} returns sandbox id \"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72\"" Apr 23 23:17:29.195785 containerd[1896]: time="2026-04-23T23:17:29.195747736Z" level=info msg="connecting to shim 28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b" address="unix:///run/containerd/s/c82ed3e8ac71e4db25a8048473a6b83634430469b8959b9fbddfb5f488eb5f8f" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:17:29.217854 systemd[1]: Started cri-containerd-28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b.scope - libcontainer container 28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b. 
Apr 23 23:17:29.251401 containerd[1896]: time="2026-04-23T23:17:29.251355108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-ws8th,Uid:d38ee980-eebf-48ac-9c4c-b3424746be66,Namespace:calico-system,Attempt:0,} returns sandbox id \"28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b\"" Apr 23 23:17:29.862711 containerd[1896]: time="2026-04-23T23:17:29.862656620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bwz4,Uid:18471ba6-9ba9-4c30-a917-4310db7d988d,Namespace:kube-system,Attempt:0,}" Apr 23 23:17:30.121392 systemd-networkd[1464]: cali8007fa78789: Link UP Apr 23 23:17:30.121728 systemd-networkd[1464]: cali8007fa78789: Gained carrier Apr 23 23:17:30.142823 containerd[1896]: 2026-04-23 23:17:30.038 [INFO][5170] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0 coredns-674b8bbfcf- kube-system 18471ba6-9ba9-4c30-a917-4310db7d988d 862 0 2026-04-23 23:16:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 coredns-674b8bbfcf-6bwz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8007fa78789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-" Apr 23 23:17:30.142823 containerd[1896]: 2026-04-23 23:17:30.038 [INFO][5170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" 
WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.142823 containerd[1896]: 2026-04-23 23:17:30.065 [INFO][5186] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" HandleID="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.075 [INFO][5186] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" HandleID="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ef410), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-357a044314", "pod":"coredns-674b8bbfcf-6bwz4", "timestamp":"2026-04-23 23:17:30.065765555 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000260dc0)} Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.076 [INFO][5186] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.076 [INFO][5186] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.076 [INFO][5186] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314' Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.079 [INFO][5186] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.083 [INFO][5186] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.088 [INFO][5186] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.090 [INFO][5186] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143487 containerd[1896]: 2026-04-23 23:17:30.092 [INFO][5186] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.093 [INFO][5186] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.094 [INFO][5186] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206 Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.099 [INFO][5186] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.113 [INFO][5186] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.197/26] block=192.168.58.192/26 handle="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.113 [INFO][5186] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.197/26] handle="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" host="ci-4459.2.4-n-357a044314" Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.113 [INFO][5186] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:17:30.143638 containerd[1896]: 2026-04-23 23:17:30.113 [INFO][5186] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.197/26] IPv6=[] ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" HandleID="k8s-pod-network.3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.117 [INFO][5170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"18471ba6-9ba9-4c30-a917-4310db7d988d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"coredns-674b8bbfcf-6bwz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8007fa78789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.117 [INFO][5170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.197/32] ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.117 [INFO][5170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8007fa78789 ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.122 [INFO][5170] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.123 [INFO][5170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"18471ba6-9ba9-4c30-a917-4310db7d988d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206", Pod:"coredns-674b8bbfcf-6bwz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8007fa78789", MAC:"52:09:e4:c8:49:cb", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:17:30.144469 containerd[1896]: 2026-04-23 23:17:30.139 [INFO][5170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bwz4" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--6bwz4-eth0" Apr 23 23:17:30.204053 containerd[1896]: time="2026-04-23T23:17:30.203975091Z" level=info msg="connecting to shim 3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206" address="unix:///run/containerd/s/aace8c086a7335468d754687ce4226c1c2ba227cf796d64d81073916ecc26a11" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:17:30.238841 systemd[1]: Started cri-containerd-3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206.scope - libcontainer container 3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206. 
Apr 23 23:17:30.285383 containerd[1896]: time="2026-04-23T23:17:30.285346653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bwz4,Uid:18471ba6-9ba9-4c30-a917-4310db7d988d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206\"" Apr 23 23:17:30.297107 containerd[1896]: time="2026-04-23T23:17:30.296895316Z" level=info msg="CreateContainer within sandbox \"3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 23 23:17:30.304058 systemd-networkd[1464]: cali9b681c23a75: Gained IPv6LL Apr 23 23:17:30.326977 containerd[1896]: time="2026-04-23T23:17:30.326924360Z" level=info msg="Container dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:30.344689 containerd[1896]: time="2026-04-23T23:17:30.344640987Z" level=info msg="CreateContainer within sandbox \"3708a38f34bc5d674c520fb7f9c85c1b7801535706016278d3c0538521aee206\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99\"" Apr 23 23:17:30.345497 containerd[1896]: time="2026-04-23T23:17:30.345422005Z" level=info msg="StartContainer for \"dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99\"" Apr 23 23:17:30.346596 containerd[1896]: time="2026-04-23T23:17:30.346358396Z" level=info msg="connecting to shim dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99" address="unix:///run/containerd/s/aace8c086a7335468d754687ce4226c1c2ba227cf796d64d81073916ecc26a11" protocol=ttrpc version=3 Apr 23 23:17:30.369829 systemd[1]: Started cri-containerd-dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99.scope - libcontainer container dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99. 
Apr 23 23:17:30.420101 containerd[1896]: time="2026-04-23T23:17:30.419946797Z" level=info msg="StartContainer for \"dc105686a0dae8489eb8403c4be7e05d30a64acc4e68a0ee06d8c6cc3407ef99\" returns successfully" Apr 23 23:17:30.497150 systemd-networkd[1464]: cali62a00f91b2b: Gained IPv6LL Apr 23 23:17:30.661426 containerd[1896]: time="2026-04-23T23:17:30.661358914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:30.664044 containerd[1896]: time="2026-04-23T23:17:30.664009010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=46169343" Apr 23 23:17:30.669530 containerd[1896]: time="2026-04-23T23:17:30.669496976Z" level=info msg="ImageCreate event name:\"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:30.675988 containerd[1896]: time="2026-04-23T23:17:30.675842779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:30.676430 containerd[1896]: time="2026-04-23T23:17:30.676388693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"48744950\" in 2.575199198s" Apr 23 23:17:30.676430 containerd[1896]: time="2026-04-23T23:17:30.676420414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\"" 
Apr 23 23:17:30.677159 containerd[1896]: time="2026-04-23T23:17:30.677139326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 23 23:17:30.690013 containerd[1896]: time="2026-04-23T23:17:30.689983936Z" level=info msg="CreateContainer within sandbox \"49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 23 23:17:30.708412 containerd[1896]: time="2026-04-23T23:17:30.708367785Z" level=info msg="Container bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:30.725229 containerd[1896]: time="2026-04-23T23:17:30.725180583Z" level=info msg="CreateContainer within sandbox \"49c817f3b9cf0ce1e7d81a21188c8c2117164a6c96248657db29f5f7c34afc7f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21\"" Apr 23 23:17:30.725882 containerd[1896]: time="2026-04-23T23:17:30.725770250Z" level=info msg="StartContainer for \"bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21\"" Apr 23 23:17:30.727425 containerd[1896]: time="2026-04-23T23:17:30.727388488Z" level=info msg="connecting to shim bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21" address="unix:///run/containerd/s/94992db7e2ba0f388910592119143d36418b25c7e494d3da5917306c662af358" protocol=ttrpc version=3 Apr 23 23:17:30.740822 systemd[1]: Started cri-containerd-bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21.scope - libcontainer container bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21. 
Apr 23 23:17:30.780107 containerd[1896]: time="2026-04-23T23:17:30.780060739Z" level=info msg="StartContainer for \"bce2b5aee694932235e33d9f0e69d8dcaf3e59b3933a63067a265f4742f27b21\" returns successfully" Apr 23 23:17:30.863702 containerd[1896]: time="2026-04-23T23:17:30.863046899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p88vb,Uid:8665c021-af05-4797-988c-fc76f1cfacc5,Namespace:kube-system,Attempt:0,}" Apr 23 23:17:30.863702 containerd[1896]: time="2026-04-23T23:17:30.863309819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-vjrjh,Uid:07bc0a08-4eeb-4eef-b101-e431b1d35421,Namespace:calico-system,Attempt:0,}" Apr 23 23:17:30.876183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157599428.mount: Deactivated successfully. Apr 23 23:17:30.990109 systemd-networkd[1464]: cali572f1c7346f: Link UP Apr 23 23:17:30.990231 systemd-networkd[1464]: cali572f1c7346f: Gained carrier Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.919 [INFO][5329] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0 coredns-674b8bbfcf- kube-system 8665c021-af05-4797-988c-fc76f1cfacc5 870 0 2026-04-23 23:16:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 coredns-674b8bbfcf-p88vb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali572f1c7346f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-" Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.919 [INFO][5329] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0" Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.943 [INFO][5354] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" HandleID="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0" Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.953 [INFO][5354] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" HandleID="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-357a044314", "pod":"coredns-674b8bbfcf-p88vb", "timestamp":"2026-04-23 23:17:30.943982887 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000555ce0)} Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.953 [INFO][5354] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.953 [INFO][5354] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.953 [INFO][5354] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314'
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.956 [INFO][5354] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.961 [INFO][5354] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.964 [INFO][5354] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.965 [INFO][5354] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.967 [INFO][5354] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.967 [INFO][5354] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.968 [INFO][5354] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.973 [INFO][5354] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.983 [INFO][5354] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.198/26] block=192.168.58.192/26 handle="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.983 [INFO][5354] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.198/26] handle="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.983 [INFO][5354] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 23 23:17:31.011510 containerd[1896]: 2026-04-23 23:17:30.983 [INFO][5354] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.198/26] IPv6=[] ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" HandleID="k8s-pod-network.0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Workload="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0"
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:30.987 [INFO][5329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8665c021-af05-4797-988c-fc76f1cfacc5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"coredns-674b8bbfcf-p88vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali572f1c7346f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:30.987 [INFO][5329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.198/32] ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0"
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:30.987 [INFO][5329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali572f1c7346f ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0"
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:30.993 [INFO][5329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0"
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:30.993 [INFO][5329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8665c021-af05-4797-988c-fc76f1cfacc5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5", Pod:"coredns-674b8bbfcf-p88vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali572f1c7346f", MAC:"72:19:9a:0c:72:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:31.011950 containerd[1896]: 2026-04-23 23:17:31.008 [INFO][5329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-p88vb" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-coredns--674b8bbfcf--p88vb-eth0"
Apr 23 23:17:31.055665 containerd[1896]: time="2026-04-23T23:17:31.055625621Z" level=info msg="connecting to shim 0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5" address="unix:///run/containerd/s/84a650ac23e5368980fd262b6b055c0e8dd67cf45b5c433b10efd7edfeceed2f" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:17:31.097778 kubelet[3406]: I0423 23:17:31.097539 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-544dbfb45c-sgmsk" podStartSLOduration=31.521451319 podStartE2EDuration="34.097524026s" podCreationTimestamp="2026-04-23 23:16:57 +0000 UTC" firstStartedPulling="2026-04-23 23:17:28.100952663 +0000 UTC m=+49.388719523" lastFinishedPulling="2026-04-23 23:17:30.67702537 +0000 UTC m=+51.964792230" observedRunningTime="2026-04-23 23:17:31.097405638 +0000 UTC m=+52.385172498" watchObservedRunningTime="2026-04-23 23:17:31.097524026 +0000 UTC m=+52.385290886"
Apr 23 23:17:31.098853 systemd[1]: Started cri-containerd-0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5.scope - libcontainer container 0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5.
Apr 23 23:17:31.113751 systemd-networkd[1464]: calia666ed78ebd: Link UP
Apr 23 23:17:31.113922 systemd-networkd[1464]: calia666ed78ebd: Gained carrier
Apr 23 23:17:31.142587 kubelet[3406]: I0423 23:17:31.142527 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6bwz4" podStartSLOduration=47.14251139 podStartE2EDuration="47.14251139s" podCreationTimestamp="2026-04-23 23:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:17:31.118139254 +0000 UTC m=+52.405906146" watchObservedRunningTime="2026-04-23 23:17:31.14251139 +0000 UTC m=+52.430278250"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.919 [INFO][5339] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0 calico-apiserver-769c4fbb8- calico-system 07bc0a08-4eeb-4eef-b101-e431b1d35421 875 0 2026-04-23 23:16:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769c4fbb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 calico-apiserver-769c4fbb8-vjrjh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia666ed78ebd [] [] }} ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.919 [INFO][5339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.946 [INFO][5352] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" HandleID="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.955 [INFO][5352] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" HandleID="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ef910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"calico-apiserver-769c4fbb8-vjrjh", "timestamp":"2026-04-23 23:17:30.946886431 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a6dc0)}
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.955 [INFO][5352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.984 [INFO][5352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:30.984 [INFO][5352] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314'
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.057 [INFO][5352] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.065 [INFO][5352] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.070 [INFO][5352] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.073 [INFO][5352] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.075 [INFO][5352] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.075 [INFO][5352] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.078 [INFO][5352] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.086 [INFO][5352] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.104 [INFO][5352] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.199/26] block=192.168.58.192/26 handle="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.104 [INFO][5352] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.199/26] handle="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.104 [INFO][5352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 23 23:17:31.145575 containerd[1896]: 2026-04-23 23:17:31.104 [INFO][5352] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.199/26] IPv6=[] ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" HandleID="k8s-pod-network.d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.108 [INFO][5339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0", GenerateName:"calico-apiserver-769c4fbb8-", Namespace:"calico-system", SelfLink:"", UID:"07bc0a08-4eeb-4eef-b101-e431b1d35421", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769c4fbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"calico-apiserver-769c4fbb8-vjrjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia666ed78ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.108 [INFO][5339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.199/32] ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.108 [INFO][5339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia666ed78ebd ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.111 [INFO][5339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.114 [INFO][5339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0", GenerateName:"calico-apiserver-769c4fbb8-", Namespace:"calico-system", SelfLink:"", UID:"07bc0a08-4eeb-4eef-b101-e431b1d35421", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769c4fbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0", Pod:"calico-apiserver-769c4fbb8-vjrjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia666ed78ebd", MAC:"5a:c7:61:7e:c9:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:31.146854 containerd[1896]: 2026-04-23 23:17:31.143 [INFO][5339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-vjrjh" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--vjrjh-eth0"
Apr 23 23:17:31.180466 containerd[1896]: time="2026-04-23T23:17:31.180118117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p88vb,Uid:8665c021-af05-4797-988c-fc76f1cfacc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5\""
Apr 23 23:17:31.199239 containerd[1896]: time="2026-04-23T23:17:31.198315401Z" level=info msg="CreateContainer within sandbox \"0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 23 23:17:31.200163 containerd[1896]: time="2026-04-23T23:17:31.200121885Z" level=info msg="connecting to shim d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0" address="unix:///run/containerd/s/c9b84b2e41363eb3abdfb92558fc3ce633fe2276f5b072791d37709f3b5669d7" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:17:31.230212 systemd[1]: Started cri-containerd-d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0.scope - libcontainer container d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0.
Apr 23 23:17:31.231311 containerd[1896]: time="2026-04-23T23:17:31.231274430Z" level=info msg="Container b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:31.245277 containerd[1896]: time="2026-04-23T23:17:31.245235317Z" level=info msg="CreateContainer within sandbox \"0116aa7b54554aa2fc659cb134ebaf3caf4f64d541a470964e3135ac6b8a10c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f\""
Apr 23 23:17:31.246260 containerd[1896]: time="2026-04-23T23:17:31.246207645Z" level=info msg="StartContainer for \"b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f\""
Apr 23 23:17:31.247334 containerd[1896]: time="2026-04-23T23:17:31.247300441Z" level=info msg="connecting to shim b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f" address="unix:///run/containerd/s/84a650ac23e5368980fd262b6b055c0e8dd67cf45b5c433b10efd7edfeceed2f" protocol=ttrpc version=3
Apr 23 23:17:31.268828 systemd[1]: Started cri-containerd-b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f.scope - libcontainer container b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f.
Apr 23 23:17:31.276982 containerd[1896]: time="2026-04-23T23:17:31.276945824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-vjrjh,Uid:07bc0a08-4eeb-4eef-b101-e431b1d35421,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0\""
Apr 23 23:17:31.303862 containerd[1896]: time="2026-04-23T23:17:31.303740305Z" level=info msg="StartContainer for \"b4e0950b97898e28d94855271b303bfc664e9949d58c5e128482259fad90eb6f\" returns successfully"
Apr 23 23:17:31.862151 containerd[1896]: time="2026-04-23T23:17:31.862107821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-zqlhl,Uid:023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1,Namespace:calico-system,Attempt:0,}"
Apr 23 23:17:31.986358 systemd-networkd[1464]: cali7fd51be83e2: Link UP
Apr 23 23:17:31.989102 systemd-networkd[1464]: cali7fd51be83e2: Gained carrier
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.908 [INFO][5516] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0 calico-apiserver-769c4fbb8- calico-system 023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1 873 0 2026-04-23 23:16:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769c4fbb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-357a044314 calico-apiserver-769c4fbb8-zqlhl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7fd51be83e2 [] [] }} ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.908 [INFO][5516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.930 [INFO][5528] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" HandleID="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.939 [INFO][5528] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" HandleID="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-357a044314", "pod":"calico-apiserver-769c4fbb8-zqlhl", "timestamp":"2026-04-23 23:17:31.930314483 +0000 UTC"}, Hostname:"ci-4459.2.4-n-357a044314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001886e0)}
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.939 [INFO][5528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.939 [INFO][5528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.939 [INFO][5528] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-357a044314'
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.942 [INFO][5528] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.946 [INFO][5528] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.952 [INFO][5528] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.954 [INFO][5528] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.957 [INFO][5528] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.957 [INFO][5528] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.958 [INFO][5528] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.964 [INFO][5528] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.975 [INFO][5528] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.200/26] block=192.168.58.192/26 handle="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.976 [INFO][5528] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.200/26] handle="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" host="ci-4459.2.4-n-357a044314"
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.976 [INFO][5528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 23 23:17:32.018729 containerd[1896]: 2026-04-23 23:17:31.976 [INFO][5528] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.200/26] IPv6=[] ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" HandleID="k8s-pod-network.99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Workload="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:31.979 [INFO][5516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0", GenerateName:"calico-apiserver-769c4fbb8-", Namespace:"calico-system", SelfLink:"", UID:"023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769c4fbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"", Pod:"calico-apiserver-769c4fbb8-zqlhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fd51be83e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:31.979 [INFO][5516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.200/32] ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:31.979 [INFO][5516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fd51be83e2 ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:31.990 [INFO][5516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:31.994 [INFO][5516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0", GenerateName:"calico-apiserver-769c4fbb8-", Namespace:"calico-system", SelfLink:"", UID:"023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769c4fbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-357a044314", ContainerID:"99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a", Pod:"calico-apiserver-769c4fbb8-zqlhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fd51be83e2", MAC:"06:b4:7b:b4:89:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 23 23:17:32.019920 containerd[1896]: 2026-04-23 23:17:32.015 [INFO][5516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" Namespace="calico-system" Pod="calico-apiserver-769c4fbb8-zqlhl" WorkloadEndpoint="ci--4459.2.4--n--357a044314-k8s-calico--apiserver--769c4fbb8--zqlhl-eth0"
Apr 23 23:17:32.033075 systemd-networkd[1464]: cali8007fa78789: Gained IPv6LL
Apr 23 23:17:32.080179 containerd[1896]: time="2026-04-23T23:17:32.079804889Z" level=info msg="connecting to shim 99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a" address="unix:///run/containerd/s/838270e433e163515a75d46f5717567bf6c7b0cd35a9c9c39f2c73344bc34cac" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:17:32.080518 containerd[1896]: time="2026-04-23T23:17:32.080484647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:17:32.083537 containerd[1896]: time="2026-04-23T23:17:32.083479523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=7895994"
Apr 23 23:17:32.087727 containerd[1896]: time="2026-04-23T23:17:32.087019416Z" level=info msg="ImageCreate event name:\"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:17:32.097671 containerd[1896]: time="2026-04-23T23:17:32.097624800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:17:32.099370 containerd[1896]: time="2026-04-23T23:17:32.099336248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"10471633\" in 1.421742684s"
Apr 23 23:17:32.099480 containerd[1896]: time="2026-04-23T23:17:32.099461501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\""
Apr 23 23:17:32.101790 containerd[1896]: time="2026-04-23T23:17:32.101020832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\""
Apr 23 23:17:32.120599 kubelet[3406]: I0423 23:17:32.120490 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p88vb" podStartSLOduration=48.120470597 podStartE2EDuration="48.120470597s" podCreationTimestamp="2026-04-23 23:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:17:32.117455377 +0000 UTC m=+53.405222245" watchObservedRunningTime="2026-04-23 23:17:32.120470597 +0000 UTC m=+53.408237457"
Apr 23 23:17:32.123413 containerd[1896]: time="2026-04-23T23:17:32.123166223Z" level=info msg="CreateContainer within sandbox \"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 23 23:17:32.129196 systemd[1]: Started cri-containerd-99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a.scope - libcontainer container 99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a.
Apr 23 23:17:32.267564 containerd[1896]: time="2026-04-23T23:17:32.267522602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769c4fbb8-zqlhl,Uid:023d6ef6-97f0-4853-ac4d-ffc0ba1b54d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a\"" Apr 23 23:17:32.287862 systemd-networkd[1464]: cali572f1c7346f: Gained IPv6LL Apr 23 23:17:32.289228 containerd[1896]: time="2026-04-23T23:17:32.288412102Z" level=info msg="Container f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:32.310799 containerd[1896]: time="2026-04-23T23:17:32.310748851Z" level=info msg="CreateContainer within sandbox \"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb\"" Apr 23 23:17:32.311560 containerd[1896]: time="2026-04-23T23:17:32.311379304Z" level=info msg="StartContainer for \"f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb\"" Apr 23 23:17:32.313902 containerd[1896]: time="2026-04-23T23:17:32.313866203Z" level=info msg="connecting to shim f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb" address="unix:///run/containerd/s/060b12279431de33de0e2b75dff9dd7d345afa7835cb0f58afea09eea70f5dd1" protocol=ttrpc version=3 Apr 23 23:17:32.331816 systemd[1]: Started cri-containerd-f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb.scope - libcontainer container f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb. 
Apr 23 23:17:32.381631 containerd[1896]: time="2026-04-23T23:17:32.381459140Z" level=info msg="StartContainer for \"f109dc25c4e67b4a3fb483dc0f4e379aeb1ae9034c4449ded10594dd244a0aeb\" returns successfully" Apr 23 23:17:32.863886 systemd-networkd[1464]: calia666ed78ebd: Gained IPv6LL Apr 23 23:17:33.503908 systemd-networkd[1464]: cali7fd51be83e2: Gained IPv6LL Apr 23 23:17:34.021277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188404114.mount: Deactivated successfully. Apr 23 23:17:34.329413 containerd[1896]: time="2026-04-23T23:17:34.328817085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:34.332815 containerd[1896]: time="2026-04-23T23:17:34.332786689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=48513326" Apr 23 23:17:34.335988 containerd[1896]: time="2026-04-23T23:17:34.335948521Z" level=info msg="ImageCreate event name:\"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:34.343551 containerd[1896]: time="2026-04-23T23:17:34.343511819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:34.344188 containerd[1896]: time="2026-04-23T23:17:34.343955827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"48513172\" in 2.242904594s" Apr 23 23:17:34.344188 containerd[1896]: time="2026-04-23T23:17:34.344102592Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\"" Apr 23 23:17:34.345062 containerd[1896]: time="2026-04-23T23:17:34.345035785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:17:34.353040 containerd[1896]: time="2026-04-23T23:17:34.353003762Z" level=info msg="CreateContainer within sandbox \"28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 23 23:17:34.375313 containerd[1896]: time="2026-04-23T23:17:34.374865916Z" level=info msg="Container 2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:34.393937 containerd[1896]: time="2026-04-23T23:17:34.393808224Z" level=info msg="CreateContainer within sandbox \"28f50467d6d66dadd9cbc044434a1215502c8f0946bb0f3f8b3ec8658dc4976b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1\"" Apr 23 23:17:34.394791 containerd[1896]: time="2026-04-23T23:17:34.394559883Z" level=info msg="StartContainer for \"2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1\"" Apr 23 23:17:34.395600 containerd[1896]: time="2026-04-23T23:17:34.395571806Z" level=info msg="connecting to shim 2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1" address="unix:///run/containerd/s/c82ed3e8ac71e4db25a8048473a6b83634430469b8959b9fbddfb5f488eb5f8f" protocol=ttrpc version=3 Apr 23 23:17:34.415839 systemd[1]: Started cri-containerd-2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1.scope - libcontainer container 2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1. 
Apr 23 23:17:34.458306 containerd[1896]: time="2026-04-23T23:17:34.458214775Z" level=info msg="StartContainer for \"2c86e73b17843c924bf31f2019f486d245f42e29a8bdd6539e1c6e71a145cba1\" returns successfully" Apr 23 23:17:37.703163 containerd[1896]: time="2026-04-23T23:17:37.702590581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:37.705732 containerd[1896]: time="2026-04-23T23:17:37.705700347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=42617669" Apr 23 23:17:37.708768 containerd[1896]: time="2026-04-23T23:17:37.708724645Z" level=info msg="ImageCreate event name:\"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:37.712778 containerd[1896]: time="2026-04-23T23:17:37.712730803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:37.713462 containerd[1896]: time="2026-04-23T23:17:37.713211068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 3.368144241s" Apr 23 23:17:37.713462 containerd[1896]: time="2026-04-23T23:17:37.713242821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 23 23:17:37.714775 containerd[1896]: time="2026-04-23T23:17:37.714723849Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:17:37.721242 containerd[1896]: time="2026-04-23T23:17:37.720766742Z" level=info msg="CreateContainer within sandbox \"d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:17:37.741969 containerd[1896]: time="2026-04-23T23:17:37.741921320Z" level=info msg="Container 5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:37.743975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159336966.mount: Deactivated successfully. Apr 23 23:17:37.762925 containerd[1896]: time="2026-04-23T23:17:37.762880211Z" level=info msg="CreateContainer within sandbox \"d1ef409d2d5aa551b3220aeed885799bab6ea40d845fef6e73c8024c38f5f7e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b\"" Apr 23 23:17:37.763942 containerd[1896]: time="2026-04-23T23:17:37.763885406Z" level=info msg="StartContainer for \"5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b\"" Apr 23 23:17:37.765552 containerd[1896]: time="2026-04-23T23:17:37.765528192Z" level=info msg="connecting to shim 5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b" address="unix:///run/containerd/s/c9b84b2e41363eb3abdfb92558fc3ce633fe2276f5b072791d37709f3b5669d7" protocol=ttrpc version=3 Apr 23 23:17:37.788818 systemd[1]: Started cri-containerd-5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b.scope - libcontainer container 5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b. 
Apr 23 23:17:37.824591 containerd[1896]: time="2026-04-23T23:17:37.824493663Z" level=info msg="StartContainer for \"5c6340240394eec65d3867bdb16274b0ea6c19c6daf527a44f605b3173aad90b\" returns successfully" Apr 23 23:17:38.067701 containerd[1896]: time="2026-04-23T23:17:38.067405986Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:38.070983 containerd[1896]: time="2026-04-23T23:17:38.070751712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 23 23:17:38.072443 containerd[1896]: time="2026-04-23T23:17:38.072408418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 357.659712ms" Apr 23 23:17:38.072612 containerd[1896]: time="2026-04-23T23:17:38.072549319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 23 23:17:38.074036 containerd[1896]: time="2026-04-23T23:17:38.073605740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 23 23:17:38.080277 containerd[1896]: time="2026-04-23T23:17:38.080253375Z" level=info msg="CreateContainer within sandbox \"99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:17:38.106561 containerd[1896]: time="2026-04-23T23:17:38.105948641Z" level=info msg="Container d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:38.126443 
containerd[1896]: time="2026-04-23T23:17:38.126402698Z" level=info msg="CreateContainer within sandbox \"99ecc741366a225ddfbb2c3b04c221a06e14f70a1b114aae88de60d6d5899a5a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1\"" Apr 23 23:17:38.126930 containerd[1896]: time="2026-04-23T23:17:38.126907844Z" level=info msg="StartContainer for \"d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1\"" Apr 23 23:17:38.128369 containerd[1896]: time="2026-04-23T23:17:38.128348878Z" level=info msg="connecting to shim d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1" address="unix:///run/containerd/s/838270e433e163515a75d46f5717567bf6c7b0cd35a9c9c39f2c73344bc34cac" protocol=ttrpc version=3 Apr 23 23:17:38.148851 kubelet[3406]: I0423 23:17:38.148785 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-57885fdd4c-ws8th" podStartSLOduration=37.056511889 podStartE2EDuration="42.148771238s" podCreationTimestamp="2026-04-23 23:16:56 +0000 UTC" firstStartedPulling="2026-04-23 23:17:29.252639607 +0000 UTC m=+50.540406467" lastFinishedPulling="2026-04-23 23:17:34.344898948 +0000 UTC m=+55.632665816" observedRunningTime="2026-04-23 23:17:35.120321781 +0000 UTC m=+56.408088641" watchObservedRunningTime="2026-04-23 23:17:38.148771238 +0000 UTC m=+59.436538098" Apr 23 23:17:38.149212 kubelet[3406]: I0423 23:17:38.149047 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-769c4fbb8-vjrjh" podStartSLOduration=35.713223185 podStartE2EDuration="42.149041232s" podCreationTimestamp="2026-04-23 23:16:56 +0000 UTC" firstStartedPulling="2026-04-23 23:17:31.278303653 +0000 UTC m=+52.566070513" lastFinishedPulling="2026-04-23 23:17:37.7141217 +0000 UTC m=+59.001888560" observedRunningTime="2026-04-23 23:17:38.148762718 +0000 UTC m=+59.436529602" 
watchObservedRunningTime="2026-04-23 23:17:38.149041232 +0000 UTC m=+59.436808092" Apr 23 23:17:38.156925 systemd[1]: Started cri-containerd-d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1.scope - libcontainer container d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1. Apr 23 23:17:38.202973 containerd[1896]: time="2026-04-23T23:17:38.202933108Z" level=info msg="StartContainer for \"d30112d90f0fb88d1c0bb1a29e392aa9881371897357a7a25f204888789892b1\" returns successfully" Apr 23 23:17:39.135111 kubelet[3406]: I0423 23:17:39.134579 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-769c4fbb8-zqlhl" podStartSLOduration=37.330690485 podStartE2EDuration="43.13456347s" podCreationTimestamp="2026-04-23 23:16:56 +0000 UTC" firstStartedPulling="2026-04-23 23:17:32.269514612 +0000 UTC m=+53.557281472" lastFinishedPulling="2026-04-23 23:17:38.073387589 +0000 UTC m=+59.361154457" observedRunningTime="2026-04-23 23:17:39.134144528 +0000 UTC m=+60.421911396" watchObservedRunningTime="2026-04-23 23:17:39.13456347 +0000 UTC m=+60.422330330" Apr 23 23:17:39.914817 containerd[1896]: time="2026-04-23T23:17:39.914728733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=12456618" Apr 23 23:17:39.916289 containerd[1896]: time="2026-04-23T23:17:39.916256259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:39.918463 containerd[1896]: time="2026-04-23T23:17:39.918414527Z" level=info msg="ImageCreate event name:\"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:39.923065 containerd[1896]: time="2026-04-23T23:17:39.923008297Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:17:39.923422 containerd[1896]: time="2026-04-23T23:17:39.923357221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"15032209\" in 1.84939054s" Apr 23 23:17:39.923422 containerd[1896]: time="2026-04-23T23:17:39.923387846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\"" Apr 23 23:17:39.932804 containerd[1896]: time="2026-04-23T23:17:39.932759321Z" level=info msg="CreateContainer within sandbox \"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 23 23:17:39.951933 containerd[1896]: time="2026-04-23T23:17:39.951890283Z" level=info msg="Container b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:39.956146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528359019.mount: Deactivated successfully. 
Apr 23 23:17:39.983138 containerd[1896]: time="2026-04-23T23:17:39.983091007Z" level=info msg="CreateContainer within sandbox \"633726fc6dd8407bc508f734aa41eb7993c083b76dbbcd0cf46b02ce3c867d72\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1\"" Apr 23 23:17:39.983919 containerd[1896]: time="2026-04-23T23:17:39.983877227Z" level=info msg="StartContainer for \"b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1\"" Apr 23 23:17:39.985323 containerd[1896]: time="2026-04-23T23:17:39.985267092Z" level=info msg="connecting to shim b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1" address="unix:///run/containerd/s/060b12279431de33de0e2b75dff9dd7d345afa7835cb0f58afea09eea70f5dd1" protocol=ttrpc version=3 Apr 23 23:17:40.025842 systemd[1]: Started cri-containerd-b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1.scope - libcontainer container b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1. 
Apr 23 23:17:40.080000 containerd[1896]: time="2026-04-23T23:17:40.079887899Z" level=info msg="StartContainer for \"b30d6bccf1378fb5cf3566b28a6c9590f41d0ef6030d20cbc9d8bbcf9f3b56d1\" returns successfully" Apr 23 23:17:40.971177 kubelet[3406]: I0423 23:17:40.970975 3406 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 23 23:17:40.971177 kubelet[3406]: I0423 23:17:40.971019 3406 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 23 23:17:49.148193 kubelet[3406]: I0423 23:17:49.147494 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jd28b" podStartSLOduration=41.379125578 podStartE2EDuration="52.147480939s" podCreationTimestamp="2026-04-23 23:16:57 +0000 UTC" firstStartedPulling="2026-04-23 23:17:29.157111599 +0000 UTC m=+50.444878459" lastFinishedPulling="2026-04-23 23:17:39.92546696 +0000 UTC m=+61.213233820" observedRunningTime="2026-04-23 23:17:40.138917484 +0000 UTC m=+61.426684368" watchObservedRunningTime="2026-04-23 23:17:49.147480939 +0000 UTC m=+70.435247799" Apr 23 23:19:32.283853 systemd[1]: Started sshd@7-10.0.0.29:22-50.85.169.122:58120.service - OpenSSH per-connection server daemon (50.85.169.122:58120). Apr 23 23:19:33.063452 sshd[6251]: Accepted publickey for core from 50.85.169.122 port 58120 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:33.065304 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:33.069606 systemd-logind[1874]: New session 10 of user core. Apr 23 23:19:33.078854 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 23 23:19:33.582346 sshd[6254]: Connection closed by 50.85.169.122 port 58120 Apr 23 23:19:33.581838 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Apr 23 23:19:33.585710 systemd-logind[1874]: Session 10 logged out. Waiting for processes to exit. Apr 23 23:19:33.585918 systemd[1]: sshd@7-10.0.0.29:22-50.85.169.122:58120.service: Deactivated successfully. Apr 23 23:19:33.588120 systemd[1]: session-10.scope: Deactivated successfully. Apr 23 23:19:33.589966 systemd-logind[1874]: Removed session 10. Apr 23 23:19:38.736441 systemd[1]: Started sshd@8-10.0.0.29:22-50.85.169.122:58136.service - OpenSSH per-connection server daemon (50.85.169.122:58136). Apr 23 23:19:39.491588 sshd[6312]: Accepted publickey for core from 50.85.169.122 port 58136 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:39.492372 sshd-session[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:39.496496 systemd-logind[1874]: New session 11 of user core. Apr 23 23:19:39.500824 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 23 23:19:39.972502 sshd[6317]: Connection closed by 50.85.169.122 port 58136 Apr 23 23:19:39.973073 sshd-session[6312]: pam_unix(sshd:session): session closed for user core Apr 23 23:19:39.976439 systemd[1]: sshd@8-10.0.0.29:22-50.85.169.122:58136.service: Deactivated successfully. Apr 23 23:19:39.978263 systemd[1]: session-11.scope: Deactivated successfully. Apr 23 23:19:39.979124 systemd-logind[1874]: Session 11 logged out. Waiting for processes to exit. Apr 23 23:19:39.980730 systemd-logind[1874]: Removed session 11. Apr 23 23:19:45.137060 systemd[1]: Started sshd@9-10.0.0.29:22-50.85.169.122:46602.service - OpenSSH per-connection server daemon (50.85.169.122:46602). 
Apr 23 23:19:45.909631 sshd[6332]: Accepted publickey for core from 50.85.169.122 port 46602 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:45.910736 sshd-session[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:45.914597 systemd-logind[1874]: New session 12 of user core. Apr 23 23:19:45.920833 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 23 23:19:46.399031 sshd[6335]: Connection closed by 50.85.169.122 port 46602 Apr 23 23:19:46.399613 sshd-session[6332]: pam_unix(sshd:session): session closed for user core Apr 23 23:19:46.403435 systemd[1]: sshd@9-10.0.0.29:22-50.85.169.122:46602.service: Deactivated successfully. Apr 23 23:19:46.405462 systemd[1]: session-12.scope: Deactivated successfully. Apr 23 23:19:46.407498 systemd-logind[1874]: Session 12 logged out. Waiting for processes to exit. Apr 23 23:19:46.408762 systemd-logind[1874]: Removed session 12. Apr 23 23:19:51.552137 systemd[1]: Started sshd@10-10.0.0.29:22-50.85.169.122:35050.service - OpenSSH per-connection server daemon (50.85.169.122:35050). Apr 23 23:19:52.300585 sshd[6397]: Accepted publickey for core from 50.85.169.122 port 35050 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:52.301582 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:52.305467 systemd-logind[1874]: New session 13 of user core. Apr 23 23:19:52.309873 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 23 23:19:52.776033 sshd[6400]: Connection closed by 50.85.169.122 port 35050 Apr 23 23:19:52.776722 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Apr 23 23:19:52.780237 systemd[1]: sshd@10-10.0.0.29:22-50.85.169.122:35050.service: Deactivated successfully. Apr 23 23:19:52.781858 systemd[1]: session-13.scope: Deactivated successfully. Apr 23 23:19:52.783872 systemd-logind[1874]: Session 13 logged out. 
Waiting for processes to exit. Apr 23 23:19:52.785399 systemd-logind[1874]: Removed session 13. Apr 23 23:19:52.929506 systemd[1]: Started sshd@11-10.0.0.29:22-50.85.169.122:35062.service - OpenSSH per-connection server daemon (50.85.169.122:35062). Apr 23 23:19:53.690198 sshd[6412]: Accepted publickey for core from 50.85.169.122 port 35062 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:53.691265 sshd-session[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:53.695402 systemd-logind[1874]: New session 14 of user core. Apr 23 23:19:53.698827 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 23 23:19:54.206582 sshd[6415]: Connection closed by 50.85.169.122 port 35062 Apr 23 23:19:54.206964 sshd-session[6412]: pam_unix(sshd:session): session closed for user core Apr 23 23:19:54.210992 systemd[1]: sshd@11-10.0.0.29:22-50.85.169.122:35062.service: Deactivated successfully. Apr 23 23:19:54.212945 systemd[1]: session-14.scope: Deactivated successfully. Apr 23 23:19:54.213650 systemd-logind[1874]: Session 14 logged out. Waiting for processes to exit. Apr 23 23:19:54.214914 systemd-logind[1874]: Removed session 14. Apr 23 23:19:54.359893 systemd[1]: Started sshd@12-10.0.0.29:22-50.85.169.122:35074.service - OpenSSH per-connection server daemon (50.85.169.122:35074). Apr 23 23:19:55.106674 sshd[6425]: Accepted publickey for core from 50.85.169.122 port 35074 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0 Apr 23 23:19:55.107905 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:19:55.111877 systemd-logind[1874]: New session 15 of user core. Apr 23 23:19:55.116828 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 23 23:19:55.614151 sshd[6428]: Connection closed by 50.85.169.122 port 35074
Apr 23 23:19:55.613699 sshd-session[6425]: pam_unix(sshd:session): session closed for user core
Apr 23 23:19:55.616904 systemd[1]: sshd@12-10.0.0.29:22-50.85.169.122:35074.service: Deactivated successfully.
Apr 23 23:19:55.618964 systemd[1]: session-15.scope: Deactivated successfully.
Apr 23 23:19:55.620161 systemd-logind[1874]: Session 15 logged out. Waiting for processes to exit.
Apr 23 23:19:55.621211 systemd-logind[1874]: Removed session 15.
Apr 23 23:20:00.771869 systemd[1]: Started sshd@13-10.0.0.29:22-50.85.169.122:49084.service - OpenSSH per-connection server daemon (50.85.169.122:49084).
Apr 23 23:20:01.543039 sshd[6445]: Accepted publickey for core from 50.85.169.122 port 49084 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:01.544092 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:01.547855 systemd-logind[1874]: New session 16 of user core.
Apr 23 23:20:01.551823 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 23 23:20:02.035810 sshd[6449]: Connection closed by 50.85.169.122 port 49084
Apr 23 23:20:02.035611 sshd-session[6445]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:02.040260 systemd-logind[1874]: Session 16 logged out. Waiting for processes to exit.
Apr 23 23:20:02.040434 systemd[1]: sshd@13-10.0.0.29:22-50.85.169.122:49084.service: Deactivated successfully.
Apr 23 23:20:02.042674 systemd[1]: session-16.scope: Deactivated successfully.
Apr 23 23:20:02.045027 systemd-logind[1874]: Removed session 16.
Apr 23 23:20:02.191848 systemd[1]: Started sshd@14-10.0.0.29:22-50.85.169.122:49096.service - OpenSSH per-connection server daemon (50.85.169.122:49096).
Apr 23 23:20:02.942036 sshd[6481]: Accepted publickey for core from 50.85.169.122 port 49096 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:02.943155 sshd-session[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:02.947336 systemd-logind[1874]: New session 17 of user core.
Apr 23 23:20:02.951831 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 23 23:20:04.451668 sshd[6484]: Connection closed by 50.85.169.122 port 49096
Apr 23 23:20:04.468297 sshd-session[6481]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:04.471929 systemd[1]: sshd@14-10.0.0.29:22-50.85.169.122:49096.service: Deactivated successfully.
Apr 23 23:20:04.473537 systemd[1]: session-17.scope: Deactivated successfully.
Apr 23 23:20:04.474282 systemd-logind[1874]: Session 17 logged out. Waiting for processes to exit.
Apr 23 23:20:04.475988 systemd-logind[1874]: Removed session 17.
Apr 23 23:20:04.597738 systemd[1]: Started sshd@15-10.0.0.29:22-50.85.169.122:49102.service - OpenSSH per-connection server daemon (50.85.169.122:49102).
Apr 23 23:20:05.347073 sshd[6494]: Accepted publickey for core from 50.85.169.122 port 49102 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:05.347883 sshd-session[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:05.352058 systemd-logind[1874]: New session 18 of user core.
Apr 23 23:20:05.356842 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 23 23:20:06.345259 sshd[6497]: Connection closed by 50.85.169.122 port 49102
Apr 23 23:20:06.344267 sshd-session[6494]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:06.347160 systemd-logind[1874]: Session 18 logged out. Waiting for processes to exit.
Apr 23 23:20:06.347307 systemd[1]: sshd@15-10.0.0.29:22-50.85.169.122:49102.service: Deactivated successfully.
Apr 23 23:20:06.349598 systemd[1]: session-18.scope: Deactivated successfully.
Apr 23 23:20:06.351622 systemd-logind[1874]: Removed session 18.
Apr 23 23:20:06.511725 systemd[1]: Started sshd@16-10.0.0.29:22-50.85.169.122:49114.service - OpenSSH per-connection server daemon (50.85.169.122:49114).
Apr 23 23:20:07.292636 sshd[6521]: Accepted publickey for core from 50.85.169.122 port 49114 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:07.293797 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:07.297771 systemd-logind[1874]: New session 19 of user core.
Apr 23 23:20:07.301840 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 23 23:20:07.877711 sshd[6548]: Connection closed by 50.85.169.122 port 49114
Apr 23 23:20:07.877315 sshd-session[6521]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:07.880345 systemd-logind[1874]: Session 19 logged out. Waiting for processes to exit.
Apr 23 23:20:07.880729 systemd[1]: sshd@16-10.0.0.29:22-50.85.169.122:49114.service: Deactivated successfully.
Apr 23 23:20:07.882563 systemd[1]: session-19.scope: Deactivated successfully.
Apr 23 23:20:07.884591 systemd-logind[1874]: Removed session 19.
Apr 23 23:20:08.030913 systemd[1]: Started sshd@17-10.0.0.29:22-50.85.169.122:49130.service - OpenSSH per-connection server daemon (50.85.169.122:49130).
Apr 23 23:20:08.791811 sshd[6557]: Accepted publickey for core from 50.85.169.122 port 49130 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:08.792907 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:08.796754 systemd-logind[1874]: New session 20 of user core.
Apr 23 23:20:08.800816 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 23 23:20:09.273411 sshd[6560]: Connection closed by 50.85.169.122 port 49130
Apr 23 23:20:09.273985 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:09.278225 systemd[1]: sshd@17-10.0.0.29:22-50.85.169.122:49130.service: Deactivated successfully.
Apr 23 23:20:09.280237 systemd[1]: session-20.scope: Deactivated successfully.
Apr 23 23:20:09.281068 systemd-logind[1874]: Session 20 logged out. Waiting for processes to exit.
Apr 23 23:20:09.282384 systemd-logind[1874]: Removed session 20.
Apr 23 23:20:14.429037 systemd[1]: Started sshd@18-10.0.0.29:22-50.85.169.122:39242.service - OpenSSH per-connection server daemon (50.85.169.122:39242).
Apr 23 23:20:15.174429 sshd[6576]: Accepted publickey for core from 50.85.169.122 port 39242 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:15.175543 sshd-session[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:15.179315 systemd-logind[1874]: New session 21 of user core.
Apr 23 23:20:15.183821 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 23 23:20:15.650670 sshd[6581]: Connection closed by 50.85.169.122 port 39242
Apr 23 23:20:15.651231 sshd-session[6576]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:15.654181 systemd[1]: sshd@18-10.0.0.29:22-50.85.169.122:39242.service: Deactivated successfully.
Apr 23 23:20:15.657517 systemd[1]: session-21.scope: Deactivated successfully.
Apr 23 23:20:15.658594 systemd-logind[1874]: Session 21 logged out. Waiting for processes to exit.
Apr 23 23:20:15.660869 systemd-logind[1874]: Removed session 21.
Apr 23 23:20:20.802446 systemd[1]: Started sshd@19-10.0.0.29:22-50.85.169.122:56814.service - OpenSSH per-connection server daemon (50.85.169.122:56814).
Apr 23 23:20:21.548668 sshd[6619]: Accepted publickey for core from 50.85.169.122 port 56814 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:21.549828 sshd-session[6619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:21.553581 systemd-logind[1874]: New session 22 of user core.
Apr 23 23:20:21.559811 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 23 23:20:22.022002 sshd[6622]: Connection closed by 50.85.169.122 port 56814
Apr 23 23:20:22.022413 sshd-session[6619]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:22.026386 systemd[1]: sshd@19-10.0.0.29:22-50.85.169.122:56814.service: Deactivated successfully.
Apr 23 23:20:22.028544 systemd[1]: session-22.scope: Deactivated successfully.
Apr 23 23:20:22.031125 systemd-logind[1874]: Session 22 logged out. Waiting for processes to exit.
Apr 23 23:20:22.032225 systemd-logind[1874]: Removed session 22.
Apr 23 23:20:27.179738 systemd[1]: Started sshd@20-10.0.0.29:22-50.85.169.122:56816.service - OpenSSH per-connection server daemon (50.85.169.122:56816).
Apr 23 23:20:27.933342 sshd[6634]: Accepted publickey for core from 50.85.169.122 port 56816 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:27.934418 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:27.938787 systemd-logind[1874]: New session 23 of user core.
Apr 23 23:20:27.942832 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 23 23:20:28.417725 sshd[6637]: Connection closed by 50.85.169.122 port 56816
Apr 23 23:20:28.418268 sshd-session[6634]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:28.421483 systemd[1]: sshd@20-10.0.0.29:22-50.85.169.122:56816.service: Deactivated successfully.
Apr 23 23:20:28.422280 systemd-logind[1874]: Session 23 logged out. Waiting for processes to exit.
Apr 23 23:20:28.427110 systemd[1]: session-23.scope: Deactivated successfully.
Apr 23 23:20:28.430110 systemd-logind[1874]: Removed session 23.
Apr 23 23:20:33.568196 systemd[1]: Started sshd@21-10.0.0.29:22-50.85.169.122:42336.service - OpenSSH per-connection server daemon (50.85.169.122:42336).
Apr 23 23:20:34.315746 sshd[6694]: Accepted publickey for core from 50.85.169.122 port 42336 ssh2: RSA SHA256:OE/BzpIjp/Jg1G36L5zUqHa7NG/Z9l5Fwb+VInZbsf0
Apr 23 23:20:34.316735 sshd-session[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:20:34.321106 systemd-logind[1874]: New session 24 of user core.
Apr 23 23:20:34.328846 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 23 23:20:34.790029 sshd[6697]: Connection closed by 50.85.169.122 port 42336
Apr 23 23:20:34.791570 sshd-session[6694]: pam_unix(sshd:session): session closed for user core
Apr 23 23:20:34.794855 systemd[1]: sshd@21-10.0.0.29:22-50.85.169.122:42336.service: Deactivated successfully.
Apr 23 23:20:34.796666 systemd[1]: session-24.scope: Deactivated successfully.
Apr 23 23:20:34.797476 systemd-logind[1874]: Session 24 logged out. Waiting for processes to exit.
Apr 23 23:20:34.798753 systemd-logind[1874]: Removed session 24.