Apr 17 01:02:36.063657 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Apr 17 01:02:36.063674 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 17 01:02:36.063680 kernel: KASLR enabled
Apr 17 01:02:36.063684 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 17 01:02:36.063688 kernel: printk: legacy bootconsole [pl11] enabled
Apr 17 01:02:36.063692 kernel: efi: EFI v2.7 by EDK II
Apr 17 01:02:36.063697 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Apr 17 01:02:36.063701 kernel: random: crng init done
Apr 17 01:02:36.063705 kernel: secureboot: Secure boot disabled
Apr 17 01:02:36.063709 kernel: ACPI: Early table checksum verification disabled
Apr 17 01:02:36.063713 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Apr 17 01:02:36.063717 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063721 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063725 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 17 01:02:36.063731 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063735 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063739 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063743 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063748 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063753 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063757 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 17 01:02:36.063761 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 01:02:36.063766 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 17 01:02:36.063770 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 17 01:02:36.063774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 17 01:02:36.063778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Apr 17 01:02:36.063782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Apr 17 01:02:36.063787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 17 01:02:36.063791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 17 01:02:36.063795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 17 01:02:36.063800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 17 01:02:36.063804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 17 01:02:36.063808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 17 01:02:36.063813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 17 01:02:36.063817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Apr 17 01:02:36.063821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Apr 17 01:02:36.063825 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Apr 17 01:02:36.063830 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Apr 17 01:02:36.063834 kernel: Zone ranges:
Apr 17 01:02:36.063838 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 17 01:02:36.063845 kernel: DMA32 empty
Apr 17 01:02:36.063849 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 17 01:02:36.063853 kernel: Device empty
Apr 17 01:02:36.063858 kernel: Movable zone start for each node
Apr 17 01:02:36.063862 kernel: Early memory node ranges
Apr 17 01:02:36.063866 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 17 01:02:36.063872 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Apr 17 01:02:36.063876 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Apr 17 01:02:36.063880 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Apr 17 01:02:36.063885 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Apr 17 01:02:36.063889 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Apr 17 01:02:36.063893 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 17 01:02:36.063898 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 17 01:02:36.063902 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 17 01:02:36.063907 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Apr 17 01:02:36.063911 kernel: psci: probing for conduit method from ACPI.
Apr 17 01:02:36.063915 kernel: psci: PSCIv1.3 detected in firmware.
Apr 17 01:02:36.063920 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 01:02:36.063925 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 17 01:02:36.063929 kernel: psci: SMC Calling Convention v1.4
Apr 17 01:02:36.063933 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 17 01:02:36.063938 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 17 01:02:36.063942 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 17 01:02:36.063946 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 17 01:02:36.063951 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 01:02:36.063955 kernel: Detected PIPT I-cache on CPU0
Apr 17 01:02:36.063959 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Apr 17 01:02:36.063964 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 01:02:36.063968 kernel: CPU features: detected: Spectre-v4
Apr 17 01:02:36.063973 kernel: CPU features: detected: Spectre-BHB
Apr 17 01:02:36.063978 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 17 01:02:36.063982 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 17 01:02:36.063987 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Apr 17 01:02:36.063991 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 17 01:02:36.063995 kernel: alternatives: applying boot alternatives
Apr 17 01:02:36.064001 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 17 01:02:36.064005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 01:02:36.064010 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 01:02:36.064014 kernel: Fallback order for Node 0: 0
Apr 17 01:02:36.064019 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Apr 17 01:02:36.064024 kernel: Policy zone: Normal
Apr 17 01:02:36.064028 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 01:02:36.064032 kernel: software IO TLB: area num 2.
Apr 17 01:02:36.064037 kernel: software IO TLB: mapped [mem 0x00000000358f0000-0x00000000398f0000] (64MB)
Apr 17 01:02:36.064041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 01:02:36.064045 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 01:02:36.064051 kernel: rcu: RCU event tracing is enabled.
Apr 17 01:02:36.064055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 01:02:36.064060 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 01:02:36.064064 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 01:02:36.064068 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 01:02:36.064073 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 01:02:36.064089 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 01:02:36.064094 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 01:02:36.064098 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 01:02:36.064102 kernel: GICv3: 960 SPIs implemented
Apr 17 01:02:36.064107 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 01:02:36.064111 kernel: Root IRQ handler: gic_handle_irq
Apr 17 01:02:36.064115 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 17 01:02:36.064120 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Apr 17 01:02:36.064124 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 17 01:02:36.064128 kernel: ITS: No ITS available, not enabling LPIs
Apr 17 01:02:36.064133 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 01:02:36.064138 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Apr 17 01:02:36.064143 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 01:02:36.064147 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Apr 17 01:02:36.064152 kernel: Console: colour dummy device 80x25
Apr 17 01:02:36.064156 kernel: printk: legacy console [tty1] enabled
Apr 17 01:02:36.064161 kernel: ACPI: Core revision 20240827
Apr 17 01:02:36.064165 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Apr 17 01:02:36.064170 kernel: pid_max: default: 32768 minimum: 301
Apr 17 01:02:36.064175 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 01:02:36.064179 kernel: landlock: Up and running.
Apr 17 01:02:36.064184 kernel: SELinux: Initializing.
Apr 17 01:02:36.064189 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 01:02:36.064193 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 01:02:36.064198 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Apr 17 01:02:36.064202 kernel: Hyper-V: Host Build 10.0.26102.1283-1-0
Apr 17 01:02:36.064210 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 17 01:02:36.064216 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 01:02:36.064221 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 01:02:36.064225 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 01:02:36.064230 kernel: Remapping and enabling EFI services.
Apr 17 01:02:36.064235 kernel: smp: Bringing up secondary CPUs ...
Apr 17 01:02:36.064240 kernel: Detected PIPT I-cache on CPU1
Apr 17 01:02:36.064245 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 17 01:02:36.064250 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Apr 17 01:02:36.064255 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 01:02:36.064259 kernel: SMP: Total of 2 processors activated.
Apr 17 01:02:36.064264 kernel: CPU: All CPU(s) started at EL1
Apr 17 01:02:36.064270 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 01:02:36.064274 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 17 01:02:36.064279 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 17 01:02:36.064284 kernel: CPU features: detected: Common not Private translations
Apr 17 01:02:36.064289 kernel: CPU features: detected: CRC32 instructions
Apr 17 01:02:36.064294 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Apr 17 01:02:36.064298 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 17 01:02:36.064303 kernel: CPU features: detected: LSE atomic instructions
Apr 17 01:02:36.064308 kernel: CPU features: detected: Privileged Access Never
Apr 17 01:02:36.064313 kernel: CPU features: detected: Speculation barrier (SB)
Apr 17 01:02:36.064318 kernel: CPU features: detected: TLB range maintenance instructions
Apr 17 01:02:36.064323 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 17 01:02:36.064328 kernel: CPU features: detected: Scalable Vector Extension
Apr 17 01:02:36.064332 kernel: alternatives: applying system-wide alternatives
Apr 17 01:02:36.064337 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 17 01:02:36.064342 kernel: SVE: maximum available vector length 16 bytes per vector
Apr 17 01:02:36.064347 kernel: SVE: default vector length 16 bytes per vector
Apr 17 01:02:36.064351 kernel: Memory: 3952756K/4194160K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 220208K reserved, 16384K cma-reserved)
Apr 17 01:02:36.064357 kernel: devtmpfs: initialized
Apr 17 01:02:36.064362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 01:02:36.064367 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 01:02:36.064371 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 17 01:02:36.064376 kernel: 0 pages in range for non-PLT usage
Apr 17 01:02:36.064381 kernel: 508384 pages in range for PLT usage
Apr 17 01:02:36.064385 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 01:02:36.064390 kernel: SMBIOS 3.1.0 present.
Apr 17 01:02:36.064395 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 17 01:02:36.064401 kernel: DMI: Memory slots populated: 2/2
Apr 17 01:02:36.064405 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 01:02:36.064410 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 01:02:36.064415 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 01:02:36.064420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 01:02:36.064424 kernel: audit: initializing netlink subsys (disabled)
Apr 17 01:02:36.064429 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Apr 17 01:02:36.064434 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 01:02:36.064439 kernel: cpuidle: using governor menu
Apr 17 01:02:36.064444 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 01:02:36.064449 kernel: ASID allocator initialised with 32768 entries
Apr 17 01:02:36.064453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 01:02:36.064458 kernel: Serial: AMBA PL011 UART driver
Apr 17 01:02:36.064463 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 01:02:36.064468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 01:02:36.064472 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 01:02:36.064477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 01:02:36.064483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 01:02:36.064488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 01:02:36.064492 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 01:02:36.064497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 01:02:36.064502 kernel: ACPI: Added _OSI(Module Device)
Apr 17 01:02:36.064507 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 01:02:36.064511 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 01:02:36.064516 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 01:02:36.064521 kernel: ACPI: Interpreter enabled
Apr 17 01:02:36.064526 kernel: ACPI: Using GIC for interrupt routing
Apr 17 01:02:36.064531 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 17 01:02:36.064536 kernel: printk: legacy console [ttyAMA0] enabled
Apr 17 01:02:36.064540 kernel: printk: legacy bootconsole [pl11] disabled
Apr 17 01:02:36.064545 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 17 01:02:36.064550 kernel: ACPI: CPU0 has been hot-added
Apr 17 01:02:36.064555 kernel: ACPI: CPU1 has been hot-added
Apr 17 01:02:36.064559 kernel: iommu: Default domain type: Translated
Apr 17 01:02:36.064564 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 01:02:36.064569 kernel: efivars: Registered efivars operations
Apr 17 01:02:36.064574 kernel: vgaarb: loaded
Apr 17 01:02:36.064579 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 01:02:36.064584 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 01:02:36.064589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 01:02:36.064593 kernel: pnp: PnP ACPI init
Apr 17 01:02:36.064598 kernel: pnp: PnP ACPI: found 0 devices
Apr 17 01:02:36.064603 kernel: NET: Registered PF_INET protocol family
Apr 17 01:02:36.064607 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 01:02:36.064612 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 01:02:36.064618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 01:02:36.064623 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 01:02:36.064627 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 01:02:36.064632 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 01:02:36.064637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 01:02:36.064642 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 01:02:36.064646 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 01:02:36.064651 kernel: PCI: CLS 0 bytes, default 64
Apr 17 01:02:36.064656 kernel: kvm [1]: HYP mode not available
Apr 17 01:02:36.064661 kernel: Initialise system trusted keyrings
Apr 17 01:02:36.064666 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 01:02:36.064671 kernel: Key type asymmetric registered
Apr 17 01:02:36.064675 kernel: Asymmetric key parser 'x509' registered
Apr 17 01:02:36.064680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 17 01:02:36.064685 kernel: io scheduler mq-deadline registered
Apr 17 01:02:36.064689 kernel: io scheduler kyber registered
Apr 17 01:02:36.064694 kernel: io scheduler bfq registered
Apr 17 01:02:36.064699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 01:02:36.064704 kernel: thunder_xcv, ver 1.0
Apr 17 01:02:36.064709 kernel: thunder_bgx, ver 1.0
Apr 17 01:02:36.064714 kernel: nicpf, ver 1.0
Apr 17 01:02:36.064718 kernel: nicvf, ver 1.0
Apr 17 01:02:36.064834 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 17 01:02:36.064891 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T01:02:35 UTC (1776387755)
Apr 17 01:02:36.064898 kernel: efifb: probing for efifb
Apr 17 01:02:36.064906 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 17 01:02:36.064912 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 17 01:02:36.064917 kernel: efifb: scrolling: redraw
Apr 17 01:02:36.064923 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 01:02:36.064928 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 01:02:36.064933 kernel: fb0: EFI VGA frame buffer device
Apr 17 01:02:36.064938 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 17 01:02:36.064942 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 01:02:36.064947 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Apr 17 01:02:36.064953 kernel: watchdog: NMI not fully supported
Apr 17 01:02:36.064958 kernel: watchdog: Hard watchdog permanently disabled
Apr 17 01:02:36.064963 kernel: NET: Registered PF_INET6 protocol family
Apr 17 01:02:36.064969 kernel: Segment Routing with IPv6
Apr 17 01:02:36.064975 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 01:02:36.064980 kernel: NET: Registered PF_PACKET protocol family
Apr 17 01:02:36.064986 kernel: Key type dns_resolver registered
Apr 17 01:02:36.064992 kernel: registered taskstats version 1
Apr 17 01:02:36.064997 kernel: Loading compiled-in X.509 certificates
Apr 17 01:02:36.065003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8'
Apr 17 01:02:36.065010 kernel: Demotion targets for Node 0: null
Apr 17 01:02:36.065015 kernel: Key type .fscrypt registered
Apr 17 01:02:36.065021 kernel: Key type fscrypt-provisioning registered
Apr 17 01:02:36.065026 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 01:02:36.065032 kernel: ima: Allocated hash algorithm: sha1
Apr 17 01:02:36.065037 kernel: ima: No architecture policies found
Apr 17 01:02:36.065042 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 17 01:02:36.065047 kernel: clk: Disabling unused clocks
Apr 17 01:02:36.065051 kernel: PM: genpd: Disabling unused power domains
Apr 17 01:02:36.065057 kernel: Warning: unable to open an initial console.
Apr 17 01:02:36.065062 kernel: Freeing unused kernel memory: 39552K
Apr 17 01:02:36.065067 kernel: Run /init as init process
Apr 17 01:02:36.065072 kernel: with arguments:
Apr 17 01:02:36.065086 kernel: /init
Apr 17 01:02:36.065091 kernel: with environment:
Apr 17 01:02:36.065096 kernel: HOME=/
Apr 17 01:02:36.065101 kernel: TERM=linux
Apr 17 01:02:36.065107 systemd[1]: Successfully made /usr/ read-only.
Apr 17 01:02:36.065115 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 01:02:36.065120 systemd[1]: Detected virtualization microsoft.
Apr 17 01:02:36.065125 systemd[1]: Detected architecture arm64.
Apr 17 01:02:36.065130 systemd[1]: Running in initrd.
Apr 17 01:02:36.065135 systemd[1]: No hostname configured, using default hostname.
Apr 17 01:02:36.065141 systemd[1]: Hostname set to .
Apr 17 01:02:36.065146 systemd[1]: Initializing machine ID from random generator.
Apr 17 01:02:36.065152 systemd[1]: Queued start job for default target initrd.target.
Apr 17 01:02:36.065157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 01:02:36.065162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 01:02:36.065168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 01:02:36.065173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 01:02:36.065178 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 01:02:36.065184 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 01:02:36.065191 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 01:02:36.065196 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 01:02:36.065202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 01:02:36.065207 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 01:02:36.065212 systemd[1]: Reached target paths.target - Path Units.
Apr 17 01:02:36.065217 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 01:02:36.065222 systemd[1]: Reached target swap.target - Swaps.
Apr 17 01:02:36.065227 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 01:02:36.065233 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 01:02:36.065238 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 01:02:36.065244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 01:02:36.065249 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 17 01:02:36.065254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 01:02:36.065259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 01:02:36.065264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 01:02:36.065269 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 01:02:36.065275 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 01:02:36.065281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 01:02:36.065286 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 01:02:36.065291 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 17 01:02:36.065297 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 01:02:36.065302 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 01:02:36.065307 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 01:02:36.065325 systemd-journald[226]: Collecting audit messages is disabled.
Apr 17 01:02:36.065340 systemd-journald[226]: Journal started
Apr 17 01:02:36.065353 systemd-journald[226]: Runtime Journal (/run/log/journal/eaf87fac7716405bbf44aef503645b05) is 8M, max 78.3M, 70.3M free.
Apr 17 01:02:36.069109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:36.073986 systemd-modules-load[228]: Inserted module 'overlay'
Apr 17 01:02:36.100141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 01:02:36.100172 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 01:02:36.100182 kernel: Bridge firewalling registered
Apr 17 01:02:36.102735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 01:02:36.104133 systemd-modules-load[228]: Inserted module 'br_netfilter'
Apr 17 01:02:36.115330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 01:02:36.120709 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 01:02:36.129118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 01:02:36.136719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:36.147770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 01:02:36.162911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 01:02:36.172707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 01:02:36.184232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 01:02:36.203302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 01:02:36.212295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 01:02:36.219614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 01:02:36.226864 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 17 01:02:36.250170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 01:02:36.256101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 01:02:36.272962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 01:02:36.279408 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 01:02:36.295809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 01:02:36.316103 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 17 01:02:36.349025 systemd-resolved[266]: Positive Trust Anchors:
Apr 17 01:02:36.349038 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 01:02:36.349057 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 01:02:36.350762 systemd-resolved[266]: Defaulting to hostname 'linux'.
Apr 17 01:02:36.351429 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 01:02:36.357554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 01:02:36.460100 kernel: SCSI subsystem initialized
Apr 17 01:02:36.467092 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 01:02:36.473103 kernel: iscsi: registered transport (tcp)
Apr 17 01:02:36.486074 kernel: iscsi: registered transport (qla4xxx)
Apr 17 01:02:36.486110 kernel: QLogic iSCSI HBA Driver
Apr 17 01:02:36.498683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 01:02:36.523333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 01:02:36.535453 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 01:02:36.577108 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 01:02:36.582673 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 01:02:36.643095 kernel: raid6: neonx8 gen() 18572 MB/s
Apr 17 01:02:36.657084 kernel: raid6: neonx4 gen() 18549 MB/s
Apr 17 01:02:36.676086 kernel: raid6: neonx2 gen() 17043 MB/s
Apr 17 01:02:36.696087 kernel: raid6: neonx1 gen() 15133 MB/s
Apr 17 01:02:36.715087 kernel: raid6: int64x8 gen() 10542 MB/s
Apr 17 01:02:36.734086 kernel: raid6: int64x4 gen() 10615 MB/s
Apr 17 01:02:36.754088 kernel: raid6: int64x2 gen() 8997 MB/s
Apr 17 01:02:36.775544 kernel: raid6: int64x1 gen() 7038 MB/s
Apr 17 01:02:36.775554 kernel: raid6: using algorithm neonx8 gen() 18572 MB/s
Apr 17 01:02:36.797456 kernel: raid6: .... xor() 14912 MB/s, rmw enabled
Apr 17 01:02:36.797463 kernel: raid6: using neon recovery algorithm
Apr 17 01:02:36.805504 kernel: xor: measuring software checksum speed
Apr 17 01:02:36.805514 kernel: 8regs : 28633 MB/sec
Apr 17 01:02:36.808247 kernel: 32regs : 28781 MB/sec
Apr 17 01:02:36.811529 kernel: arm64_neon : 37676 MB/sec
Apr 17 01:02:36.814644 kernel: xor: using function: arm64_neon (37676 MB/sec)
Apr 17 01:02:36.852114 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 01:02:36.857485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 01:02:36.866379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 01:02:36.896115 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Apr 17 01:02:36.900171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 01:02:36.912040 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 01:02:36.936546 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Apr 17 01:02:36.959022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 01:02:36.968854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 01:02:37.014089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 01:02:37.024377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 01:02:37.085098 kernel: hv_vmbus: Vmbus version:5.3
Apr 17 01:02:37.085482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:02:37.089572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:37.101236 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:37.113388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:37.138932 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 17 01:02:37.138964 kernel: hv_vmbus: registering driver hv_storvsc
Apr 17 01:02:37.138972 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 17 01:02:37.138985 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 17 01:02:37.146620 kernel: hv_vmbus: registering driver hv_netvsc
Apr 17 01:02:37.146650 kernel: hv_vmbus: registering driver hid_hyperv
Apr 17 01:02:37.146659 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 17 01:02:37.157113 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 17 01:02:37.157145 kernel: scsi host1: storvsc_host_t
Apr 17 01:02:37.160152 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 17 01:02:37.167701 kernel: scsi host0: storvsc_host_t
Apr 17 01:02:37.165919 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:02:37.181840 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 17 01:02:37.165999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:37.181924 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 01:02:37.187197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:37.207543 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 17 01:02:37.212092 kernel: PTP clock support registered
Apr 17 01:02:37.222391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:37.754551 kernel: hv_utils: Registering HyperV Utility Driver
Apr 17 01:02:37.754570 kernel: hv_vmbus: registering driver hv_utils
Apr 17 01:02:37.754577 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 17 01:02:37.759538 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 17 01:02:37.759659 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 01:02:37.759666 kernel: hv_utils: Heartbeat IC version 3.0
Apr 17 01:02:37.759673 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 17 01:02:37.759751 kernel: hv_utils: Shutdown IC version 3.2
Apr 17 01:02:37.759758 kernel: hv_utils: TimeSync IC version 4.0
Apr 17 01:02:37.759764 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 17 01:02:37.759835 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 17 01:02:37.759898 kernel: hv_netvsc 7ced8d79-b4a5-7ced-8d79-b4a57ced8d79 eth0: VF slot 1 added
Apr 17 01:02:37.759973 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 17 01:02:37.760035 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 17 01:02:37.742905 systemd-resolved[266]: Clock change detected. Flushing caches.
Apr 17 01:02:37.776184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 01:02:37.776213 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 17 01:02:37.788545 kernel: hv_vmbus: registering driver hv_pci
Apr 17 01:02:37.788583 kernel: hv_pci 8fd52303-f49f-4f2c-af10-b8942996faa7: PCI VMBus probing: Using version 0x10004
Apr 17 01:02:37.806148 kernel: hv_pci 8fd52303-f49f-4f2c-af10-b8942996faa7: PCI host bridge to bus f49f:00
Apr 17 01:02:37.806293 kernel: pci_bus f49f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 17 01:02:37.806373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 01:02:37.806435 kernel: pci_bus f49f:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 17 01:02:37.813113 kernel: pci f49f:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Apr 17 01:02:37.826138 kernel: pci f49f:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 17 01:02:37.840106 kernel: pci f49f:00:02.0: enabling Extended Tags
Apr 17 01:02:37.840170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 01:02:37.860942 kernel: pci f49f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f49f:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Apr 17 01:02:37.869653 kernel: pci_bus f49f:00: busn_res: [bus 00-ff] end is updated to 00
Apr 17 01:02:37.869778 kernel: pci f49f:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Apr 17 01:02:37.931102 kernel: mlx5_core f49f:00:02.0: enabling device (0000 -> 0002)
Apr 17 01:02:37.939086 kernel: mlx5_core f49f:00:02.0: PTM is not supported by PCIe
Apr 17 01:02:37.939206 kernel: mlx5_core f49f:00:02.0: firmware version: 16.30.5026
Apr 17 01:02:38.113422 kernel: hv_netvsc 7ced8d79-b4a5-7ced-8d79-b4a57ced8d79 eth0: VF registering: eth1
Apr 17 01:02:38.113631 kernel: mlx5_core f49f:00:02.0 eth1: joined to eth0
Apr 17 01:02:38.119033 kernel: mlx5_core f49f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 17 01:02:38.130150 kernel: mlx5_core f49f:00:02.0 enP62623s1: renamed from eth1
Apr 17 01:02:38.343994 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 17 01:02:38.403064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 17 01:02:38.408129 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 17 01:02:38.428232 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 01:02:38.442456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 17 01:02:38.483978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 17 01:02:38.672450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 01:02:38.677924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 01:02:38.687265 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:02:38.697778 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 01:02:38.712226 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 01:02:38.737141 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 01:02:39.483176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 01:02:39.483231 disk-uuid[636]: The operation has completed successfully.
Apr 17 01:02:39.550937 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 01:02:39.551035 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 01:02:39.585966 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 01:02:39.603354 sh[825]: Success
Apr 17 01:02:39.636670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 01:02:39.636735 kernel: device-mapper: uevent: version 1.0.3
Apr 17 01:02:39.641787 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 17 01:02:39.652113 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Apr 17 01:02:39.922618 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 01:02:39.935267 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 01:02:39.959388 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 01:02:39.988150 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (843)
Apr 17 01:02:40.001573 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868
Apr 17 01:02:40.001606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 17 01:02:40.305181 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 17 01:02:40.305257 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 17 01:02:40.342887 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 01:02:40.349119 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 01:02:40.360197 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 01:02:40.360937 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 01:02:40.390140 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 01:02:40.426118 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (866)
Apr 17 01:02:40.441417 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 17 01:02:40.441466 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 01:02:40.469115 kernel: BTRFS info (device sda6): turning on async discard
Apr 17 01:02:40.469183 kernel: BTRFS info (device sda6): enabling free space tree
Apr 17 01:02:40.480119 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 17 01:02:40.481972 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 01:02:40.495294 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 01:02:40.536429 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 01:02:40.552934 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 01:02:40.590564 systemd-networkd[1012]: lo: Link UP
Apr 17 01:02:40.590576 systemd-networkd[1012]: lo: Gained carrier
Apr 17 01:02:40.591676 systemd-networkd[1012]: Enumeration completed
Apr 17 01:02:40.594064 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 01:02:40.594376 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 01:02:40.594379 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 01:02:40.599902 systemd[1]: Reached target network.target - Network.
Apr 17 01:02:40.673114 kernel: mlx5_core f49f:00:02.0 enP62623s1: Link up
Apr 17 01:02:40.706740 systemd-networkd[1012]: enP62623s1: Link UP
Apr 17 01:02:40.711164 kernel: hv_netvsc 7ced8d79-b4a5-7ced-8d79-b4a57ced8d79 eth0: Data path switched to VF: enP62623s1
Apr 17 01:02:40.706801 systemd-networkd[1012]: eth0: Link UP
Apr 17 01:02:40.706897 systemd-networkd[1012]: eth0: Gained carrier
Apr 17 01:02:40.706910 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 01:02:40.712260 systemd-networkd[1012]: enP62623s1: Gained carrier
Apr 17 01:02:40.738137 systemd-networkd[1012]: eth0: DHCPv4 address 10.0.0.24/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 17 01:02:41.455675 ignition[963]: Ignition 2.22.0
Apr 17 01:02:41.455692 ignition[963]: Stage: fetch-offline
Apr 17 01:02:41.460400 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 01:02:41.455787 ignition[963]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:41.470526 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 01:02:41.455793 ignition[963]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:41.455862 ignition[963]: parsed url from cmdline: ""
Apr 17 01:02:41.455865 ignition[963]: no config URL provided
Apr 17 01:02:41.455868 ignition[963]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 01:02:41.455874 ignition[963]: no config at "/usr/lib/ignition/user.ign"
Apr 17 01:02:41.455878 ignition[963]: failed to fetch config: resource requires networking
Apr 17 01:02:41.456172 ignition[963]: Ignition finished successfully
Apr 17 01:02:41.512260 ignition[1022]: Ignition 2.22.0
Apr 17 01:02:41.512265 ignition[1022]: Stage: fetch
Apr 17 01:02:41.512439 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:41.512446 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:41.512526 ignition[1022]: parsed url from cmdline: ""
Apr 17 01:02:41.512529 ignition[1022]: no config URL provided
Apr 17 01:02:41.512532 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 01:02:41.512537 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Apr 17 01:02:41.512552 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 17 01:02:41.636992 ignition[1022]: GET result: OK
Apr 17 01:02:41.637075 ignition[1022]: config has been read from IMDS userdata
Apr 17 01:02:41.640282 unknown[1022]: fetched base config from "system"
Apr 17 01:02:41.637117 ignition[1022]: parsing config with SHA512: d6745e7cec679bc924e2685fddbdfe951728a877d08c73fedea3d5559afc34ef5dd6e1bf8c9a4f0b0422abc8d834e7a5fa7dcd4a821b0bce8dae01bc04310c97
Apr 17 01:02:41.640288 unknown[1022]: fetched base config from "system"
Apr 17 01:02:41.640512 ignition[1022]: fetch: fetch complete
Apr 17 01:02:41.640291 unknown[1022]: fetched user config from "azure"
Apr 17 01:02:41.640515 ignition[1022]: fetch: fetch passed
Apr 17 01:02:41.649904 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 01:02:41.640566 ignition[1022]: Ignition finished successfully
Apr 17 01:02:41.658232 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 01:02:41.702487 ignition[1029]: Ignition 2.22.0
Apr 17 01:02:41.702504 ignition[1029]: Stage: kargs
Apr 17 01:02:41.710310 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 01:02:41.702660 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:41.717578 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 01:02:41.702667 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:41.703146 ignition[1029]: kargs: kargs passed
Apr 17 01:02:41.703184 ignition[1029]: Ignition finished successfully
Apr 17 01:02:41.760228 ignition[1035]: Ignition 2.22.0
Apr 17 01:02:41.760240 ignition[1035]: Stage: disks
Apr 17 01:02:41.762332 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 01:02:41.760414 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:41.770564 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 01:02:41.760421 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:41.780417 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 01:02:41.760988 ignition[1035]: disks: disks passed
Apr 17 01:02:41.794086 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 01:02:41.761032 ignition[1035]: Ignition finished successfully
Apr 17 01:02:41.805304 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 01:02:41.817451 systemd[1]: Reached target basic.target - Basic System.
Apr 17 01:02:41.829958 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 01:02:42.001417 systemd-networkd[1012]: eth0: Gained IPv6LL
Apr 17 01:02:42.027681 systemd-fsck[1043]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Apr 17 01:02:42.037267 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 01:02:42.052687 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 01:02:42.315127 kernel: EXT4-fs (sda9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none.
Apr 17 01:02:42.315123 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 01:02:42.319716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 01:02:42.347930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 01:02:42.365496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 01:02:42.381660 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 01:02:42.417467 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1057)
Apr 17 01:02:42.417486 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 17 01:02:42.417494 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 01:02:42.395939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 01:02:42.395969 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 01:02:42.451446 kernel: BTRFS info (device sda6): turning on async discard
Apr 17 01:02:42.451465 kernel: BTRFS info (device sda6): enabling free space tree
Apr 17 01:02:42.447123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 01:02:42.456388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 01:02:42.462698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 01:02:42.976313 coreos-metadata[1059]: Apr 17 01:02:42.976 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 17 01:02:42.985110 coreos-metadata[1059]: Apr 17 01:02:42.985 INFO Fetch successful
Apr 17 01:02:42.991319 coreos-metadata[1059]: Apr 17 01:02:42.986 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 17 01:02:43.002403 coreos-metadata[1059]: Apr 17 01:02:42.994 INFO Fetch successful
Apr 17 01:02:43.010841 coreos-metadata[1059]: Apr 17 01:02:43.010 INFO wrote hostname ci-4459.2.4-n-25f3036c32 to /sysroot/etc/hostname
Apr 17 01:02:43.020558 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 01:02:43.228951 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 01:02:43.286107 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory
Apr 17 01:02:43.295123 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 01:02:43.302324 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 01:02:44.690680 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 01:02:44.698027 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 01:02:44.717738 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 01:02:44.731669 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 01:02:44.743167 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 17 01:02:44.763124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 01:02:44.776130 ignition[1176]: INFO : Ignition 2.22.0
Apr 17 01:02:44.776130 ignition[1176]: INFO : Stage: mount
Apr 17 01:02:44.776130 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:44.776130 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:44.808442 ignition[1176]: INFO : mount: mount passed
Apr 17 01:02:44.808442 ignition[1176]: INFO : Ignition finished successfully
Apr 17 01:02:44.778319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 01:02:44.790655 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 01:02:44.819297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 01:02:44.852227 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187)
Apr 17 01:02:44.852262 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 17 01:02:44.864629 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 01:02:44.875630 kernel: BTRFS info (device sda6): turning on async discard
Apr 17 01:02:44.875655 kernel: BTRFS info (device sda6): enabling free space tree
Apr 17 01:02:44.877682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 01:02:44.904154 ignition[1205]: INFO : Ignition 2.22.0
Apr 17 01:02:44.904154 ignition[1205]: INFO : Stage: files
Apr 17 01:02:44.911806 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:44.911806 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:44.911806 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 01:02:44.911806 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 01:02:44.911806 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 01:02:45.151880 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 01:02:45.158608 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 01:02:45.158608 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 01:02:45.152224 unknown[1205]: wrote ssh authorized keys file for user: core
Apr 17 01:02:45.211658 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 01:02:45.221470 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 01:02:45.265630 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 01:02:45.639608 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 01:02:45.639608 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 01:02:45.658473 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 17 01:02:45.945581 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 01:02:46.486689 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 01:02:46.486689 ignition[1205]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 01:02:46.534008 ignition[1205]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 01:02:46.545571 ignition[1205]: INFO : files: files passed
Apr 17 01:02:46.545571 ignition[1205]: INFO : Ignition finished successfully
Apr 17 01:02:46.545775 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 01:02:46.563384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 01:02:46.591667 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 01:02:46.611934 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 01:02:46.616556 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 01:02:46.657646 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:02:46.657646 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:02:46.675384 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:02:46.668369 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 01:02:46.683036 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 01:02:46.698525 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 01:02:46.751769 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 01:02:46.751870 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 01:02:46.763204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 01:02:46.775071 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 01:02:46.785636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 01:02:46.786332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 01:02:46.823894 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 01:02:46.831990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 01:02:46.859316 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 01:02:46.864770 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:02:46.876810 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 01:02:46.887950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 01:02:46.888044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 01:02:46.901797 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 01:02:46.907452 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 01:02:46.919137 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 01:02:46.931296 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 01:02:46.942748 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 01:02:46.955075 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 01:02:46.967297 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 01:02:46.978595 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 01:02:46.991074 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 01:02:47.002200 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 01:02:47.012778 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 01:02:47.022091 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 01:02:47.022210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 01:02:47.037483 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 01:02:47.042975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 01:02:47.054530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 01:02:47.060837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 01:02:47.067297 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 01:02:47.067391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 01:02:47.081590 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 01:02:47.081671 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 01:02:47.088453 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 01:02:47.088521 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 01:02:47.099258 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 01:02:47.185148 ignition[1258]: INFO : Ignition 2.22.0
Apr 17 01:02:47.185148 ignition[1258]: INFO : Stage: umount
Apr 17 01:02:47.185148 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:02:47.185148 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 01:02:47.185148 ignition[1258]: INFO : umount: umount passed
Apr 17 01:02:47.185148 ignition[1258]: INFO : Ignition finished successfully
Apr 17 01:02:47.099331 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 01:02:47.113311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 01:02:47.130838 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 01:02:47.130951 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 01:02:47.162280 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 01:02:47.178806 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 01:02:47.179239 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 01:02:47.186020 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 01:02:47.186090 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 01:02:47.198331 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 01:02:47.198414 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 01:02:47.208437 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 01:02:47.208620 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 01:02:47.220619 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 01:02:47.220666 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 01:02:47.233387 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 01:02:47.233428 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 01:02:47.243241 systemd[1]: Stopped target network.target - Network.
Apr 17 01:02:47.256171 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 01:02:47.256226 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 01:02:47.270600 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 01:02:47.279404 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 01:02:47.289135 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 01:02:47.296169 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 01:02:47.304939 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 01:02:47.314919 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 01:02:47.314995 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 01:02:47.325898 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 01:02:47.325937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 01:02:47.335717 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 01:02:47.335775 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 01:02:47.346123 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 01:02:47.346153 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 01:02:47.355681 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 01:02:47.365512 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 01:02:47.594184 kernel: hv_netvsc 7ced8d79-b4a5-7ced-8d79-b4a57ced8d79 eth0: Data path switched from VF: enP62623s1
Apr 17 01:02:47.377549 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 01:02:47.378077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 01:02:47.378153 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 01:02:47.389311 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 01:02:47.389386 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 01:02:47.399879 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 01:02:47.400049 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 01:02:47.400161 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 01:02:47.414329 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 01:02:47.416199 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 01:02:47.426032 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 01:02:47.426074 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 01:02:47.441031 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 01:02:47.453481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 01:02:47.453534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 01:02:47.463525 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 01:02:47.463565 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 01:02:47.481719 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 01:02:47.481758 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 01:02:47.488172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 01:02:47.488213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 01:02:47.502978 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 01:02:47.511761 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 01:02:47.511810 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 01:02:47.537798 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 01:02:47.537976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 01:02:47.549454 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 01:02:47.549487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 01:02:47.559704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 01:02:47.559737 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 01:02:47.570474 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 01:02:47.570514 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 01:02:47.594268 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 01:02:47.594315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 01:02:47.604051 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 01:02:47.604092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 01:02:47.628258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 01:02:47.651064 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 01:02:47.651129 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 01:02:47.669428 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 01:02:47.669471 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 01:02:47.681594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:02:47.681629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:47.693058 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 01:02:47.693113 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 01:02:47.693137 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 01:02:47.693374 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 01:02:47.693460 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 01:02:47.704557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 01:02:47.704627 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 01:02:47.925329 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 01:02:47.925453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 01:02:47.933993 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 01:02:47.944197 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 01:02:47.944256 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 01:02:47.954257 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 01:02:47.983487 systemd[1]: Switching root.
Apr 17 01:02:48.086655 systemd-journald[226]: Journal stopped
Apr 17 01:02:52.766027 systemd-journald[226]: Received SIGTERM from PID 1 (systemd).
Apr 17 01:02:52.766045 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 01:02:52.766053 kernel: SELinux: policy capability open_perms=1
Apr 17 01:02:52.766058 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 01:02:52.766064 kernel: SELinux: policy capability always_check_network=0
Apr 17 01:02:52.766069 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 01:02:52.766075 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 01:02:52.766081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 01:02:52.766087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 01:02:52.766092 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 01:02:52.768139 kernel: audit: type=1403 audit(1776387768.792:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 01:02:52.768164 systemd[1]: Successfully loaded SELinux policy in 136.949ms.
Apr 17 01:02:52.768172 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.443ms.
Apr 17 01:02:52.768180 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 01:02:52.768187 systemd[1]: Detected virtualization microsoft.
Apr 17 01:02:52.768194 systemd[1]: Detected architecture arm64.
Apr 17 01:02:52.768200 systemd[1]: Detected first boot.
Apr 17 01:02:52.768207 systemd[1]: Hostname set to .
Apr 17 01:02:52.768213 systemd[1]: Initializing machine ID from random generator.
Apr 17 01:02:52.768219 zram_generator::config[1301]: No configuration found.
Apr 17 01:02:52.768225 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 01:02:52.768231 systemd[1]: Populated /etc with preset unit settings.
Apr 17 01:02:52.768237 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 01:02:52.768245 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 01:02:52.768251 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 01:02:52.768257 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 01:02:52.768263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 01:02:52.768269 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 01:02:52.768275 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 01:02:52.768281 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 01:02:52.768289 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 01:02:52.768295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 01:02:52.768301 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 01:02:52.768307 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 01:02:52.768317 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 01:02:52.768323 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 01:02:52.768329 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 01:02:52.768335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 01:02:52.768343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 01:02:52.768349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 01:02:52.768357 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 17 01:02:52.768363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 01:02:52.768369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 01:02:52.768375 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 01:02:52.768382 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 01:02:52.768388 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 01:02:52.768395 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 01:02:52.768401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:02:52.768407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 01:02:52.768413 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 01:02:52.768419 systemd[1]: Reached target swap.target - Swaps.
Apr 17 01:02:52.768425 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 01:02:52.768432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 01:02:52.768439 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 01:02:52.768446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 01:02:52.768453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 01:02:52.768459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 01:02:52.768465 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 01:02:52.768472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 01:02:52.768479 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 01:02:52.768485 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 01:02:52.768491 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 01:02:52.768498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 01:02:52.768504 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 01:02:52.768511 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 01:02:52.768517 systemd[1]: Reached target machines.target - Containers.
Apr 17 01:02:52.768523 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 01:02:52.768530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 01:02:52.768537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 01:02:52.768543 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 01:02:52.768549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 01:02:52.768555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 01:02:52.768562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 01:02:52.768568 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 01:02:52.768574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 01:02:52.768581 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 01:02:52.768589 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 01:02:52.768595 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 01:02:52.768601 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 01:02:52.768607 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 01:02:52.768614 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 01:02:52.768620 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 01:02:52.768626 kernel: fuse: init (API version 7.41)
Apr 17 01:02:52.768633 kernel: loop: module loaded
Apr 17 01:02:52.768639 kernel: ACPI: bus type drm_connector registered
Apr 17 01:02:52.768645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 01:02:52.768651 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 01:02:52.768657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 01:02:52.768664 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 01:02:52.768670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 01:02:52.768702 systemd-journald[1391]: Collecting audit messages is disabled.
Apr 17 01:02:52.768717 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 01:02:52.768724 systemd-journald[1391]: Journal started
Apr 17 01:02:52.768740 systemd-journald[1391]: Runtime Journal (/run/log/journal/f3847d34f1964207a79e98c3e48f50c9) is 8M, max 78.3M, 70.3M free.
Apr 17 01:02:51.840014 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 01:02:51.847642 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 01:02:51.847959 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 01:02:51.848247 systemd[1]: systemd-journald.service: Consumed 2.967s CPU time.
Apr 17 01:02:52.773089 systemd[1]: Stopped verity-setup.service.
Apr 17 01:02:52.791080 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 01:02:52.791824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 01:02:52.798938 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 01:02:52.805400 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 01:02:52.811519 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 01:02:52.817878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 01:02:52.824464 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 01:02:52.828705 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 01:02:52.833991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 01:02:52.840223 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 01:02:52.840350 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 01:02:52.845989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 01:02:52.846190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 01:02:52.851369 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 01:02:52.851479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 01:02:52.856914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 01:02:52.857031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 01:02:52.864075 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 01:02:52.864308 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 01:02:52.871292 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 01:02:52.871406 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 01:02:52.877387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 01:02:52.883457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 01:02:52.890888 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 01:02:52.899729 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 01:02:52.907724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 01:02:52.922893 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 01:02:52.930797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 01:02:52.944649 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 01:02:52.950910 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 01:02:52.950935 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 01:02:52.956905 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 01:02:52.963971 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 01:02:52.969299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 01:02:52.985172 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 01:02:52.998720 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 01:02:53.004789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 01:02:53.005451 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 01:02:53.011959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 01:02:53.014224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 01:02:53.020767 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 01:02:53.029216 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 01:02:53.037075 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 01:02:53.044822 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 01:02:53.055558 systemd-journald[1391]: Time spent on flushing to /var/log/journal/f3847d34f1964207a79e98c3e48f50c9 is 9.492ms for 929 entries.
Apr 17 01:02:53.055558 systemd-journald[1391]: System Journal (/var/log/journal/f3847d34f1964207a79e98c3e48f50c9) is 8M, max 2.6G, 2.6G free.
Apr 17 01:02:53.105592 systemd-journald[1391]: Received client request to flush runtime journal.
Apr 17 01:02:53.105654 kernel: loop0: detected capacity change from 0 to 27936
Apr 17 01:02:53.064067 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 01:02:53.074242 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 01:02:53.086005 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 01:02:53.107241 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 01:02:53.150908 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 01:02:53.154838 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 01:02:53.162633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 01:02:53.188876 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 01:02:53.195190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 01:02:53.383202 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Apr 17 01:02:53.383216 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Apr 17 01:02:53.387525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 01:02:53.492126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 01:02:53.550607 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 01:02:53.560225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 01:02:53.582110 kernel: loop1: detected capacity change from 0 to 119840
Apr 17 01:02:53.593781 systemd-udevd[1461]: Using default interface naming scheme 'v255'.
Apr 17 01:02:53.856850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 01:02:53.871579 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 01:02:53.928279 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 01:02:53.979972 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 17 01:02:53.986831 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 01:02:54.026175 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 01:02:54.044048 kernel: loop2: detected capacity change from 0 to 100632
Apr 17 01:02:54.044154 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 01:02:54.129713 systemd-networkd[1484]: lo: Link UP
Apr 17 01:02:54.129724 systemd-networkd[1484]: lo: Gained carrier
Apr 17 01:02:54.130772 systemd-networkd[1484]: Enumeration completed
Apr 17 01:02:54.130861 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 01:02:54.136587 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 01:02:54.136594 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 01:02:54.140136 kernel: hv_vmbus: registering driver hv_balloon
Apr 17 01:02:54.140202 kernel: hv_vmbus: registering driver hyperv_fb
Apr 17 01:02:54.141211 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 01:02:54.147121 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 17 01:02:54.147180 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 17 01:02:54.163251 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 17 01:02:54.163313 kernel: Console: switching to colour dummy device 80x25
Apr 17 01:02:54.163327 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 17 01:02:54.179226 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 01:02:54.187427 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 01:02:54.200006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:54.214801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:02:54.215263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:54.228223 kernel: mlx5_core f49f:00:02.0 enP62623s1: Link up
Apr 17 01:02:54.228721 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 01:02:54.231037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:02:54.258125 kernel: hv_netvsc 7ced8d79-b4a5-7ced-8d79-b4a57ced8d79 eth0: Data path switched to VF: enP62623s1
Apr 17 01:02:54.259761 systemd-networkd[1484]: enP62623s1: Link UP
Apr 17 01:02:54.260192 systemd-networkd[1484]: eth0: Link UP
Apr 17 01:02:54.260399 systemd-networkd[1484]: eth0: Gained carrier
Apr 17 01:02:54.260905 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 01:02:54.262116 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 01:02:54.273598 systemd-networkd[1484]: enP62623s1: Gained carrier
Apr 17 01:02:54.280174 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.24/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 17 01:02:54.324057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 17 01:02:54.333840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 01:02:54.346174 kernel: MACsec IEEE 802.1AE
Apr 17 01:02:54.408425 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 01:02:54.566179 kernel: loop3: detected capacity change from 0 to 209336
Apr 17 01:02:54.613122 kernel: loop4: detected capacity change from 0 to 27936
Apr 17 01:02:54.628121 kernel: loop5: detected capacity change from 0 to 119840
Apr 17 01:02:54.642118 kernel: loop6: detected capacity change from 0 to 100632
Apr 17 01:02:54.656115 kernel: loop7: detected capacity change from 0 to 209336
Apr 17 01:02:54.672168 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 17 01:02:54.672561 (sd-merge)[1609]: Merged extensions into '/usr'.
Apr 17 01:02:54.676129 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 01:02:54.676144 systemd[1]: Reloading...
Apr 17 01:02:54.750196 zram_generator::config[1652]: No configuration found.
Apr 17 01:02:54.899220 systemd[1]: Reloading finished in 222 ms.
Apr 17 01:02:54.920075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:02:54.928758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 01:02:54.943001 systemd[1]: Starting ensure-sysext.service...
Apr 17 01:02:54.950226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 01:02:54.963749 systemd[1]: Reload requested from client PID 1697 ('systemctl') (unit ensure-sysext.service)...
Apr 17 01:02:54.963760 systemd[1]: Reloading...
Apr 17 01:02:54.965166 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 01:02:54.965496 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 01:02:54.965886 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 01:02:54.966293 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 01:02:54.967399 systemd-tmpfiles[1698]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 01:02:54.967787 systemd-tmpfiles[1698]: ACLs are not supported, ignoring.
Apr 17 01:02:54.967965 systemd-tmpfiles[1698]: ACLs are not supported, ignoring.
Apr 17 01:02:55.007603 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 01:02:55.007742 systemd-tmpfiles[1698]: Skipping /boot
Apr 17 01:02:55.014338 systemd-tmpfiles[1698]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 01:02:55.014433 systemd-tmpfiles[1698]: Skipping /boot
Apr 17 01:02:55.037176 zram_generator::config[1732]: No configuration found.
Apr 17 01:02:55.185708 systemd[1]: Reloading finished in 221 ms.
Apr 17 01:02:55.197089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 01:02:55.220231 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 01:02:55.243398 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 01:02:55.253788 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 01:02:55.270557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 01:02:55.279244 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 01:02:55.294317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 01:02:55.297319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 01:02:55.308405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 01:02:55.319183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 01:02:55.328239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 01:02:55.328372 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 01:02:55.329794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 01:02:55.330498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 01:02:55.339577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 01:02:55.339695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 01:02:55.348596 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 01:02:55.348728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 01:02:55.357588 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 01:02:55.371906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 01:02:55.374572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 01:02:55.384991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 01:02:55.392699 systemd-resolved[1790]: Positive Trust Anchors: Apr 17 01:02:55.392714 systemd-resolved[1790]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 01:02:55.392733 systemd-resolved[1790]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 01:02:55.396849 systemd-resolved[1790]: Using system hostname 'ci-4459.2.4-n-25f3036c32'. Apr 17 01:02:55.405348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 01:02:55.412077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 01:02:55.412187 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 01:02:55.412721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 01:02:55.420749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 01:02:55.429176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 01:02:55.431237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 01:02:55.440058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 01:02:55.440199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 01:02:55.446253 systemd-networkd[1484]: eth0: Gained IPv6LL Apr 17 01:02:55.449385 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 17 01:02:55.451168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 01:02:55.459329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 01:02:55.475008 systemd[1]: Finished ensure-sysext.service. Apr 17 01:02:55.481507 systemd[1]: Reached target network.target - Network. Apr 17 01:02:55.486865 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 01:02:55.492753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 01:02:55.502015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 01:02:55.505268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 01:02:55.514992 augenrules[1828]: No rules Apr 17 01:02:55.517785 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 01:02:55.525367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 01:02:55.537331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 01:02:55.544762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 01:02:55.544882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 01:02:55.544971 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 01:02:55.553837 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 01:02:55.554144 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 01:02:55.561497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 01:02:55.561751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 17 01:02:55.570197 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 01:02:55.570423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 01:02:55.577711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 01:02:55.577962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 01:02:55.587532 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 01:02:55.587800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 01:02:55.597750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 01:02:55.597917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 01:02:55.922921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 01:02:55.931126 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 01:02:58.730859 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 01:02:58.743191 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 01:02:58.750533 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 01:02:58.768268 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 01:02:58.773525 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 01:02:58.778337 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 17 01:02:58.783474 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 01:02:58.789068 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 01:02:58.794400 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 01:02:58.800603 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 01:02:58.806692 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 01:02:58.806720 systemd[1]: Reached target paths.target - Path Units. Apr 17 01:02:58.810908 systemd[1]: Reached target timers.target - Timer Units. Apr 17 01:02:58.830507 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 01:02:58.837058 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 01:02:58.842572 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 17 01:02:58.847763 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 17 01:02:58.853383 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 17 01:02:58.860174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 01:02:58.866127 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 17 01:02:58.873224 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 01:02:58.878615 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 01:02:58.883456 systemd[1]: Reached target basic.target - Basic System. Apr 17 01:02:58.887854 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 17 01:02:58.887879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 01:02:58.890111 systemd[1]: Starting chronyd.service - NTP client/server... Apr 17 01:02:58.901189 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 01:02:58.907939 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 01:02:58.918661 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 01:02:58.924889 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 01:02:58.932311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 01:02:58.945010 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 01:02:58.946841 jq[1855]: false Apr 17 01:02:58.949757 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 01:02:58.953228 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 17 01:02:58.958865 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 17 01:02:58.959675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:02:58.969755 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 01:02:58.977264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 01:02:58.988011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 01:02:58.996416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 01:02:59.002621 KVP[1857]: KVP starting; pid is:1857 Apr 17 01:02:59.005849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 17 01:02:59.010959 chronyd[1847]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Apr 17 01:02:59.014453 KVP[1857]: KVP LIC Version: 3.1 Apr 17 01:02:59.014705 extend-filesystems[1856]: Found /dev/sda6 Apr 17 01:02:59.018036 kernel: hv_utils: KVP IC version 4.0 Apr 17 01:02:59.024270 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 01:02:59.041278 extend-filesystems[1856]: Found /dev/sda9 Apr 17 01:02:59.041278 extend-filesystems[1856]: Checking size of /dev/sda9 Apr 17 01:02:59.039031 chronyd[1847]: Timezone right/UTC failed leap second check, ignoring Apr 17 01:02:59.033285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 01:02:59.041263 chronyd[1847]: Loaded seccomp filter (level 2) Apr 17 01:02:59.034207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 01:02:59.034880 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 01:02:59.066056 jq[1883]: true Apr 17 01:02:59.047229 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 01:02:59.059787 systemd[1]: Started chronyd.service - NTP client/server. Apr 17 01:02:59.067627 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 01:02:59.075957 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 01:02:59.076124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 01:02:59.079214 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 01:02:59.081367 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 01:02:59.087687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 01:02:59.096625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 01:02:59.098141 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 01:02:59.123447 extend-filesystems[1856]: Old size kept for /dev/sda9 Apr 17 01:02:59.123362 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 01:02:59.140294 update_engine[1880]: I20260417 01:02:59.130836 1880 main.cc:92] Flatcar Update Engine starting Apr 17 01:02:59.130700 systemd-logind[1875]: New seat seat0. Apr 17 01:02:59.140608 jq[1895]: true Apr 17 01:02:59.131317 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 01:02:59.134446 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 01:02:59.135835 (ntainerd)[1896]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 01:02:59.145941 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 01:02:59.195565 tar[1891]: linux-arm64/LICENSE Apr 17 01:02:59.197479 tar[1891]: linux-arm64/helm Apr 17 01:02:59.285299 bash[1956]: Updated "/home/core/.ssh/authorized_keys" Apr 17 01:02:59.290139 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 01:02:59.301851 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 01:02:59.339689 dbus-daemon[1850]: [system] SELinux support is enabled Apr 17 01:02:59.339851 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 01:02:59.346661 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 17 01:02:59.346691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 01:02:59.348600 dbus-daemon[1850]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 01:02:59.353071 update_engine[1880]: I20260417 01:02:59.353020 1880 update_check_scheduler.cc:74] Next update check in 4m35s Apr 17 01:02:59.356208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 01:02:59.356231 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 01:02:59.363333 systemd[1]: Started update-engine.service - Update Engine. Apr 17 01:02:59.371306 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 01:02:59.423409 coreos-metadata[1849]: Apr 17 01:02:59.422 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 17 01:02:59.432545 coreos-metadata[1849]: Apr 17 01:02:59.432 INFO Fetch successful Apr 17 01:02:59.432748 coreos-metadata[1849]: Apr 17 01:02:59.432 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 17 01:02:59.437238 coreos-metadata[1849]: Apr 17 01:02:59.437 INFO Fetch successful Apr 17 01:02:59.437653 coreos-metadata[1849]: Apr 17 01:02:59.437 INFO Fetching http://168.63.129.16/machine/dfda3ecd-2e54-4299-bd8a-091f9197d20a/537a294c%2Dcebc%2D4ee6%2Dbcff%2D2249199241cd.%5Fci%2D4459.2.4%2Dn%2D25f3036c32?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 17 01:02:59.439208 coreos-metadata[1849]: Apr 17 01:02:59.438 INFO Fetch successful Apr 17 01:02:59.439459 coreos-metadata[1849]: Apr 17 01:02:59.439 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 17 01:02:59.448640 coreos-metadata[1849]: Apr 17 01:02:59.447 INFO Fetch successful Apr 17 01:02:59.482198 systemd[1]: Finished coreos-metadata.service - 
Flatcar Metadata Agent. Apr 17 01:02:59.491261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 01:02:59.577112 sshd_keygen[1884]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 01:02:59.577572 tar[1891]: linux-arm64/README.md Apr 17 01:02:59.594974 locksmithd[1992]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 01:02:59.595170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 01:02:59.610251 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 01:02:59.618357 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 01:02:59.633236 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 17 01:02:59.640786 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 01:02:59.645464 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 01:02:59.656283 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 01:02:59.668159 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 17 01:02:59.677976 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 01:02:59.687493 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 01:02:59.695323 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 17 01:02:59.703240 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 01:02:59.939554 containerd[1896]: time="2026-04-17T01:02:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 17 01:02:59.940433 containerd[1896]: time="2026-04-17T01:02:59.940412512Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 17 01:02:59.947276 containerd[1896]: time="2026-04-17T01:02:59.947251264Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.136µs" Apr 17 01:02:59.947365 containerd[1896]: time="2026-04-17T01:02:59.947348608Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 17 01:02:59.947415 containerd[1896]: time="2026-04-17T01:02:59.947405232Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 17 01:02:59.947587 containerd[1896]: time="2026-04-17T01:02:59.947569080Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 17 01:02:59.947643 containerd[1896]: time="2026-04-17T01:02:59.947631704Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 17 01:02:59.947704 containerd[1896]: time="2026-04-17T01:02:59.947694384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 01:02:59.947810 containerd[1896]: time="2026-04-17T01:02:59.947795992Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 01:02:59.947886 containerd[1896]: time="2026-04-17T01:02:59.947873664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 
01:02:59.948137 containerd[1896]: time="2026-04-17T01:02:59.948117632Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948214 containerd[1896]: time="2026-04-17T01:02:59.948200720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948262 containerd[1896]: time="2026-04-17T01:02:59.948252080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948309 containerd[1896]: time="2026-04-17T01:02:59.948298512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948436 containerd[1896]: time="2026-04-17T01:02:59.948419504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948671 containerd[1896]: time="2026-04-17T01:02:59.948652376Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948749 containerd[1896]: time="2026-04-17T01:02:59.948734760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 01:02:59.948797 containerd[1896]: time="2026-04-17T01:02:59.948786272Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 17 01:02:59.948863 containerd[1896]: time="2026-04-17T01:02:59.948853872Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 17 01:02:59.949068 
containerd[1896]: time="2026-04-17T01:02:59.949056560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 17 01:02:59.949405 containerd[1896]: time="2026-04-17T01:02:59.949386152Z" level=info msg="metadata content store policy set" policy=shared Apr 17 01:02:59.950150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:02:59.967659 containerd[1896]: time="2026-04-17T01:02:59.967616688Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967792304Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967814632Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967825824Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967839792Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967846824Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967854672Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 17 01:02:59.967889 containerd[1896]: time="2026-04-17T01:02:59.967866936Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 17 01:02:59.968117 containerd[1896]: time="2026-04-17T01:02:59.967875184Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Apr 17 01:02:59.968117 containerd[1896]: time="2026-04-17T01:02:59.968059632Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 17 01:02:59.968117 containerd[1896]: time="2026-04-17T01:02:59.968070704Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 17 01:02:59.968117 containerd[1896]: time="2026-04-17T01:02:59.968080536Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 17 01:02:59.968351 containerd[1896]: time="2026-04-17T01:02:59.968330072Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 17 01:02:59.968416 containerd[1896]: time="2026-04-17T01:02:59.968404136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 17 01:02:59.968479 containerd[1896]: time="2026-04-17T01:02:59.968467888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 17 01:02:59.968519 containerd[1896]: time="2026-04-17T01:02:59.968509976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 17 01:02:59.968554 containerd[1896]: time="2026-04-17T01:02:59.968545136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 17 01:02:59.968589 containerd[1896]: time="2026-04-17T01:02:59.968580528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 17 01:02:59.968633 containerd[1896]: time="2026-04-17T01:02:59.968624208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 17 01:02:59.968679 containerd[1896]: time="2026-04-17T01:02:59.968669744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 17 01:02:59.968718 containerd[1896]: 
time="2026-04-17T01:02:59.968709224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 17 01:02:59.968751 containerd[1896]: time="2026-04-17T01:02:59.968742960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 17 01:02:59.968821 containerd[1896]: time="2026-04-17T01:02:59.968809264Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 17 01:02:59.968905 containerd[1896]: time="2026-04-17T01:02:59.968893392Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 17 01:02:59.968952 containerd[1896]: time="2026-04-17T01:02:59.968942128Z" level=info msg="Start snapshots syncer" Apr 17 01:02:59.969019 containerd[1896]: time="2026-04-17T01:02:59.969006880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 17 01:02:59.969341 containerd[1896]: time="2026-04-17T01:02:59.969308640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969488104Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969552136Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969663856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969687728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969695488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969702064Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969709976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969717256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969726520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969745152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969753256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969760848Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969803656Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 01:02:59.970129 containerd[1896]: time="2026-04-17T01:02:59.969816056Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969821872Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969827344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969831624Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969837200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969843552Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969856848Z" level=info msg="runtime interface created" Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969860152Z" level=info msg="created NRI interface" Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969865968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969874888Z" level=info msg="Connect containerd service" Apr 17 01:02:59.970338 containerd[1896]: time="2026-04-17T01:02:59.969889256Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 01:02:59.970894 
containerd[1896]: time="2026-04-17T01:02:59.970863592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 01:03:00.074188 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:00.347395 containerd[1896]: time="2026-04-17T01:03:00.347271400Z" level=info msg="Start subscribing containerd event" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347454440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347536200Z" level=info msg="Start recovering state" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347588864Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347638072Z" level=info msg="Start event monitor" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347651424Z" level=info msg="Start cni network conf syncer for default" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347656952Z" level=info msg="Start streaming server" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347665336Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347670400Z" level=info msg="runtime interface starting up..." Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347673840Z" level=info msg="starting plugins..." 
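Editor's note: the `failed to load cni during init` error above is usually benign on a freshly provisioned node. The CRI plugin looks for a network config in /etc/cni/net.d, and that file is normally only written later, when a CNI plugin (flannel, Calico, etc.) is installed. A minimal bridge conflist of the kind that would satisfy this check looks roughly like the following — all names and subnets here are illustrative, not taken from this log:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Once such a file appears under /etc/cni/net.d, containerd picks it up without a restart via the conf syncer started below ("Start cni network conf syncer for default").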
Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347684144Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 01:03:00.347915 containerd[1896]: time="2026-04-17T01:03:00.347804936Z" level=info msg="containerd successfully booted in 0.408563s" Apr 17 01:03:00.348226 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 01:03:00.354863 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 01:03:00.363209 systemd[1]: Startup finished in 1.625s (kernel) + 12.549s (initrd) + 11.705s (userspace) = 25.881s. Apr 17 01:03:00.492118 kubelet[2040]: E0417 01:03:00.492058 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:00.494830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:00.494941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:03:00.495238 systemd[1]: kubelet.service: Consumed 523ms CPU time, 257.7M memory peak. Apr 17 01:03:00.693056 login[2028]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Apr 17 01:03:00.694276 login[2030]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:00.699264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 01:03:00.700047 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 01:03:00.706103 systemd-logind[1875]: New session 1 of user core. Apr 17 01:03:00.737255 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 01:03:00.740193 systemd[1]: Starting user@500.service - User Manager for UID 500... 
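Editor's note: the kubelet failure above is the expected state before `kubeadm init` or `kubeadm join` has run — the unit is started with a `--config` pointing at /var/lib/kubelet/config.yaml, which kubeadm only writes during bootstrap. A small sketch (hypothetical helper, not part of this system) that distinguishes "not yet bootstrapped" from a genuine kubelet failure:

```python
from pathlib import Path

def kubelet_bootstrap_state(config_path: str = "/var/lib/kubelet/config.yaml") -> str:
    """Report whether the kubeadm-written kubelet config exists yet.

    The default path is the kubeadm convention quoted in the error above;
    override it if your distribution places the file elsewhere.
    """
    return "configured" if Path(config_path).is_file() else "not-bootstrapped"

# On a fresh node, as in this log, the file does not exist yet.
print(kubelet_bootstrap_state())
```

On this machine the state stays "not-bootstrapped", which is why systemd keeps scheduling restart jobs further down in the log.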
Apr 17 01:03:00.746898 (systemd)[2066]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 01:03:00.748920 systemd-logind[1875]: New session c1 of user core. Apr 17 01:03:00.847647 systemd[2066]: Queued start job for default target default.target. Apr 17 01:03:00.853819 systemd[2066]: Created slice app.slice - User Application Slice. Apr 17 01:03:00.853939 systemd[2066]: Reached target paths.target - Paths. Apr 17 01:03:00.854050 systemd[2066]: Reached target timers.target - Timers. Apr 17 01:03:00.855125 systemd[2066]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 01:03:00.863180 systemd[2066]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 01:03:00.863317 systemd[2066]: Reached target sockets.target - Sockets. Apr 17 01:03:00.863409 systemd[2066]: Reached target basic.target - Basic System. Apr 17 01:03:00.863567 systemd[2066]: Reached target default.target - Main User Target. Apr 17 01:03:00.863645 systemd[2066]: Startup finished in 110ms. Apr 17 01:03:00.863755 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 01:03:00.864864 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 17 01:03:01.582061 waagent[2025]: 2026-04-17T01:03:01.581985Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Apr 17 01:03:01.591221 waagent[2025]: 2026-04-17T01:03:01.587132Z INFO Daemon Daemon OS: flatcar 4459.2.4 Apr 17 01:03:01.591417 waagent[2025]: 2026-04-17T01:03:01.591376Z INFO Daemon Daemon Python: 3.11.13 Apr 17 01:03:01.595690 waagent[2025]: 2026-04-17T01:03:01.595650Z INFO Daemon Daemon Run daemon Apr 17 01:03:01.599752 waagent[2025]: 2026-04-17T01:03:01.599716Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Apr 17 01:03:01.607812 waagent[2025]: 2026-04-17T01:03:01.607775Z INFO Daemon Daemon Using waagent for provisioning Apr 17 01:03:01.613194 waagent[2025]: 2026-04-17T01:03:01.613156Z INFO Daemon Daemon Activate resource disk Apr 17 01:03:01.617330 waagent[2025]: 2026-04-17T01:03:01.617297Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 17 01:03:01.626774 waagent[2025]: 2026-04-17T01:03:01.626735Z INFO Daemon Daemon Found device: None Apr 17 01:03:01.630772 waagent[2025]: 2026-04-17T01:03:01.630742Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 17 01:03:01.638631 waagent[2025]: 2026-04-17T01:03:01.638601Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 17 01:03:01.649680 waagent[2025]: 2026-04-17T01:03:01.649638Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 01:03:01.654740 waagent[2025]: 2026-04-17T01:03:01.654711Z INFO Daemon Daemon Running default provisioning handler Apr 17 01:03:01.665114 waagent[2025]: 2026-04-17T01:03:01.665052Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 17 01:03:01.677010 waagent[2025]: 2026-04-17T01:03:01.676966Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 17 01:03:01.685534 waagent[2025]: 2026-04-17T01:03:01.685500Z INFO Daemon Daemon cloud-init is enabled: False Apr 17 01:03:01.690240 waagent[2025]: 2026-04-17T01:03:01.690214Z INFO Daemon Daemon Copying ovf-env.xml Apr 17 01:03:01.694387 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:01.699243 systemd-logind[1875]: New session 2 of user core. Apr 17 01:03:01.704224 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 01:03:01.765971 waagent[2025]: 2026-04-17T01:03:01.764786Z INFO Daemon Daemon Successfully mounted dvd Apr 17 01:03:01.797513 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 17 01:03:01.799279 waagent[2025]: 2026-04-17T01:03:01.799214Z INFO Daemon Daemon Detect protocol endpoint Apr 17 01:03:01.803324 waagent[2025]: 2026-04-17T01:03:01.803284Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 01:03:01.808308 waagent[2025]: 2026-04-17T01:03:01.808271Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 17 01:03:01.813883 waagent[2025]: 2026-04-17T01:03:01.813852Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 17 01:03:01.818742 waagent[2025]: 2026-04-17T01:03:01.818706Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 17 01:03:01.823100 waagent[2025]: 2026-04-17T01:03:01.823069Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 17 01:03:01.870562 waagent[2025]: 2026-04-17T01:03:01.870484Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 17 01:03:01.876390 waagent[2025]: 2026-04-17T01:03:01.876364Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 17 01:03:01.881008 waagent[2025]: 2026-04-17T01:03:01.880977Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 17 01:03:01.991473 waagent[2025]: 2026-04-17T01:03:01.991387Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 17 01:03:01.997296 waagent[2025]: 2026-04-17T01:03:01.997254Z INFO Daemon Daemon Forcing an update of the goal state. Apr 17 01:03:02.004878 waagent[2025]: 2026-04-17T01:03:02.004838Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 01:03:02.024988 waagent[2025]: 2026-04-17T01:03:02.024953Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Apr 17 01:03:02.030188 waagent[2025]: 2026-04-17T01:03:02.030154Z INFO Daemon Apr 17 01:03:02.032545 waagent[2025]: 2026-04-17T01:03:02.032515Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b862509f-0ce1-4ffc-b834-789e8be7beb6 eTag: 13222533627115491248 source: Fabric] Apr 17 01:03:02.042361 waagent[2025]: 2026-04-17T01:03:02.042328Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 17 01:03:02.047792 waagent[2025]: 2026-04-17T01:03:02.047764Z INFO Daemon Apr 17 01:03:02.049949 waagent[2025]: 2026-04-17T01:03:02.049918Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 17 01:03:02.059211 waagent[2025]: 2026-04-17T01:03:02.059185Z INFO Daemon Daemon Downloading artifacts profile blob Apr 17 01:03:02.189662 waagent[2025]: 2026-04-17T01:03:02.189592Z INFO Daemon Downloaded certificate {'thumbprint': '1BF930B55125E4C3376A2AD9527EBC97B920BEF0', 'hasPrivateKey': True} Apr 17 01:03:02.197614 waagent[2025]: 2026-04-17T01:03:02.197575Z INFO Daemon Fetch goal state completed Apr 17 01:03:02.236532 waagent[2025]: 2026-04-17T01:03:02.236491Z INFO Daemon Daemon Starting provisioning Apr 17 01:03:02.240840 waagent[2025]: 2026-04-17T01:03:02.240802Z INFO Daemon Daemon Handle ovf-env.xml. Apr 17 01:03:02.245010 waagent[2025]: 2026-04-17T01:03:02.244978Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-25f3036c32] Apr 17 01:03:02.251629 waagent[2025]: 2026-04-17T01:03:02.251586Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-25f3036c32] Apr 17 01:03:02.256997 waagent[2025]: 2026-04-17T01:03:02.256960Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 17 01:03:02.262631 waagent[2025]: 2026-04-17T01:03:02.262599Z INFO Daemon Daemon Primary interface is [eth0] Apr 17 01:03:02.272819 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 01:03:02.272825 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 17 01:03:02.272876 systemd-networkd[1484]: eth0: DHCP lease lost Apr 17 01:03:02.274112 waagent[2025]: 2026-04-17T01:03:02.274053Z INFO Daemon Daemon Create user account if not exists Apr 17 01:03:02.278414 waagent[2025]: 2026-04-17T01:03:02.278381Z INFO Daemon Daemon User core already exists, skip useradd Apr 17 01:03:02.282880 waagent[2025]: 2026-04-17T01:03:02.282844Z INFO Daemon Daemon Configure sudoer Apr 17 01:03:02.286713 waagent[2025]: 2026-04-17T01:03:02.286675Z INFO Daemon Daemon Configure sshd Apr 17 01:03:02.290666 waagent[2025]: 2026-04-17T01:03:02.290629Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 17 01:03:02.304585 waagent[2025]: 2026-04-17T01:03:02.300656Z INFO Daemon Daemon Deploy ssh public key. Apr 17 01:03:02.313142 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.24/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 01:03:03.461244 waagent[2025]: 2026-04-17T01:03:03.461198Z INFO Daemon Daemon Provisioning complete Apr 17 01:03:03.473975 waagent[2025]: 2026-04-17T01:03:03.473935Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 17 01:03:03.479145 waagent[2025]: 2026-04-17T01:03:03.479108Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 17 01:03:03.486631 waagent[2025]: 2026-04-17T01:03:03.486604Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Apr 17 01:03:03.583842 waagent[2116]: 2026-04-17T01:03:03.583779Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Apr 17 01:03:03.585211 waagent[2116]: 2026-04-17T01:03:03.584226Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Apr 17 01:03:03.585211 waagent[2116]: 2026-04-17T01:03:03.584283Z INFO ExtHandler ExtHandler Python: 3.11.13 Apr 17 01:03:03.585211 waagent[2116]: 2026-04-17T01:03:03.584320Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Apr 17 01:03:03.625976 waagent[2116]: 2026-04-17T01:03:03.625931Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Apr 17 01:03:03.626244 waagent[2116]: 2026-04-17T01:03:03.626217Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 01:03:03.626372 waagent[2116]: 2026-04-17T01:03:03.626349Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 01:03:03.631626 waagent[2116]: 2026-04-17T01:03:03.631583Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 01:03:03.636255 waagent[2116]: 2026-04-17T01:03:03.636224Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Apr 17 01:03:03.636678 waagent[2116]: 2026-04-17T01:03:03.636646Z INFO ExtHandler Apr 17 01:03:03.636796 waagent[2116]: 2026-04-17T01:03:03.636775Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 82d2e7f1-da2b-4e40-8032-88d119536f39 eTag: 13222533627115491248 source: Fabric] Apr 17 01:03:03.637089 waagent[2116]: 2026-04-17T01:03:03.637060Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 17 01:03:03.637617 waagent[2116]: 2026-04-17T01:03:03.637586Z INFO ExtHandler Apr 17 01:03:03.637746 waagent[2116]: 2026-04-17T01:03:03.637723Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 17 01:03:03.640650 waagent[2116]: 2026-04-17T01:03:03.640622Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 17 01:03:03.693645 waagent[2116]: 2026-04-17T01:03:03.693600Z INFO ExtHandler Downloaded certificate {'thumbprint': '1BF930B55125E4C3376A2AD9527EBC97B920BEF0', 'hasPrivateKey': True} Apr 17 01:03:03.694278 waagent[2116]: 2026-04-17T01:03:03.694244Z INFO ExtHandler Fetch goal state completed Apr 17 01:03:03.705858 waagent[2116]: 2026-04-17T01:03:03.705468Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Apr 17 01:03:03.708698 waagent[2116]: 2026-04-17T01:03:03.708658Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2116 Apr 17 01:03:03.708901 waagent[2116]: 2026-04-17T01:03:03.708873Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 17 01:03:03.709248 waagent[2116]: 2026-04-17T01:03:03.709220Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Apr 17 01:03:03.710428 waagent[2116]: 2026-04-17T01:03:03.710393Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Apr 17 01:03:03.710816 waagent[2116]: 2026-04-17T01:03:03.710786Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Apr 17 01:03:03.711002 waagent[2116]: 2026-04-17T01:03:03.710975Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Apr 17 01:03:03.711569 waagent[2116]: 2026-04-17T01:03:03.711506Z INFO ExtHandler ExtHandler 
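Editor's note: the two "not processing the operation" lines above reflect agent-side configuration rather than an error. They correspond to settings of roughly this shape in the agent's configuration file (an illustrative waagent.conf excerpt; the exact file location and override mechanism on a Flatcar image may differ):

```
# waagent.conf (excerpt, illustrative)
AutoUpdate.Enabled=n
AutoUpdate.UpdateToLatestVersion=n
```

With both disabled, the installed agent (2.12.0.4 here) runs as-is instead of self-updating to the latest goal-state agent.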
Starting setup for Persistent firewall rules Apr 17 01:03:03.763518 waagent[2116]: 2026-04-17T01:03:03.763487Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 17 01:03:03.763788 waagent[2116]: 2026-04-17T01:03:03.763757Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 17 01:03:03.768243 waagent[2116]: 2026-04-17T01:03:03.768214Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 17 01:03:03.772648 systemd[1]: Reload requested from client PID 2131 ('systemctl') (unit waagent.service)... Apr 17 01:03:03.772843 systemd[1]: Reloading... Apr 17 01:03:03.837129 zram_generator::config[2170]: No configuration found. Apr 17 01:03:03.988346 systemd[1]: Reloading finished in 215 ms. Apr 17 01:03:04.001128 waagent[2116]: 2026-04-17T01:03:04.000845Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 17 01:03:04.001128 waagent[2116]: 2026-04-17T01:03:04.000996Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 17 01:03:04.789310 waagent[2116]: 2026-04-17T01:03:04.789227Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 17 01:03:04.789607 waagent[2116]: 2026-04-17T01:03:04.789548Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Apr 17 01:03:04.790242 waagent[2116]: 2026-04-17T01:03:04.790197Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 17 01:03:04.790574 waagent[2116]: 2026-04-17T01:03:04.790497Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Apr 17 01:03:04.791372 waagent[2116]: 2026-04-17T01:03:04.790747Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 01:03:04.791372 waagent[2116]: 2026-04-17T01:03:04.790818Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 01:03:04.791372 waagent[2116]: 2026-04-17T01:03:04.790977Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 17 01:03:04.791372 waagent[2116]: 2026-04-17T01:03:04.791125Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 17 01:03:04.791372 waagent[2116]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 17 01:03:04.791372 waagent[2116]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 17 01:03:04.791372 waagent[2116]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 17 01:03:04.791372 waagent[2116]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 17 01:03:04.791372 waagent[2116]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 17 01:03:04.791372 waagent[2116]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 17 01:03:04.791649 waagent[2116]: 2026-04-17T01:03:04.791611Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 17 01:03:04.791698 waagent[2116]: 2026-04-17T01:03:04.791657Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 17 01:03:04.792046 waagent[2116]: 2026-04-17T01:03:04.792012Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 17 01:03:04.792210 waagent[2116]: 2026-04-17T01:03:04.792171Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Apr 17 01:03:04.792657 waagent[2116]: 2026-04-17T01:03:04.792627Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 01:03:04.792766 waagent[2116]: 2026-04-17T01:03:04.792746Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 17 01:03:04.792855 waagent[2116]: 2026-04-17T01:03:04.792838Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 01:03:04.793343 waagent[2116]: 2026-04-17T01:03:04.793313Z INFO EnvHandler ExtHandler Configure routes Apr 17 01:03:04.793516 waagent[2116]: 2026-04-17T01:03:04.793498Z INFO EnvHandler ExtHandler Gateway:None Apr 17 01:03:04.795572 waagent[2116]: 2026-04-17T01:03:04.795535Z INFO EnvHandler ExtHandler Routes:None Apr 17 01:03:04.798558 waagent[2116]: 2026-04-17T01:03:04.798521Z INFO ExtHandler ExtHandler Apr 17 01:03:04.798620 waagent[2116]: 2026-04-17T01:03:04.798586Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bea42f4d-58ef-453f-889d-94ca7823575c correlation 2d54f349-676e-4326-9b46-cb6a700cdce6 created: 2026-04-17T01:02:05.666534Z] Apr 17 01:03:04.798861 waagent[2116]: 2026-04-17T01:03:04.798830Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 17 01:03:04.799279 waagent[2116]: 2026-04-17T01:03:04.799250Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Apr 17 01:03:04.822713 waagent[2116]: 2026-04-17T01:03:04.822663Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Apr 17 01:03:04.822713 waagent[2116]: Try `iptables -h' or 'iptables --help' for more information.) 
Apr 17 01:03:04.823014 waagent[2116]: 2026-04-17T01:03:04.822983Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B352155A-BD07-4026-AD15-3B51BC3D610A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Apr 17 01:03:04.853128 waagent[2116]: 2026-04-17T01:03:04.852998Z INFO MonitorHandler ExtHandler Network interfaces: Apr 17 01:03:04.853128 waagent[2116]: Executing ['ip', '-a', '-o', 'link']: Apr 17 01:03:04.853128 waagent[2116]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 17 01:03:04.853128 waagent[2116]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:79:b4:a5 brd ff:ff:ff:ff:ff:ff Apr 17 01:03:04.853128 waagent[2116]: 3: enP62623s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:79:b4:a5 brd ff:ff:ff:ff:ff:ff\ altname enP62623p0s2 Apr 17 01:03:04.853128 waagent[2116]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 17 01:03:04.853128 waagent[2116]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 17 01:03:04.853128 waagent[2116]: 2: eth0 inet 10.0.0.24/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 17 01:03:04.853128 waagent[2116]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 17 01:03:04.853128 waagent[2116]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 17 01:03:04.853128 waagent[2116]: 2: eth0 inet6 fe80::7eed:8dff:fe79:b4a5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 17 01:03:04.894663 waagent[2116]: 2026-04-17T01:03:04.894614Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Apr 17 01:03:04.894663 waagent[2116]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 
01:03:04.894663 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.894663 waagent[2116]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 17 01:03:04.894663 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.894663 waagent[2116]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Apr 17 01:03:04.894663 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.894663 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 17 01:03:04.894663 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 17 01:03:04.894663 waagent[2116]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 17 01:03:04.897945 waagent[2116]: 2026-04-17T01:03:04.897912Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 17 01:03:04.897945 waagent[2116]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 01:03:04.897945 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.897945 waagent[2116]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 17 01:03:04.897945 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.897945 waagent[2116]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Apr 17 01:03:04.897945 waagent[2116]: pkts bytes target prot opt in out source destination Apr 17 01:03:04.897945 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 17 01:03:04.897945 waagent[2116]: 6 520 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 17 01:03:04.897945 waagent[2116]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 17 01:03:04.898388 waagent[2116]: 2026-04-17T01:03:04.898366Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 17 01:03:10.680767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
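Editor's note: the three OUTPUT rules waagent lists above live in the `security` table (that is the table the agent probes with `iptables -w -t security -L OUTPUT ...` earlier). Reconstructed from the counter listing, they are equivalent to roughly this iptables-restore fragment — a sketch inferred from the log, not copied from the agent's source; 168.63.129.16 is the Azure WireServer virtual IP:

```
*security
-A OUTPUT -d 168.63.129.16/32 -p tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
```

The effect is that DNS and root-owned traffic may reach the WireServer while new connections from other users are dropped, which is why the "Current Firewall rules" listing shows packet counts only on the UID-0 ACCEPT rule.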
Apr 17 01:03:10.682468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:10.786182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:10.789042 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:10.882271 kubelet[2265]: E0417 01:03:10.882198 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:10.885199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:10.885313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:03:10.885591 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107.4M memory peak. Apr 17 01:03:20.930886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 01:03:20.932640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:21.277911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
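Editor's note: the roughly 10-second restart cadence visible here (failures at 01:03:00, :10, :20, and later :31, with the restart counter incrementing each time) matches a unit configured to retry indefinitely until bootstrap succeeds. An illustrative excerpt of what such a kubelet unit typically contains (the exact unit or drop-in shipped on this image may differ):

```
# kubelet.service (excerpt, illustrative)
[Service]
Restart=always
RestartSec=10
```

systemd will keep scheduling these restart jobs until /var/lib/kubelet/config.yaml exists and the kubelet starts cleanly.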
Apr 17 01:03:21.286409 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:21.312609 kubelet[2280]: E0417 01:03:21.312577 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:21.314899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:21.315003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:03:21.315417 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.3M memory peak. Apr 17 01:03:22.841405 chronyd[1847]: Selected source PHC0 Apr 17 01:03:24.626000 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 01:03:24.627731 systemd[1]: Started sshd@0-10.0.0.24:22-20.229.252.112:34182.service - OpenSSH per-connection server daemon (20.229.252.112:34182). Apr 17 01:03:25.570194 sshd[2288]: Accepted publickey for core from 20.229.252.112 port 34182 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:25.571248 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:25.574736 systemd-logind[1875]: New session 3 of user core. Apr 17 01:03:25.584211 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 01:03:26.167260 systemd[1]: Started sshd@1-10.0.0.24:22-20.229.252.112:33600.service - OpenSSH per-connection server daemon (20.229.252.112:33600). 
Apr 17 01:03:26.932589 sshd[2294]: Accepted publickey for core from 20.229.252.112 port 33600 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:26.933552 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:26.937108 systemd-logind[1875]: New session 4 of user core. Apr 17 01:03:26.951393 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 01:03:27.377532 sshd[2297]: Connection closed by 20.229.252.112 port 33600 Apr 17 01:03:27.378163 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Apr 17 01:03:27.381222 systemd[1]: sshd@1-10.0.0.24:22-20.229.252.112:33600.service: Deactivated successfully. Apr 17 01:03:27.382508 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 01:03:27.383068 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit. Apr 17 01:03:27.384035 systemd-logind[1875]: Removed session 4. Apr 17 01:03:27.534342 systemd[1]: Started sshd@2-10.0.0.24:22-20.229.252.112:33616.service - OpenSSH per-connection server daemon (20.229.252.112:33616). Apr 17 01:03:28.302623 sshd[2303]: Accepted publickey for core from 20.229.252.112 port 33616 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:28.303638 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:28.306951 systemd-logind[1875]: New session 5 of user core. Apr 17 01:03:28.314210 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 01:03:28.744347 sshd[2306]: Connection closed by 20.229.252.112 port 33616 Apr 17 01:03:28.744827 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Apr 17 01:03:28.747720 systemd[1]: sshd@2-10.0.0.24:22-20.229.252.112:33616.service: Deactivated successfully. Apr 17 01:03:28.749051 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 01:03:28.750501 systemd-logind[1875]: Session 5 logged out. 
Waiting for processes to exit. Apr 17 01:03:28.751558 systemd-logind[1875]: Removed session 5. Apr 17 01:03:28.903107 systemd[1]: Started sshd@3-10.0.0.24:22-20.229.252.112:33618.service - OpenSSH per-connection server daemon (20.229.252.112:33618). Apr 17 01:03:29.675960 sshd[2312]: Accepted publickey for core from 20.229.252.112 port 33618 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:29.676668 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:29.679911 systemd-logind[1875]: New session 6 of user core. Apr 17 01:03:29.691361 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 01:03:30.124178 sshd[2315]: Connection closed by 20.229.252.112 port 33618 Apr 17 01:03:30.124663 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Apr 17 01:03:30.127574 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit. Apr 17 01:03:30.127886 systemd[1]: sshd@3-10.0.0.24:22-20.229.252.112:33618.service: Deactivated successfully. Apr 17 01:03:30.129318 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 01:03:30.130711 systemd-logind[1875]: Removed session 6. Apr 17 01:03:30.279438 systemd[1]: Started sshd@4-10.0.0.24:22-20.229.252.112:33620.service - OpenSSH per-connection server daemon (20.229.252.112:33620). Apr 17 01:03:31.047585 sshd[2321]: Accepted publickey for core from 20.229.252.112 port 33620 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:31.048607 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:31.051913 systemd-logind[1875]: New session 7 of user core. Apr 17 01:03:31.060256 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 01:03:31.430625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Apr 17 01:03:31.433602 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 01:03:31.433752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:31.433803 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 01:03:31.463463 sudo[2325]: pam_unix(sudo:session): session closed for user root Apr 17 01:03:31.611634 sshd[2324]: Connection closed by 20.229.252.112 port 33620 Apr 17 01:03:31.611518 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Apr 17 01:03:31.615313 systemd[1]: sshd@4-10.0.0.24:22-20.229.252.112:33620.service: Deactivated successfully. Apr 17 01:03:31.616860 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 01:03:31.617488 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit. Apr 17 01:03:31.618670 systemd-logind[1875]: Removed session 7. Apr 17 01:03:31.767683 systemd[1]: Started sshd@5-10.0.0.24:22-20.229.252.112:33626.service - OpenSSH per-connection server daemon (20.229.252.112:33626). Apr 17 01:03:31.785228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:31.787722 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:31.813464 kubelet[2341]: E0417 01:03:31.813427 2341 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:31.815253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:31.815353 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 17 01:03:31.815637 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.1M memory peak. Apr 17 01:03:32.541174 sshd[2334]: Accepted publickey for core from 20.229.252.112 port 33626 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:32.542569 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:32.546640 systemd-logind[1875]: New session 8 of user core. Apr 17 01:03:32.552212 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 01:03:32.840276 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 01:03:32.840497 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 01:03:32.843197 sudo[2351]: pam_unix(sudo:session): session closed for user root Apr 17 01:03:32.846427 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 01:03:32.846824 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 01:03:32.853876 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 01:03:32.877474 augenrules[2373]: No rules Apr 17 01:03:32.878549 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 01:03:32.878817 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 01:03:32.880430 sudo[2350]: pam_unix(sudo:session): session closed for user root Apr 17 01:03:33.028763 sshd[2349]: Connection closed by 20.229.252.112 port 33626 Apr 17 01:03:33.028020 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Apr 17 01:03:33.030798 systemd[1]: sshd@5-10.0.0.24:22-20.229.252.112:33626.service: Deactivated successfully. Apr 17 01:03:33.032259 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 01:03:33.032888 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit. 
Apr 17 01:03:33.033907 systemd-logind[1875]: Removed session 8. Apr 17 01:03:33.185193 systemd[1]: Started sshd@6-10.0.0.24:22-20.229.252.112:33638.service - OpenSSH per-connection server daemon (20.229.252.112:33638). Apr 17 01:03:33.954586 sshd[2382]: Accepted publickey for core from 20.229.252.112 port 33638 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:03:33.955262 sshd-session[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:03:33.958484 systemd-logind[1875]: New session 9 of user core. Apr 17 01:03:33.967359 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 01:03:34.252705 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 01:03:34.253248 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 01:03:35.733298 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 01:03:35.740429 (dockerd)[2404]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 01:03:36.643009 dockerd[2404]: time="2026-04-17T01:03:36.642719747Z" level=info msg="Starting up" Apr 17 01:03:36.645455 dockerd[2404]: time="2026-04-17T01:03:36.643698699Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 01:03:36.654580 dockerd[2404]: time="2026-04-17T01:03:36.654499834Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 01:03:36.725335 dockerd[2404]: time="2026-04-17T01:03:36.725312254Z" level=info msg="Loading containers: start." Apr 17 01:03:36.737115 kernel: Initializing XFRM netlink socket Apr 17 01:03:37.018353 systemd-networkd[1484]: docker0: Link UP Apr 17 01:03:37.036417 dockerd[2404]: time="2026-04-17T01:03:37.036349321Z" level=info msg="Loading containers: done." 
Apr 17 01:03:37.048434 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck45366783-merged.mount: Deactivated successfully. Apr 17 01:03:37.057391 dockerd[2404]: time="2026-04-17T01:03:37.057361383Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 01:03:37.057468 dockerd[2404]: time="2026-04-17T01:03:37.057418240Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 01:03:37.057499 dockerd[2404]: time="2026-04-17T01:03:37.057484298Z" level=info msg="Initializing buildkit" Apr 17 01:03:37.112705 dockerd[2404]: time="2026-04-17T01:03:37.112678095Z" level=info msg="Completed buildkit initialization" Apr 17 01:03:37.118434 dockerd[2404]: time="2026-04-17T01:03:37.118403695Z" level=info msg="Daemon has completed initialization" Apr 17 01:03:37.119168 dockerd[2404]: time="2026-04-17T01:03:37.118482169Z" level=info msg="API listen on /run/docker.sock" Apr 17 01:03:37.119340 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 01:03:37.435015 containerd[1896]: time="2026-04-17T01:03:37.434982396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 01:03:38.488059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921223737.mount: Deactivated successfully. 
Apr 17 01:03:40.222796 containerd[1896]: time="2026-04-17T01:03:40.222200265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:40.225068 containerd[1896]: time="2026-04-17T01:03:40.225047446Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 17 01:03:40.229399 containerd[1896]: time="2026-04-17T01:03:40.229379447Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:40.234316 containerd[1896]: time="2026-04-17T01:03:40.234280734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:40.234854 containerd[1896]: time="2026-04-17T01:03:40.234829043Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.799810831s" Apr 17 01:03:40.234945 containerd[1896]: time="2026-04-17T01:03:40.234932502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 17 01:03:40.235709 containerd[1896]: time="2026-04-17T01:03:40.235667775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 01:03:41.930635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Apr 17 01:03:41.932392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:41.940864 containerd[1896]: time="2026-04-17T01:03:41.939628137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:41.945602 containerd[1896]: time="2026-04-17T01:03:41.945566465Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 17 01:03:41.949107 containerd[1896]: time="2026-04-17T01:03:41.948797864Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:41.956319 containerd[1896]: time="2026-04-17T01:03:41.956291725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:41.959274 containerd[1896]: time="2026-04-17T01:03:41.956978678Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.721212388s" Apr 17 01:03:41.959366 containerd[1896]: time="2026-04-17T01:03:41.959351871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 17 01:03:41.961473 containerd[1896]: time="2026-04-17T01:03:41.961455002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 01:03:42.022999 
systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:42.026426 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:42.119026 kubelet[2682]: E0417 01:03:42.118976 2682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:42.121431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:42.121543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:03:42.122023 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.1M memory peak. Apr 17 01:03:42.275230 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Apr 17 01:03:44.489652 update_engine[1880]: I20260417 01:03:44.489192 1880 update_attempter.cc:509] Updating boot flags... 
Apr 17 01:03:44.830428 containerd[1896]: time="2026-04-17T01:03:44.829722605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:44.834023 containerd[1896]: time="2026-04-17T01:03:44.834000964Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 17 01:03:44.838230 containerd[1896]: time="2026-04-17T01:03:44.838209770Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:44.842943 containerd[1896]: time="2026-04-17T01:03:44.842921157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:44.843471 containerd[1896]: time="2026-04-17T01:03:44.843446769Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 2.881892188s" Apr 17 01:03:44.843556 containerd[1896]: time="2026-04-17T01:03:44.843543964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 17 01:03:44.844084 containerd[1896]: time="2026-04-17T01:03:44.844053272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 01:03:45.914761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936728781.mount: Deactivated successfully. 
Apr 17 01:03:46.233275 containerd[1896]: time="2026-04-17T01:03:46.233235940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:46.240622 containerd[1896]: time="2026-04-17T01:03:46.240600230Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 17 01:03:46.246115 containerd[1896]: time="2026-04-17T01:03:46.246077131Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:46.250930 containerd[1896]: time="2026-04-17T01:03:46.250583344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:46.250930 containerd[1896]: time="2026-04-17T01:03:46.250828846Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.406665995s" Apr 17 01:03:46.250930 containerd[1896]: time="2026-04-17T01:03:46.250853175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 17 01:03:46.251436 containerd[1896]: time="2026-04-17T01:03:46.251396892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 01:03:47.078176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964770875.mount: Deactivated successfully. 
Apr 17 01:03:48.399135 containerd[1896]: time="2026-04-17T01:03:48.398618927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:48.402561 containerd[1896]: time="2026-04-17T01:03:48.402535717Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 17 01:03:48.405997 containerd[1896]: time="2026-04-17T01:03:48.405973852Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:48.413135 containerd[1896]: time="2026-04-17T01:03:48.412831282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:48.413351 containerd[1896]: time="2026-04-17T01:03:48.413329697Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.161799274s" Apr 17 01:03:48.413422 containerd[1896]: time="2026-04-17T01:03:48.413409972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 17 01:03:48.414051 containerd[1896]: time="2026-04-17T01:03:48.414023718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 01:03:49.127933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726592133.mount: Deactivated successfully. 
Apr 17 01:03:49.153532 containerd[1896]: time="2026-04-17T01:03:49.153492952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 01:03:49.158646 containerd[1896]: time="2026-04-17T01:03:49.158619322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 17 01:03:49.162235 containerd[1896]: time="2026-04-17T01:03:49.162211870Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 01:03:49.167220 containerd[1896]: time="2026-04-17T01:03:49.167195508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 01:03:49.167627 containerd[1896]: time="2026-04-17T01:03:49.167606840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 753.445638ms" Apr 17 01:03:49.167647 containerd[1896]: time="2026-04-17T01:03:49.167632617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 17 01:03:49.168190 containerd[1896]: time="2026-04-17T01:03:49.168170737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 01:03:49.903977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131819883.mount: 
Deactivated successfully. Apr 17 01:03:51.158128 containerd[1896]: time="2026-04-17T01:03:51.158066174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:51.161729 containerd[1896]: time="2026-04-17T01:03:51.161701619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366" Apr 17 01:03:51.167807 containerd[1896]: time="2026-04-17T01:03:51.167764714Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:51.172825 containerd[1896]: time="2026-04-17T01:03:51.172551329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:03:51.173624 containerd[1896]: time="2026-04-17T01:03:51.173089658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.004894119s" Apr 17 01:03:51.173624 containerd[1896]: time="2026-04-17T01:03:51.173131619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 17 01:03:52.180776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 17 01:03:52.184251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:52.281229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 01:03:52.286379 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:03:52.388462 kubelet[2964]: E0417 01:03:52.388424 2964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:03:52.390696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:03:52.390801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:03:52.391049 systemd[1]: kubelet.service: Consumed 172ms CPU time, 105.1M memory peak. Apr 17 01:03:53.570260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:53.570374 systemd[1]: kubelet.service: Consumed 172ms CPU time, 105.1M memory peak. Apr 17 01:03:53.572521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:53.593481 systemd[1]: Reload requested from client PID 2978 ('systemctl') (unit session-9.scope)... Apr 17 01:03:53.593495 systemd[1]: Reloading... Apr 17 01:03:53.704112 zram_generator::config[3032]: No configuration found. Apr 17 01:03:53.848565 systemd[1]: Reloading finished in 254 ms. Apr 17 01:03:53.983461 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 01:03:53.983537 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 01:03:53.985133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:53.985184 systemd[1]: kubelet.service: Consumed 71ms CPU time, 94.9M memory peak. Apr 17 01:03:53.986879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 01:03:54.199031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:54.202603 (kubelet)[3091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 01:03:54.225774 kubelet[3091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 01:03:54.225774 kubelet[3091]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 01:03:54.225774 kubelet[3091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 01:03:54.226014 kubelet[3091]: I0417 01:03:54.225806 3091 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 01:03:54.994123 kubelet[3091]: I0417 01:03:54.993441 3091 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 01:03:54.994123 kubelet[3091]: I0417 01:03:54.993473 3091 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 01:03:54.994123 kubelet[3091]: I0417 01:03:54.993720 3091 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 01:03:55.012240 kubelet[3091]: E0417 01:03:55.012204 3091 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:03:55.012586 kubelet[3091]: I0417 01:03:55.012567 3091 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 01:03:55.017990 kubelet[3091]: I0417 01:03:55.017970 3091 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 01:03:55.020940 kubelet[3091]: I0417 01:03:55.020922 3091 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 01:03:55.021152 kubelet[3091]: I0417 01:03:55.021128 3091 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 01:03:55.021259 kubelet[3091]: I0417 01:03:55.021151 3091 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-25f3036c32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"Grac
ePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 01:03:55.021330 kubelet[3091]: I0417 01:03:55.021261 3091 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 01:03:55.021330 kubelet[3091]: I0417 01:03:55.021268 3091 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 01:03:55.021396 kubelet[3091]: I0417 01:03:55.021381 3091 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:03:55.023945 kubelet[3091]: I0417 01:03:55.023925 3091 kubelet.go:480] "Attempting to sync node with API server" Apr 17 01:03:55.023978 kubelet[3091]: I0417 01:03:55.023950 3091 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 01:03:55.024069 kubelet[3091]: I0417 01:03:55.024055 3091 kubelet.go:386] "Adding apiserver pod source" Apr 17 01:03:55.025448 kubelet[3091]: I0417 01:03:55.025430 3091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 01:03:55.027632 kubelet[3091]: E0417 01:03:55.027593 3091 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-25f3036c32&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 01:03:55.029083 kubelet[3091]: E0417 01:03:55.029056 3091 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 01:03:55.029163 kubelet[3091]: I0417 01:03:55.029144 3091 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 01:03:55.029503 kubelet[3091]: I0417 01:03:55.029485 3091 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 01:03:55.029548 kubelet[3091]: W0417 01:03:55.029536 3091 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 01:03:55.031521 kubelet[3091]: I0417 01:03:55.031306 3091 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 01:03:55.031521 kubelet[3091]: I0417 01:03:55.031340 3091 server.go:1289] "Started kubelet" Apr 17 01:03:55.033439 kubelet[3091]: I0417 01:03:55.033417 3091 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 01:03:55.035489 kubelet[3091]: I0417 01:03:55.035443 3091 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 01:03:55.036678 kubelet[3091]: I0417 01:03:55.035725 3091 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 01:03:55.037055 kubelet[3091]: I0417 01:03:55.037026 3091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 01:03:55.040156 kubelet[3091]: E0417 01:03:55.039190 3091 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-25f3036c32.18a6ff53025def30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-25f3036c32,UID:ci-4459.2.4-n-25f3036c32,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-25f3036c32,},FirstTimestamp:2026-04-17 01:03:55.03131832 +0000 UTC m=+0.825753563,LastTimestamp:2026-04-17 01:03:55.03131832 +0000 UTC m=+0.825753563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-25f3036c32,}" Apr 17 01:03:55.041886 kubelet[3091]: I0417 01:03:55.041869 3091 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 01:03:55.042003 kubelet[3091]: I0417 01:03:55.041902 3091 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 01:03:55.042304 kubelet[3091]: I0417 01:03:55.041920 3091 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 01:03:55.043630 kubelet[3091]: E0417 01:03:55.042015 3091 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-25f3036c32\" not found" Apr 17 01:03:55.043630 kubelet[3091]: I0417 01:03:55.042419 3091 reconciler.go:26] "Reconciler: start to sync state" Apr 17 01:03:55.043630 kubelet[3091]: I0417 01:03:55.042528 3091 server.go:317] "Adding debug handlers to kubelet server" Apr 17 01:03:55.043630 kubelet[3091]: E0417 01:03:55.043355 3091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-25f3036c32?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Apr 17 01:03:55.043630 kubelet[3091]: E0417 01:03:55.043603 3091 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:03:55.044506 kubelet[3091]: I0417 01:03:55.044489 3091 factory.go:223] Registration of the systemd container factory successfully Apr 17 01:03:55.044633 kubelet[3091]: I0417 01:03:55.044619 3091 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 01:03:55.045708 kubelet[3091]: I0417 01:03:55.045693 3091 factory.go:223] Registration of the containerd container factory successfully Apr 17 01:03:55.058609 kubelet[3091]: E0417 01:03:55.058595 3091 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 01:03:55.060830 kubelet[3091]: I0417 01:03:55.060806 3091 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 01:03:55.060830 kubelet[3091]: I0417 01:03:55.060820 3091 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 01:03:55.060830 kubelet[3091]: I0417 01:03:55.060835 3091 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:03:55.071129 kubelet[3091]: I0417 01:03:55.070796 3091 policy_none.go:49] "None policy: Start" Apr 17 01:03:55.071129 kubelet[3091]: I0417 01:03:55.070825 3091 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 01:03:55.071129 kubelet[3091]: I0417 01:03:55.070835 3091 state_mem.go:35] "Initializing new in-memory state store" Apr 17 01:03:55.071778 kubelet[3091]: I0417 01:03:55.071745 3091 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 01:03:55.072592 kubelet[3091]: I0417 01:03:55.072563 3091 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 01:03:55.072592 kubelet[3091]: I0417 01:03:55.072588 3091 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 01:03:55.072663 kubelet[3091]: I0417 01:03:55.072603 3091 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 01:03:55.072663 kubelet[3091]: I0417 01:03:55.072608 3091 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 01:03:55.072663 kubelet[3091]: E0417 01:03:55.072637 3091 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 01:03:55.075507 kubelet[3091]: E0417 01:03:55.075435 3091 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:03:55.081520 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 01:03:55.091564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 01:03:55.094187 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 01:03:55.100735 kubelet[3091]: E0417 01:03:55.100708 3091 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 01:03:55.100872 kubelet[3091]: I0417 01:03:55.100853 3091 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 01:03:55.100912 kubelet[3091]: I0417 01:03:55.100869 3091 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 01:03:55.101218 kubelet[3091]: I0417 01:03:55.101043 3091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 01:03:55.102514 kubelet[3091]: E0417 01:03:55.102492 3091 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 01:03:55.102566 kubelet[3091]: E0417 01:03:55.102524 3091 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-25f3036c32\" not found" Apr 17 01:03:55.185757 systemd[1]: Created slice kubepods-burstable-pod1f93fbe58b3ca8e55794671ed5a51106.slice - libcontainer container kubepods-burstable-pod1f93fbe58b3ca8e55794671ed5a51106.slice. Apr 17 01:03:55.190670 kubelet[3091]: E0417 01:03:55.190607 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.193553 systemd[1]: Created slice kubepods-burstable-podce4968001e4040bfedff38f35364dfe1.slice - libcontainer container kubepods-burstable-podce4968001e4040bfedff38f35364dfe1.slice. 
Apr 17 01:03:55.195272 kubelet[3091]: E0417 01:03:55.195250 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.202868 kubelet[3091]: I0417 01:03:55.202803 3091 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.203284 kubelet[3091]: E0417 01:03:55.203255 3091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.206582 systemd[1]: Created slice kubepods-burstable-podcbc6f9a11811fbe2d58da51714789ffa.slice - libcontainer container kubepods-burstable-podcbc6f9a11811fbe2d58da51714789ffa.slice. Apr 17 01:03:55.208129 kubelet[3091]: E0417 01:03:55.208085 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.243688 kubelet[3091]: I0417 01:03:55.243572 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.244911 kubelet[3091]: I0417 01:03:55.243964 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.244911 kubelet[3091]: I0417 01:03:55.243985 3091 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.244911 kubelet[3091]: E0417 01:03:55.243661 3091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-25f3036c32?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms" Apr 17 01:03:55.244911 kubelet[3091]: I0417 01:03:55.244030 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.244911 kubelet[3091]: I0417 01:03:55.244041 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.245034 kubelet[3091]: I0417 01:03:55.244050 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 
17 01:03:55.245034 kubelet[3091]: I0417 01:03:55.244059 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.245034 kubelet[3091]: I0417 01:03:55.244069 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.245034 kubelet[3091]: I0417 01:03:55.244130 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbc6f9a11811fbe2d58da51714789ffa-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" (UID: \"cbc6f9a11811fbe2d58da51714789ffa\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.404863 kubelet[3091]: I0417 01:03:55.404837 3091 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.405152 kubelet[3091]: E0417 01:03:55.405129 3091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.495693 containerd[1896]: time="2026-04-17T01:03:55.495146325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-25f3036c32,Uid:1f93fbe58b3ca8e55794671ed5a51106,Namespace:kube-system,Attempt:0,}" Apr 17 
01:03:55.496603 containerd[1896]: time="2026-04-17T01:03:55.496577229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-25f3036c32,Uid:ce4968001e4040bfedff38f35364dfe1,Namespace:kube-system,Attempt:0,}" Apr 17 01:03:55.509020 containerd[1896]: time="2026-04-17T01:03:55.508991132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-25f3036c32,Uid:cbc6f9a11811fbe2d58da51714789ffa,Namespace:kube-system,Attempt:0,}" Apr 17 01:03:55.607908 containerd[1896]: time="2026-04-17T01:03:55.607861637Z" level=info msg="connecting to shim a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574" address="unix:///run/containerd/s/67cec1aaa888485af667d7042dd1064124e92946e992449acfc29e1fcc4f28e3" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:03:55.632354 systemd[1]: Started cri-containerd-a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574.scope - libcontainer container a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574. 
Apr 17 01:03:55.634633 containerd[1896]: time="2026-04-17T01:03:55.634603041Z" level=info msg="connecting to shim 14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5" address="unix:///run/containerd/s/18d902a4c021de0050d42cfe59be139ce713e1abefaa569437341c07886b1c41" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:03:55.635251 containerd[1896]: time="2026-04-17T01:03:55.635082766Z" level=info msg="connecting to shim d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378" address="unix:///run/containerd/s/c13611a14b9fbab4f210f42f8e8423798f26e6de17caa2afbc7a7851ebf6b131" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:03:55.645549 kubelet[3091]: E0417 01:03:55.645240 3091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-25f3036c32?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms" Apr 17 01:03:55.661273 systemd[1]: Started cri-containerd-14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5.scope - libcontainer container 14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5. Apr 17 01:03:55.663940 systemd[1]: Started cri-containerd-d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378.scope - libcontainer container d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378. 
Apr 17 01:03:55.689786 containerd[1896]: time="2026-04-17T01:03:55.689751655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-25f3036c32,Uid:1f93fbe58b3ca8e55794671ed5a51106,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574\"" Apr 17 01:03:55.699901 containerd[1896]: time="2026-04-17T01:03:55.699874117Z" level=info msg="CreateContainer within sandbox \"a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 01:03:55.710996 containerd[1896]: time="2026-04-17T01:03:55.710971814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-25f3036c32,Uid:ce4968001e4040bfedff38f35364dfe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378\"" Apr 17 01:03:55.719918 containerd[1896]: time="2026-04-17T01:03:55.719881770Z" level=info msg="CreateContainer within sandbox \"d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 01:03:55.725254 containerd[1896]: time="2026-04-17T01:03:55.725213584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-25f3036c32,Uid:cbc6f9a11811fbe2d58da51714789ffa,Namespace:kube-system,Attempt:0,} returns sandbox id \"14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5\"" Apr 17 01:03:55.732948 containerd[1896]: time="2026-04-17T01:03:55.732919762Z" level=info msg="CreateContainer within sandbox \"14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 01:03:55.736332 containerd[1896]: time="2026-04-17T01:03:55.735611766Z" level=info msg="Container 7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81: CDI devices 
from CRI Config.CDIDevices: []" Apr 17 01:03:55.764670 containerd[1896]: time="2026-04-17T01:03:55.764593649Z" level=info msg="Container 87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:03:55.772441 containerd[1896]: time="2026-04-17T01:03:55.772415294Z" level=info msg="Container dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:03:55.782353 containerd[1896]: time="2026-04-17T01:03:55.782323293Z" level=info msg="CreateContainer within sandbox \"a4923a58dcf13e01afaeee45a9f683e75cba7ae5387e89d568df90cb1d202574\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81\"" Apr 17 01:03:55.783993 containerd[1896]: time="2026-04-17T01:03:55.782909174Z" level=info msg="StartContainer for \"7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81\"" Apr 17 01:03:55.783993 containerd[1896]: time="2026-04-17T01:03:55.783642027Z" level=info msg="connecting to shim 7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81" address="unix:///run/containerd/s/67cec1aaa888485af667d7042dd1064124e92946e992449acfc29e1fcc4f28e3" protocol=ttrpc version=3 Apr 17 01:03:55.799267 containerd[1896]: time="2026-04-17T01:03:55.799243067Z" level=info msg="CreateContainer within sandbox \"d05c452025c9449c8134a67221141b4e19f10d75159a8e672d1b006bab151378\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e\"" Apr 17 01:03:55.799714 containerd[1896]: time="2026-04-17T01:03:55.799695000Z" level=info msg="StartContainer for \"87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e\"" Apr 17 01:03:55.800215 systemd[1]: Started cri-containerd-7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81.scope - libcontainer container 
7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81. Apr 17 01:03:55.802926 containerd[1896]: time="2026-04-17T01:03:55.802842849Z" level=info msg="connecting to shim 87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e" address="unix:///run/containerd/s/c13611a14b9fbab4f210f42f8e8423798f26e6de17caa2afbc7a7851ebf6b131" protocol=ttrpc version=3 Apr 17 01:03:55.806383 containerd[1896]: time="2026-04-17T01:03:55.806361012Z" level=info msg="CreateContainer within sandbox \"14f27351dfc8e7ec66425402ad78c46cd6d4d193430aff1804918d66e223c0f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb\"" Apr 17 01:03:55.808057 kubelet[3091]: I0417 01:03:55.807296 3091 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.808148 containerd[1896]: time="2026-04-17T01:03:55.807974170Z" level=info msg="StartContainer for \"dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb\"" Apr 17 01:03:55.808530 kubelet[3091]: E0417 01:03:55.808501 3091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:55.813666 containerd[1896]: time="2026-04-17T01:03:55.813636202Z" level=info msg="connecting to shim dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb" address="unix:///run/containerd/s/18d902a4c021de0050d42cfe59be139ce713e1abefaa569437341c07886b1c41" protocol=ttrpc version=3 Apr 17 01:03:55.829978 systemd[1]: Started cri-containerd-87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e.scope - libcontainer container 87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e. 
Apr 17 01:03:55.832848 systemd[1]: Started cri-containerd-dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb.scope - libcontainer container dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb. Apr 17 01:03:55.869132 containerd[1896]: time="2026-04-17T01:03:55.867456314Z" level=info msg="StartContainer for \"7478d06b9f13cd58d63fd22349c8803126a2e35e6f996ce286a31f65b47a5b81\" returns successfully" Apr 17 01:03:55.893223 containerd[1896]: time="2026-04-17T01:03:55.893144304Z" level=info msg="StartContainer for \"dce8091b95324250b2b5d25e96066e3188b22ed9b2d7269f84696f04525c0bdb\" returns successfully" Apr 17 01:03:55.894231 containerd[1896]: time="2026-04-17T01:03:55.894170973Z" level=info msg="StartContainer for \"87b10f9811501153ca6e8e2b47c85b10f1b1c63e66f4b67c608531bcdc26a00e\" returns successfully" Apr 17 01:03:56.083580 kubelet[3091]: E0417 01:03:56.083497 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.086938 kubelet[3091]: E0417 01:03:56.086787 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.086938 kubelet[3091]: E0417 01:03:56.086837 3091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.610822 kubelet[3091]: I0417 01:03:56.610796 3091 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.740871 kubelet[3091]: E0417 01:03:56.740835 3091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-25f3036c32\" not found" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.794388 kubelet[3091]: 
I0417 01:03:56.794323 3091 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.842994 kubelet[3091]: I0417 01:03:56.842960 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.850048 kubelet[3091]: E0417 01:03:56.849974 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.850048 kubelet[3091]: I0417 01:03:56.850002 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.851551 kubelet[3091]: E0417 01:03:56.851532 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.851551 kubelet[3091]: I0417 01:03:56.851569 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:56.852625 kubelet[3091]: E0417 01:03:56.852591 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.028420 kubelet[3091]: I0417 01:03:57.028255 3091 apiserver.go:52] "Watching apiserver" Apr 17 01:03:57.042432 kubelet[3091]: I0417 01:03:57.042403 3091 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 01:03:57.086582 kubelet[3091]: I0417 01:03:57.086559 3091 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.087285 kubelet[3091]: I0417 01:03:57.087006 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.087443 kubelet[3091]: I0417 01:03:57.087124 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.093651 kubelet[3091]: E0417 01:03:57.093508 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.093884 kubelet[3091]: E0417 01:03:57.093860 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:57.093954 kubelet[3091]: E0417 01:03:57.093867 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:58.088603 kubelet[3091]: I0417 01:03:58.088506 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:58.090003 kubelet[3091]: I0417 01:03:58.089192 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:58.097027 kubelet[3091]: I0417 01:03:58.096962 3091 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:03:58.102284 kubelet[3091]: I0417 01:03:58.102264 3091 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:03:59.262724 systemd[1]: Reload requested from client PID 3366 ('systemctl') (unit session-9.scope)... Apr 17 01:03:59.263000 systemd[1]: Reloading... Apr 17 01:03:59.340161 zram_generator::config[3414]: No configuration found. Apr 17 01:03:59.505704 systemd[1]: Reloading finished in 242 ms. Apr 17 01:03:59.522538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:59.535250 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 01:03:59.535451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:59.535507 systemd[1]: kubelet.service: Consumed 1.018s CPU time, 126.7M memory peak. Apr 17 01:03:59.536921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:03:59.636365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:03:59.642817 (kubelet)[3477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 01:03:59.672955 kubelet[3477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 01:03:59.673844 kubelet[3477]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 01:03:59.673844 kubelet[3477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 01:03:59.673844 kubelet[3477]: I0417 01:03:59.673290 3477 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 01:03:59.679183 kubelet[3477]: I0417 01:03:59.678536 3477 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 01:03:59.679183 kubelet[3477]: I0417 01:03:59.678564 3477 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 01:03:59.679183 kubelet[3477]: I0417 01:03:59.678779 3477 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 01:03:59.680006 kubelet[3477]: I0417 01:03:59.679988 3477 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 01:03:59.681634 kubelet[3477]: I0417 01:03:59.681565 3477 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 01:03:59.684598 kubelet[3477]: I0417 01:03:59.684579 3477 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 01:03:59.687266 kubelet[3477]: I0417 01:03:59.687243 3477 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 01:03:59.687437 kubelet[3477]: I0417 01:03:59.687414 3477 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 01:03:59.687553 kubelet[3477]: I0417 01:03:59.687436 3477 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-25f3036c32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 01:03:59.687618 kubelet[3477]: I0417 01:03:59.687555 3477 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
01:03:59.687618 kubelet[3477]: I0417 01:03:59.687562 3477 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 01:03:59.687618 kubelet[3477]: I0417 01:03:59.687603 3477 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:03:59.687737 kubelet[3477]: I0417 01:03:59.687722 3477 kubelet.go:480] "Attempting to sync node with API server" Apr 17 01:03:59.687737 kubelet[3477]: I0417 01:03:59.687734 3477 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 01:03:59.687773 kubelet[3477]: I0417 01:03:59.687764 3477 kubelet.go:386] "Adding apiserver pod source" Apr 17 01:03:59.687952 kubelet[3477]: I0417 01:03:59.687936 3477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 01:03:59.692120 kubelet[3477]: I0417 01:03:59.691318 3477 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 01:03:59.692120 kubelet[3477]: I0417 01:03:59.691676 3477 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 01:03:59.696308 kubelet[3477]: I0417 01:03:59.696293 3477 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 01:03:59.696401 kubelet[3477]: I0417 01:03:59.696393 3477 server.go:1289] "Started kubelet" Apr 17 01:03:59.701508 kubelet[3477]: I0417 01:03:59.701491 3477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 01:03:59.702004 kubelet[3477]: I0417 01:03:59.701978 3477 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 01:03:59.703374 kubelet[3477]: I0417 01:03:59.703334 3477 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 01:03:59.703615 kubelet[3477]: I0417 01:03:59.703592 3477 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 
01:03:59.704841 kubelet[3477]: I0417 01:03:59.704257 3477 server.go:317] "Adding debug handlers to kubelet server" Apr 17 01:03:59.705646 kubelet[3477]: I0417 01:03:59.705619 3477 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 01:03:59.707148 kubelet[3477]: I0417 01:03:59.707118 3477 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 01:03:59.707300 kubelet[3477]: E0417 01:03:59.707275 3477 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-25f3036c32\" not found" Apr 17 01:03:59.709371 kubelet[3477]: I0417 01:03:59.709232 3477 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 01:03:59.709371 kubelet[3477]: I0417 01:03:59.709366 3477 reconciler.go:26] "Reconciler: start to sync state" Apr 17 01:03:59.711506 kubelet[3477]: I0417 01:03:59.710679 3477 factory.go:223] Registration of the systemd container factory successfully Apr 17 01:03:59.711506 kubelet[3477]: I0417 01:03:59.710744 3477 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 01:03:59.715428 kubelet[3477]: I0417 01:03:59.715248 3477 factory.go:223] Registration of the containerd container factory successfully Apr 17 01:03:59.716816 kubelet[3477]: E0417 01:03:59.716794 3477 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 01:03:59.719679 kubelet[3477]: I0417 01:03:59.719383 3477 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 01:03:59.720167 kubelet[3477]: I0417 01:03:59.720140 3477 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 01:03:59.720167 kubelet[3477]: I0417 01:03:59.720162 3477 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 01:03:59.720676 kubelet[3477]: I0417 01:03:59.720176 3477 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 01:03:59.720676 kubelet[3477]: I0417 01:03:59.720181 3477 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 01:03:59.720676 kubelet[3477]: E0417 01:03:59.720212 3477 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 01:03:59.772181 kubelet[3477]: I0417 01:03:59.772158 3477 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 01:03:59.772424 kubelet[3477]: I0417 01:03:59.772410 3477 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 01:03:59.772508 kubelet[3477]: I0417 01:03:59.772500 3477 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772685 3477 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772696 3477 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772710 3477 policy_none.go:49] "None policy: Start" Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772720 3477 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772728 3477 state_mem.go:35] "Initializing new in-memory state store" Apr 17 01:03:59.773064 kubelet[3477]: I0417 01:03:59.772791 3477 state_mem.go:75] "Updated machine memory state" Apr 17 01:03:59.779133 kubelet[3477]: E0417 01:03:59.778397 3477 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 01:03:59.779761 kubelet[3477]: I0417 
01:03:59.779691 3477 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 01:03:59.780224 kubelet[3477]: I0417 01:03:59.780121 3477 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 01:03:59.781965 kubelet[3477]: I0417 01:03:59.781854 3477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 01:03:59.785431 kubelet[3477]: E0417 01:03:59.785406 3477 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 01:03:59.821478 kubelet[3477]: I0417 01:03:59.821444 3477 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.821709 kubelet[3477]: I0417 01:03:59.821492 3477 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.821834 kubelet[3477]: I0417 01:03:59.821545 3477 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.839334 kubelet[3477]: I0417 01:03:59.839304 3477 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:03:59.839527 kubelet[3477]: E0417 01:03:59.839359 3477 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.840342 kubelet[3477]: I0417 01:03:59.840012 3477 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:03:59.840414 kubelet[3477]: I0417 01:03:59.840348 3477 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:03:59.840414 kubelet[3477]: E0417 01:03:59.840400 3477 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.883495 kubelet[3477]: I0417 01:03:59.883453 3477 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.897818 kubelet[3477]: I0417 01:03:59.897770 3477 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.898109 kubelet[3477]: I0417 01:03:59.897995 3477 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910202 kubelet[3477]: I0417 01:03:59.910076 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910631 kubelet[3477]: I0417 01:03:59.910578 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910631 kubelet[3477]: I0417 01:03:59.910600 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: 
\"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910631 kubelet[3477]: I0417 01:03:59.910614 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910830 kubelet[3477]: I0417 01:03:59.910785 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f93fbe58b3ca8e55794671ed5a51106-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" (UID: \"1f93fbe58b3ca8e55794671ed5a51106\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910830 kubelet[3477]: I0417 01:03:59.910805 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.910938 kubelet[3477]: I0417 01:03:59.910927 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.911068 kubelet[3477]: I0417 01:03:59.911002 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/ce4968001e4040bfedff38f35364dfe1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-25f3036c32\" (UID: \"ce4968001e4040bfedff38f35364dfe1\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" Apr 17 01:03:59.911068 kubelet[3477]: I0417 01:03:59.911017 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbc6f9a11811fbe2d58da51714789ffa-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" (UID: \"cbc6f9a11811fbe2d58da51714789ffa\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:04:00.689342 kubelet[3477]: I0417 01:04:00.689231 3477 apiserver.go:52] "Watching apiserver" Apr 17 01:04:00.710092 kubelet[3477]: I0417 01:04:00.710064 3477 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 01:04:00.757224 kubelet[3477]: I0417 01:04:00.757198 3477 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:04:00.757520 kubelet[3477]: I0417 01:04:00.757504 3477 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:04:00.771150 kubelet[3477]: I0417 01:04:00.771125 3477 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 01:04:00.771225 kubelet[3477]: E0417 01:04:00.771174 3477 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-25f3036c32\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" Apr 17 01:04:00.778724 kubelet[3477]: I0417 01:04:00.778704 3477 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots]" Apr 17 01:04:00.778804 kubelet[3477]: E0417 01:04:00.778740 3477 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-25f3036c32\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" Apr 17 01:04:00.779538 kubelet[3477]: I0417 01:04:00.779071 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-25f3036c32" podStartSLOduration=2.779048669 podStartE2EDuration="2.779048669s" podCreationTimestamp="2026-04-17 01:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:00.778911401 +0000 UTC m=+1.132626514" watchObservedRunningTime="2026-04-17 01:04:00.779048669 +0000 UTC m=+1.132763782" Apr 17 01:04:00.792640 kubelet[3477]: I0417 01:04:00.792600 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-25f3036c32" podStartSLOduration=1.792590388 podStartE2EDuration="1.792590388s" podCreationTimestamp="2026-04-17 01:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:00.791587047 +0000 UTC m=+1.145302168" watchObservedRunningTime="2026-04-17 01:04:00.792590388 +0000 UTC m=+1.146305509" Apr 17 01:04:05.655243 kubelet[3477]: I0417 01:04:05.655203 3477 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 01:04:05.656695 containerd[1896]: time="2026-04-17T01:04:05.656376884Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 01:04:05.657025 kubelet[3477]: I0417 01:04:05.656555 3477 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 01:04:06.530108 kubelet[3477]: I0417 01:04:06.530016 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-25f3036c32" podStartSLOduration=8.53000107 podStartE2EDuration="8.53000107s" podCreationTimestamp="2026-04-17 01:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:00.809643189 +0000 UTC m=+1.163358310" watchObservedRunningTime="2026-04-17 01:04:06.53000107 +0000 UTC m=+6.883716183" Apr 17 01:04:06.548013 systemd[1]: Created slice kubepods-besteffort-poddb0573ff_1daa_434b_b249_744836310681.slice - libcontainer container kubepods-besteffort-poddb0573ff_1daa_434b_b249_744836310681.slice. Apr 17 01:04:06.552239 kubelet[3477]: I0417 01:04:06.552192 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db0573ff-1daa-434b-b249-744836310681-kube-proxy\") pod \"kube-proxy-b9hj2\" (UID: \"db0573ff-1daa-434b-b249-744836310681\") " pod="kube-system/kube-proxy-b9hj2" Apr 17 01:04:06.552239 kubelet[3477]: I0417 01:04:06.552222 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db0573ff-1daa-434b-b249-744836310681-xtables-lock\") pod \"kube-proxy-b9hj2\" (UID: \"db0573ff-1daa-434b-b249-744836310681\") " pod="kube-system/kube-proxy-b9hj2" Apr 17 01:04:06.552439 kubelet[3477]: I0417 01:04:06.552393 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db0573ff-1daa-434b-b249-744836310681-lib-modules\") pod \"kube-proxy-b9hj2\" (UID: 
\"db0573ff-1daa-434b-b249-744836310681\") " pod="kube-system/kube-proxy-b9hj2" Apr 17 01:04:06.552439 kubelet[3477]: I0417 01:04:06.552414 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27j9\" (UniqueName: \"kubernetes.io/projected/db0573ff-1daa-434b-b249-744836310681-kube-api-access-c27j9\") pod \"kube-proxy-b9hj2\" (UID: \"db0573ff-1daa-434b-b249-744836310681\") " pod="kube-system/kube-proxy-b9hj2" Apr 17 01:04:06.663005 systemd[1]: Created slice kubepods-besteffort-pod9dbd82a8_a2f6_466e_917e_8bc8ab44868a.slice - libcontainer container kubepods-besteffort-pod9dbd82a8_a2f6_466e_917e_8bc8ab44868a.slice. Apr 17 01:04:06.753867 kubelet[3477]: I0417 01:04:06.753807 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgpk\" (UniqueName: \"kubernetes.io/projected/9dbd82a8-a2f6-466e-917e-8bc8ab44868a-kube-api-access-npgpk\") pod \"tigera-operator-6bf85f8dd-rjgtf\" (UID: \"9dbd82a8-a2f6-466e-917e-8bc8ab44868a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-rjgtf" Apr 17 01:04:06.754402 kubelet[3477]: I0417 01:04:06.753974 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9dbd82a8-a2f6-466e-917e-8bc8ab44868a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-rjgtf\" (UID: \"9dbd82a8-a2f6-466e-917e-8bc8ab44868a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-rjgtf" Apr 17 01:04:06.856383 containerd[1896]: time="2026-04-17T01:04:06.856251787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9hj2,Uid:db0573ff-1daa-434b-b249-744836310681,Namespace:kube-system,Attempt:0,}" Apr 17 01:04:06.907731 containerd[1896]: time="2026-04-17T01:04:06.907691565Z" level=info msg="connecting to shim cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93" 
address="unix:///run/containerd/s/22da77fe89b3ae841743dc3b6352f84baf9ae3d061b26635040c752c40f8b4c1" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:06.930251 systemd[1]: Started cri-containerd-cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93.scope - libcontainer container cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93. Apr 17 01:04:06.954713 containerd[1896]: time="2026-04-17T01:04:06.954652441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9hj2,Uid:db0573ff-1daa-434b-b249-744836310681,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93\"" Apr 17 01:04:06.965591 containerd[1896]: time="2026-04-17T01:04:06.965567151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-rjgtf,Uid:9dbd82a8-a2f6-466e-917e-8bc8ab44868a,Namespace:tigera-operator,Attempt:0,}" Apr 17 01:04:06.966195 containerd[1896]: time="2026-04-17T01:04:06.966171910Z" level=info msg="CreateContainer within sandbox \"cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 01:04:07.014121 containerd[1896]: time="2026-04-17T01:04:07.013471571Z" level=info msg="Container 2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:07.059370 containerd[1896]: time="2026-04-17T01:04:07.059341843Z" level=info msg="CreateContainer within sandbox \"cf6ea50d31a1b33bfc8fd095e7e614725292f3c18578d8aeddb8596f77da5a93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd\"" Apr 17 01:04:07.060503 containerd[1896]: time="2026-04-17T01:04:07.060398759Z" level=info msg="StartContainer for \"2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd\"" Apr 17 01:04:07.061248 containerd[1896]: 
time="2026-04-17T01:04:07.061067792Z" level=info msg="connecting to shim df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e" address="unix:///run/containerd/s/edfeb9895734c18c5480aff977ae48ff622a382e8b4de320e339c12efa09e9f7" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:07.062944 containerd[1896]: time="2026-04-17T01:04:07.062916625Z" level=info msg="connecting to shim 2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd" address="unix:///run/containerd/s/22da77fe89b3ae841743dc3b6352f84baf9ae3d061b26635040c752c40f8b4c1" protocol=ttrpc version=3 Apr 17 01:04:07.085210 systemd[1]: Started cri-containerd-2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd.scope - libcontainer container 2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd. Apr 17 01:04:07.088029 systemd[1]: Started cri-containerd-df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e.scope - libcontainer container df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e. Apr 17 01:04:07.124002 containerd[1896]: time="2026-04-17T01:04:07.123840266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-rjgtf,Uid:9dbd82a8-a2f6-466e-917e-8bc8ab44868a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e\"" Apr 17 01:04:07.127856 containerd[1896]: time="2026-04-17T01:04:07.127384759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 01:04:07.144175 containerd[1896]: time="2026-04-17T01:04:07.144146621Z" level=info msg="StartContainer for \"2f08c12b2ebfd604c1ef925344c2ddd310e127027aaf0f284adbc6ca291e87cd\" returns successfully" Apr 17 01:04:07.686771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573207289.mount: Deactivated successfully. 
Apr 17 01:04:07.780246 kubelet[3477]: I0417 01:04:07.780186 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9hj2" podStartSLOduration=1.780174097 podStartE2EDuration="1.780174097s" podCreationTimestamp="2026-04-17 01:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:07.779399636 +0000 UTC m=+8.133114749" watchObservedRunningTime="2026-04-17 01:04:07.780174097 +0000 UTC m=+8.133889210" Apr 17 01:04:09.998026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580607428.mount: Deactivated successfully. Apr 17 01:04:11.089064 containerd[1896]: time="2026-04-17T01:04:11.088599439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:11.093272 containerd[1896]: time="2026-04-17T01:04:11.093248535Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 17 01:04:11.100038 containerd[1896]: time="2026-04-17T01:04:11.100015688Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:11.110136 containerd[1896]: time="2026-04-17T01:04:11.110083940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:11.110627 containerd[1896]: time="2026-04-17T01:04:11.110461774Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest 
\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.983040374s" Apr 17 01:04:11.110627 containerd[1896]: time="2026-04-17T01:04:11.110568025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 17 01:04:11.119519 containerd[1896]: time="2026-04-17T01:04:11.119231358Z" level=info msg="CreateContainer within sandbox \"df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 01:04:11.146234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254273122.mount: Deactivated successfully. Apr 17 01:04:11.148650 containerd[1896]: time="2026-04-17T01:04:11.148482743Z" level=info msg="Container d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:11.163715 containerd[1896]: time="2026-04-17T01:04:11.163679742Z" level=info msg="CreateContainer within sandbox \"df2c3dca9bbbd89b4eff186ee51765ea60a2825104a1c6506ade47487bd7687e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824\"" Apr 17 01:04:11.164189 containerd[1896]: time="2026-04-17T01:04:11.164150307Z" level=info msg="StartContainer for \"d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824\"" Apr 17 01:04:11.164783 containerd[1896]: time="2026-04-17T01:04:11.164756812Z" level=info msg="connecting to shim d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824" address="unix:///run/containerd/s/edfeb9895734c18c5480aff977ae48ff622a382e8b4de320e339c12efa09e9f7" protocol=ttrpc version=3 Apr 17 01:04:11.181219 systemd[1]: Started cri-containerd-d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824.scope - libcontainer container 
d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824. Apr 17 01:04:11.211242 containerd[1896]: time="2026-04-17T01:04:11.211208947Z" level=info msg="StartContainer for \"d0d1a0c33a492b93da75cd5c6bac9d4e9f7e9cd7c892f91af133f80fe3816824\" returns successfully" Apr 17 01:04:12.713434 kubelet[3477]: I0417 01:04:12.713380 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-rjgtf" podStartSLOduration=2.72886621 podStartE2EDuration="6.713366095s" podCreationTimestamp="2026-04-17 01:04:06 +0000 UTC" firstStartedPulling="2026-04-17 01:04:07.12695854 +0000 UTC m=+7.480673653" lastFinishedPulling="2026-04-17 01:04:11.111458425 +0000 UTC m=+11.465173538" observedRunningTime="2026-04-17 01:04:11.786142604 +0000 UTC m=+12.139857717" watchObservedRunningTime="2026-04-17 01:04:12.713366095 +0000 UTC m=+13.067081208" Apr 17 01:04:16.400025 sudo[2386]: pam_unix(sudo:session): session closed for user root Apr 17 01:04:16.550595 sshd[2385]: Connection closed by 20.229.252.112 port 33638 Apr 17 01:04:16.550165 sshd-session[2382]: pam_unix(sshd:session): session closed for user core Apr 17 01:04:16.553598 systemd[1]: sshd@6-10.0.0.24:22-20.229.252.112:33638.service: Deactivated successfully. Apr 17 01:04:16.558578 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 01:04:16.558880 systemd[1]: session-9.scope: Consumed 3.537s CPU time, 219.8M memory peak. Apr 17 01:04:16.560218 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit. Apr 17 01:04:16.561859 systemd-logind[1875]: Removed session 9. Apr 17 01:04:19.942501 systemd[1]: Created slice kubepods-besteffort-pod4eaa9d62_362e_45b1_a4de_701950d5ff08.slice - libcontainer container kubepods-besteffort-pod4eaa9d62_362e_45b1_a4de_701950d5ff08.slice. 
Apr 17 01:04:20.028524 kubelet[3477]: I0417 01:04:20.028435 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4eaa9d62-362e-45b1-a4de-701950d5ff08-typha-certs\") pod \"calico-typha-7fb5f5fc4-wn8r9\" (UID: \"4eaa9d62-362e-45b1-a4de-701950d5ff08\") " pod="calico-system/calico-typha-7fb5f5fc4-wn8r9" Apr 17 01:04:20.028524 kubelet[3477]: I0417 01:04:20.028473 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eaa9d62-362e-45b1-a4de-701950d5ff08-tigera-ca-bundle\") pod \"calico-typha-7fb5f5fc4-wn8r9\" (UID: \"4eaa9d62-362e-45b1-a4de-701950d5ff08\") " pod="calico-system/calico-typha-7fb5f5fc4-wn8r9" Apr 17 01:04:20.028524 kubelet[3477]: I0417 01:04:20.028488 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785fn\" (UniqueName: \"kubernetes.io/projected/4eaa9d62-362e-45b1-a4de-701950d5ff08-kube-api-access-785fn\") pod \"calico-typha-7fb5f5fc4-wn8r9\" (UID: \"4eaa9d62-362e-45b1-a4de-701950d5ff08\") " pod="calico-system/calico-typha-7fb5f5fc4-wn8r9" Apr 17 01:04:20.075713 systemd[1]: Created slice kubepods-besteffort-podc996582e_487f_481d_a644_b415b41ea8b9.slice - libcontainer container kubepods-besteffort-podc996582e_487f_481d_a644_b415b41ea8b9.slice. 
Apr 17 01:04:20.129533 kubelet[3477]: I0417 01:04:20.129435 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-var-lib-calico\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.129783 kubelet[3477]: I0417 01:04:20.129542 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-flexvol-driver-host\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.129783 kubelet[3477]: I0417 01:04:20.129561 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-sys-fs\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.129783 kubelet[3477]: I0417 01:04:20.129574 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-var-run-calico\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.129783 kubelet[3477]: I0417 01:04:20.129662 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672gs\" (UniqueName: \"kubernetes.io/projected/c996582e-487f-481d-a644-b415b41ea8b9-kube-api-access-672gs\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.129783 kubelet[3477]: I0417 
01:04:20.129686 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-lib-modules\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129701 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-nodeproc\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129899 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-xtables-lock\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129915 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-policysync\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129932 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-bpffs\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129945 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c996582e-487f-481d-a644-b415b41ea8b9-node-certs\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130208 kubelet[3477]: I0417 01:04:20.129966 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-cni-bin-dir\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130304 kubelet[3477]: I0417 01:04:20.129975 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-cni-log-dir\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130304 kubelet[3477]: I0417 01:04:20.129984 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c996582e-487f-481d-a644-b415b41ea8b9-cni-net-dir\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.130304 kubelet[3477]: I0417 01:04:20.129995 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c996582e-487f-481d-a644-b415b41ea8b9-tigera-ca-bundle\") pod \"calico-node-qbllz\" (UID: \"c996582e-487f-481d-a644-b415b41ea8b9\") " pod="calico-system/calico-node-qbllz" Apr 17 01:04:20.190255 kubelet[3477]: E0417 01:04:20.190116 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:20.230991 kubelet[3477]: I0417 01:04:20.230204 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/468ce8aa-5222-49ba-9343-72e4027cbefb-kubelet-dir\") pod \"csi-node-driver-kvcnk\" (UID: \"468ce8aa-5222-49ba-9343-72e4027cbefb\") " pod="calico-system/csi-node-driver-kvcnk" Apr 17 01:04:20.230991 kubelet[3477]: I0417 01:04:20.230239 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/468ce8aa-5222-49ba-9343-72e4027cbefb-varrun\") pod \"csi-node-driver-kvcnk\" (UID: \"468ce8aa-5222-49ba-9343-72e4027cbefb\") " pod="calico-system/csi-node-driver-kvcnk" Apr 17 01:04:20.230991 kubelet[3477]: I0417 01:04:20.230253 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpnz\" (UniqueName: \"kubernetes.io/projected/468ce8aa-5222-49ba-9343-72e4027cbefb-kube-api-access-9lpnz\") pod \"csi-node-driver-kvcnk\" (UID: \"468ce8aa-5222-49ba-9343-72e4027cbefb\") " pod="calico-system/csi-node-driver-kvcnk" Apr 17 01:04:20.230991 kubelet[3477]: I0417 01:04:20.230265 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/468ce8aa-5222-49ba-9343-72e4027cbefb-registration-dir\") pod \"csi-node-driver-kvcnk\" (UID: \"468ce8aa-5222-49ba-9343-72e4027cbefb\") " pod="calico-system/csi-node-driver-kvcnk" Apr 17 01:04:20.230991 kubelet[3477]: I0417 01:04:20.230337 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/468ce8aa-5222-49ba-9343-72e4027cbefb-socket-dir\") pod 
\"csi-node-driver-kvcnk\" (UID: \"468ce8aa-5222-49ba-9343-72e4027cbefb\") " pod="calico-system/csi-node-driver-kvcnk" Apr 17 01:04:20.237618 kubelet[3477]: E0417 01:04:20.237600 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.237756 kubelet[3477]: W0417 01:04:20.237703 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.237756 kubelet[3477]: E0417 01:04:20.237730 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.248762 containerd[1896]: time="2026-04-17T01:04:20.248331596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb5f5fc4-wn8r9,Uid:4eaa9d62-362e-45b1-a4de-701950d5ff08,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:20.265570 kubelet[3477]: E0417 01:04:20.265552 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.265692 kubelet[3477]: W0417 01:04:20.265647 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.265692 kubelet[3477]: E0417 01:04:20.265668 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.311505 containerd[1896]: time="2026-04-17T01:04:20.311418626Z" level=info msg="connecting to shim acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd" address="unix:///run/containerd/s/9b89c6cda32924a52c64218c597109ce47d32863e05ecbcc1949f44616467e69" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:20.328216 systemd[1]: Started cri-containerd-acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd.scope - libcontainer container acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd. Apr 17 01:04:20.331525 kubelet[3477]: E0417 01:04:20.331486 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.331696 kubelet[3477]: W0417 01:04:20.331502 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.331696 kubelet[3477]: E0417 01:04:20.331636 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.331972 kubelet[3477]: E0417 01:04:20.331961 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.332082 kubelet[3477]: W0417 01:04:20.332055 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.332277 kubelet[3477]: E0417 01:04:20.332130 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.332488 kubelet[3477]: E0417 01:04:20.332476 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.332672 kubelet[3477]: W0417 01:04:20.332520 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.332672 kubelet[3477]: E0417 01:04:20.332532 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.332934 kubelet[3477]: E0417 01:04:20.332896 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.332934 kubelet[3477]: W0417 01:04:20.332913 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.332934 kubelet[3477]: E0417 01:04:20.332922 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.333301 kubelet[3477]: E0417 01:04:20.333291 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.333411 kubelet[3477]: W0417 01:04:20.333363 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.333411 kubelet[3477]: E0417 01:04:20.333376 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.333755 kubelet[3477]: E0417 01:04:20.333743 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.333852 kubelet[3477]: W0417 01:04:20.333805 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.333852 kubelet[3477]: E0417 01:04:20.333818 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.334519 kubelet[3477]: E0417 01:04:20.334508 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.334697 kubelet[3477]: W0417 01:04:20.334632 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.334697 kubelet[3477]: E0417 01:04:20.334681 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.335205 kubelet[3477]: E0417 01:04:20.335169 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.335205 kubelet[3477]: W0417 01:04:20.335181 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.335205 kubelet[3477]: E0417 01:04:20.335192 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.336216 kubelet[3477]: E0417 01:04:20.336204 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.336490 kubelet[3477]: W0417 01:04:20.336469 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.336634 kubelet[3477]: E0417 01:04:20.336557 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.337227 kubelet[3477]: E0417 01:04:20.337159 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.337227 kubelet[3477]: W0417 01:04:20.337169 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.337227 kubelet[3477]: E0417 01:04:20.337188 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.337579 kubelet[3477]: E0417 01:04:20.337553 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.337662 kubelet[3477]: W0417 01:04:20.337565 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.337662 kubelet[3477]: E0417 01:04:20.337641 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.338218 kubelet[3477]: E0417 01:04:20.338167 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.338218 kubelet[3477]: W0417 01:04:20.338180 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.338218 kubelet[3477]: E0417 01:04:20.338189 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.338600 kubelet[3477]: E0417 01:04:20.338525 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.338600 kubelet[3477]: W0417 01:04:20.338537 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.338600 kubelet[3477]: E0417 01:04:20.338547 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.338860 kubelet[3477]: E0417 01:04:20.338823 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.338860 kubelet[3477]: W0417 01:04:20.338842 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.338860 kubelet[3477]: E0417 01:04:20.338852 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.339210 kubelet[3477]: E0417 01:04:20.339200 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.339302 kubelet[3477]: W0417 01:04:20.339254 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.339302 kubelet[3477]: E0417 01:04:20.339266 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.339539 kubelet[3477]: E0417 01:04:20.339527 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.339627 kubelet[3477]: W0417 01:04:20.339604 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.339627 kubelet[3477]: E0417 01:04:20.339618 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.339930 kubelet[3477]: E0417 01:04:20.339921 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.340023 kubelet[3477]: W0417 01:04:20.339998 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.340023 kubelet[3477]: E0417 01:04:20.340013 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.340358 kubelet[3477]: E0417 01:04:20.340324 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.340358 kubelet[3477]: W0417 01:04:20.340340 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.340358 kubelet[3477]: E0417 01:04:20.340348 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.340710 kubelet[3477]: E0417 01:04:20.340684 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.340710 kubelet[3477]: W0417 01:04:20.340693 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.340710 kubelet[3477]: E0417 01:04:20.340701 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.340964 kubelet[3477]: E0417 01:04:20.340938 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.340964 kubelet[3477]: W0417 01:04:20.340946 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.340964 kubelet[3477]: E0417 01:04:20.340954 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.341288 kubelet[3477]: E0417 01:04:20.341240 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.341288 kubelet[3477]: W0417 01:04:20.341249 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.341288 kubelet[3477]: E0417 01:04:20.341257 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.341685 kubelet[3477]: E0417 01:04:20.341660 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.341805 kubelet[3477]: W0417 01:04:20.341670 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.341805 kubelet[3477]: E0417 01:04:20.341750 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.342018 kubelet[3477]: E0417 01:04:20.341991 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.342018 kubelet[3477]: W0417 01:04:20.342000 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.342018 kubelet[3477]: E0417 01:04:20.342009 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.342330 kubelet[3477]: E0417 01:04:20.342320 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.342448 kubelet[3477]: W0417 01:04:20.342393 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.342448 kubelet[3477]: E0417 01:04:20.342409 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.342935 kubelet[3477]: E0417 01:04:20.342790 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.342935 kubelet[3477]: W0417 01:04:20.342800 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.342935 kubelet[3477]: E0417 01:04:20.342810 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:20.351205 kubelet[3477]: E0417 01:04:20.351191 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:20.351352 kubelet[3477]: W0417 01:04:20.351259 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:20.351352 kubelet[3477]: E0417 01:04:20.351274 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:20.361151 containerd[1896]: time="2026-04-17T01:04:20.361091900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb5f5fc4-wn8r9,Uid:4eaa9d62-362e-45b1-a4de-701950d5ff08,Namespace:calico-system,Attempt:0,} returns sandbox id \"acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd\"" Apr 17 01:04:20.362485 containerd[1896]: time="2026-04-17T01:04:20.362460856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 01:04:20.379260 containerd[1896]: time="2026-04-17T01:04:20.379233772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qbllz,Uid:c996582e-487f-481d-a644-b415b41ea8b9,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:20.905572 containerd[1896]: time="2026-04-17T01:04:20.905309326Z" level=info msg="connecting to shim 284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78" address="unix:///run/containerd/s/f64f6a5c2b34389c832637e550f7989b3f3a73ebb35e4f776938774e5c852ad5" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:20.923230 systemd[1]: Started cri-containerd-284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78.scope - libcontainer container 284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78. 
Apr 17 01:04:20.947574 containerd[1896]: time="2026-04-17T01:04:20.947531356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qbllz,Uid:c996582e-487f-481d-a644-b415b41ea8b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\"" Apr 17 01:04:21.722437 kubelet[3477]: E0417 01:04:21.722390 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:21.819047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022331860.mount: Deactivated successfully. Apr 17 01:04:22.367343 containerd[1896]: time="2026-04-17T01:04:22.367295649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:22.370972 containerd[1896]: time="2026-04-17T01:04:22.370945313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 01:04:22.374906 containerd[1896]: time="2026-04-17T01:04:22.374879889Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:22.380987 containerd[1896]: time="2026-04-17T01:04:22.380948650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:22.381710 containerd[1896]: time="2026-04-17T01:04:22.381341356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.018844099s" Apr 17 01:04:22.381710 containerd[1896]: time="2026-04-17T01:04:22.381368325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 01:04:22.382911 containerd[1896]: time="2026-04-17T01:04:22.382476642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 01:04:22.401066 containerd[1896]: time="2026-04-17T01:04:22.401011085Z" level=info msg="CreateContainer within sandbox \"acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 01:04:22.434327 containerd[1896]: time="2026-04-17T01:04:22.434261293Z" level=info msg="Container b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:22.456144 containerd[1896]: time="2026-04-17T01:04:22.456080910Z" level=info msg="CreateContainer within sandbox \"acdd763a70304a33e5f7094acea63a0bdfe10a3d47e766a4102637c4f5f3eafd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4\"" Apr 17 01:04:22.456954 containerd[1896]: time="2026-04-17T01:04:22.456758832Z" level=info msg="StartContainer for \"b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4\"" Apr 17 01:04:22.458013 containerd[1896]: time="2026-04-17T01:04:22.457982201Z" level=info msg="connecting to shim b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4" address="unix:///run/containerd/s/9b89c6cda32924a52c64218c597109ce47d32863e05ecbcc1949f44616467e69" protocol=ttrpc version=3 Apr 17 
01:04:22.477226 systemd[1]: Started cri-containerd-b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4.scope - libcontainer container b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4. Apr 17 01:04:22.517160 containerd[1896]: time="2026-04-17T01:04:22.517126486Z" level=info msg="StartContainer for \"b0bec78c8ae558fe0ad13b02be7027c165a4f3a1cf720c9134b28995416160d4\" returns successfully" Apr 17 01:04:22.807643 kubelet[3477]: I0417 01:04:22.806924 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fb5f5fc4-wn8r9" podStartSLOduration=1.786519927 podStartE2EDuration="3.806911875s" podCreationTimestamp="2026-04-17 01:04:19 +0000 UTC" firstStartedPulling="2026-04-17 01:04:20.361966371 +0000 UTC m=+20.715681484" lastFinishedPulling="2026-04-17 01:04:22.382358319 +0000 UTC m=+22.736073432" observedRunningTime="2026-04-17 01:04:22.806668796 +0000 UTC m=+23.160383933" watchObservedRunningTime="2026-04-17 01:04:22.806911875 +0000 UTC m=+23.160626988" Apr 17 01:04:22.828062 kubelet[3477]: E0417 01:04:22.827957 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.828062 kubelet[3477]: W0417 01:04:22.827974 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.828062 kubelet[3477]: E0417 01:04:22.827989 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.828283 kubelet[3477]: E0417 01:04:22.828271 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.828419 kubelet[3477]: W0417 01:04:22.828316 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.828419 kubelet[3477]: E0417 01:04:22.828349 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.828605 kubelet[3477]: E0417 01:04:22.828594 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.828677 kubelet[3477]: W0417 01:04:22.828666 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.828726 kubelet[3477]: E0417 01:04:22.828717 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.829019 kubelet[3477]: E0417 01:04:22.828920 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.829019 kubelet[3477]: W0417 01:04:22.828930 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.829019 kubelet[3477]: E0417 01:04:22.828938 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.829181 kubelet[3477]: E0417 01:04:22.829169 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.829325 kubelet[3477]: W0417 01:04:22.829221 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.829325 kubelet[3477]: E0417 01:04:22.829233 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.829531 kubelet[3477]: E0417 01:04:22.829478 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.829531 kubelet[3477]: W0417 01:04:22.829489 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.829531 kubelet[3477]: E0417 01:04:22.829499 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.829779 kubelet[3477]: E0417 01:04:22.829728 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.829779 kubelet[3477]: W0417 01:04:22.829737 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.829779 kubelet[3477]: E0417 01:04:22.829745 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.830040 kubelet[3477]: E0417 01:04:22.829994 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.830040 kubelet[3477]: W0417 01:04:22.830006 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.830040 kubelet[3477]: E0417 01:04:22.830014 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.830312 kubelet[3477]: E0417 01:04:22.830301 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.830368 kubelet[3477]: W0417 01:04:22.830358 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.830419 kubelet[3477]: E0417 01:04:22.830410 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.830678 kubelet[3477]: E0417 01:04:22.830597 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.830678 kubelet[3477]: W0417 01:04:22.830606 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.830678 kubelet[3477]: E0417 01:04:22.830614 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.831414 kubelet[3477]: E0417 01:04:22.831085 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.831414 kubelet[3477]: W0417 01:04:22.831133 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.831414 kubelet[3477]: E0417 01:04:22.831146 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.831795 kubelet[3477]: E0417 01:04:22.831673 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.831795 kubelet[3477]: W0417 01:04:22.831686 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.831795 kubelet[3477]: E0417 01:04:22.831696 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.831940 kubelet[3477]: E0417 01:04:22.831929 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.832129 kubelet[3477]: W0417 01:04:22.831983 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.832129 kubelet[3477]: E0417 01:04:22.831997 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.832232 kubelet[3477]: E0417 01:04:22.832221 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.832372 kubelet[3477]: W0417 01:04:22.832273 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.832372 kubelet[3477]: E0417 01:04:22.832287 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.832489 kubelet[3477]: E0417 01:04:22.832478 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.832541 kubelet[3477]: W0417 01:04:22.832531 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.832590 kubelet[3477]: E0417 01:04:22.832579 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.851003 kubelet[3477]: E0417 01:04:22.850982 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851003 kubelet[3477]: W0417 01:04:22.850998 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851003 kubelet[3477]: E0417 01:04:22.851008 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.851230 kubelet[3477]: E0417 01:04:22.851198 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851230 kubelet[3477]: W0417 01:04:22.851208 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851230 kubelet[3477]: E0417 01:04:22.851216 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.851404 kubelet[3477]: E0417 01:04:22.851350 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851404 kubelet[3477]: W0417 01:04:22.851356 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851404 kubelet[3477]: E0417 01:04:22.851363 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.851561 kubelet[3477]: E0417 01:04:22.851482 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851561 kubelet[3477]: W0417 01:04:22.851489 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851561 kubelet[3477]: E0417 01:04:22.851496 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851591 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851829 kubelet[3477]: W0417 01:04:22.851596 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851601 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851676 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851829 kubelet[3477]: W0417 01:04:22.851680 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851685 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851785 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.851829 kubelet[3477]: W0417 01:04:22.851790 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.851829 kubelet[3477]: E0417 01:04:22.851795 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.852187 kubelet[3477]: E0417 01:04:22.852086 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.852187 kubelet[3477]: W0417 01:04:22.852104 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.852187 kubelet[3477]: E0417 01:04:22.852112 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.852401 kubelet[3477]: E0417 01:04:22.852240 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.852401 kubelet[3477]: W0417 01:04:22.852246 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.852401 kubelet[3477]: E0417 01:04:22.852252 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.852401 kubelet[3477]: E0417 01:04:22.852340 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.852401 kubelet[3477]: W0417 01:04:22.852345 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.852401 kubelet[3477]: E0417 01:04:22.852350 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.852819 kubelet[3477]: E0417 01:04:22.852426 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.852819 kubelet[3477]: W0417 01:04:22.852430 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.852819 kubelet[3477]: E0417 01:04:22.852434 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.852819 kubelet[3477]: E0417 01:04:22.852517 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.852819 kubelet[3477]: W0417 01:04:22.852521 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.852819 kubelet[3477]: E0417 01:04:22.852526 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.853302 kubelet[3477]: E0417 01:04:22.853142 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.853302 kubelet[3477]: W0417 01:04:22.853156 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.853302 kubelet[3477]: E0417 01:04:22.853167 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.853726 kubelet[3477]: E0417 01:04:22.853571 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.853726 kubelet[3477]: W0417 01:04:22.853665 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.853726 kubelet[3477]: E0417 01:04:22.853680 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.854259 kubelet[3477]: E0417 01:04:22.854073 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.854259 kubelet[3477]: W0417 01:04:22.854085 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.854259 kubelet[3477]: E0417 01:04:22.854176 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.854657 kubelet[3477]: E0417 01:04:22.854564 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.854657 kubelet[3477]: W0417 01:04:22.854577 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.854657 kubelet[3477]: E0417 01:04:22.854587 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:22.855234 kubelet[3477]: E0417 01:04:22.855217 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.855748 kubelet[3477]: W0417 01:04:22.855396 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.855748 kubelet[3477]: E0417 01:04:22.855414 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:22.855990 kubelet[3477]: E0417 01:04:22.855978 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:22.856061 kubelet[3477]: W0417 01:04:22.856050 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:22.856137 kubelet[3477]: E0417 01:04:22.856123 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.712112 containerd[1896]: time="2026-04-17T01:04:23.712051212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:23.720002 containerd[1896]: time="2026-04-17T01:04:23.719856379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 17 01:04:23.722090 kubelet[3477]: E0417 01:04:23.722058 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:23.726466 containerd[1896]: time="2026-04-17T01:04:23.726432945Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:23.732412 containerd[1896]: time="2026-04-17T01:04:23.732382294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:23.733184 containerd[1896]: time="2026-04-17T01:04:23.733158971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.350408618s" Apr 17 01:04:23.733213 containerd[1896]: time="2026-04-17T01:04:23.733186444Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 17 01:04:23.741660 containerd[1896]: time="2026-04-17T01:04:23.741616059Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 01:04:23.765908 containerd[1896]: time="2026-04-17T01:04:23.764335604Z" level=info msg="Container 2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:23.790308 containerd[1896]: time="2026-04-17T01:04:23.790275986Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1\"" Apr 17 01:04:23.791124 containerd[1896]: time="2026-04-17T01:04:23.790660917Z" level=info msg="StartContainer for \"2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1\"" Apr 17 01:04:23.791890 containerd[1896]: time="2026-04-17T01:04:23.791869173Z" level=info msg="connecting to shim 2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1" address="unix:///run/containerd/s/f64f6a5c2b34389c832637e550f7989b3f3a73ebb35e4f776938774e5c852ad5" protocol=ttrpc version=3 Apr 17 01:04:23.799939 kubelet[3477]: I0417 01:04:23.799916 3477 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 01:04:23.809318 systemd[1]: Started cri-containerd-2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1.scope - libcontainer container 2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1. 
Apr 17 01:04:23.836648 kubelet[3477]: E0417 01:04:23.836623 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.836648 kubelet[3477]: W0417 01:04:23.836643 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.836926 kubelet[3477]: E0417 01:04:23.836660 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.836926 kubelet[3477]: E0417 01:04:23.836777 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.836926 kubelet[3477]: W0417 01:04:23.836783 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.836926 kubelet[3477]: E0417 01:04:23.836813 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.836926 kubelet[3477]: E0417 01:04:23.836907 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.836926 kubelet[3477]: W0417 01:04:23.836912 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.836926 kubelet[3477]: E0417 01:04:23.836917 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.837034 kubelet[3477]: E0417 01:04:23.836994 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.837034 kubelet[3477]: W0417 01:04:23.836998 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.837034 kubelet[3477]: E0417 01:04:23.837002 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.837082 kubelet[3477]: E0417 01:04:23.837077 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.837082 kubelet[3477]: W0417 01:04:23.837081 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.837137 kubelet[3477]: E0417 01:04:23.837085 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.838221 kubelet[3477]: E0417 01:04:23.838205 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.838347 kubelet[3477]: W0417 01:04:23.838289 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.838347 kubelet[3477]: E0417 01:04:23.838306 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.838667 kubelet[3477]: E0417 01:04:23.838598 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.838667 kubelet[3477]: W0417 01:04:23.838616 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.838667 kubelet[3477]: E0417 01:04:23.838629 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.838906 kubelet[3477]: E0417 01:04:23.838893 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.839048 kubelet[3477]: W0417 01:04:23.838986 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.839048 kubelet[3477]: E0417 01:04:23.839002 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.839331 kubelet[3477]: E0417 01:04:23.839319 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.839467 kubelet[3477]: W0417 01:04:23.839375 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.839467 kubelet[3477]: E0417 01:04:23.839389 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.839677 kubelet[3477]: E0417 01:04:23.839648 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.839677 kubelet[3477]: W0417 01:04:23.839659 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.839839 kubelet[3477]: E0417 01:04:23.839668 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.840057 kubelet[3477]: E0417 01:04:23.840022 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.840057 kubelet[3477]: W0417 01:04:23.840032 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.840227 kubelet[3477]: E0417 01:04:23.840153 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.840426 kubelet[3477]: E0417 01:04:23.840384 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.840426 kubelet[3477]: W0417 01:04:23.840394 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.840426 kubelet[3477]: E0417 01:04:23.840404 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.840676 kubelet[3477]: E0417 01:04:23.840658 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.840779 kubelet[3477]: W0417 01:04:23.840669 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.840779 kubelet[3477]: E0417 01:04:23.840740 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.840970 kubelet[3477]: E0417 01:04:23.840959 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.841124 kubelet[3477]: W0417 01:04:23.841020 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.841124 kubelet[3477]: E0417 01:04:23.841032 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.841320 kubelet[3477]: E0417 01:04:23.841309 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.841457 kubelet[3477]: W0417 01:04:23.841368 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.841457 kubelet[3477]: E0417 01:04:23.841383 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.858710 kubelet[3477]: E0417 01:04:23.858646 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.858865 kubelet[3477]: W0417 01:04:23.858793 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.859003 kubelet[3477]: E0417 01:04:23.858924 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.860305 kubelet[3477]: E0417 01:04:23.860172 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.860305 kubelet[3477]: W0417 01:04:23.860184 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.860305 kubelet[3477]: E0417 01:04:23.860203 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.860584 kubelet[3477]: E0417 01:04:23.860447 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.860584 kubelet[3477]: W0417 01:04:23.860458 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.860584 kubelet[3477]: E0417 01:04:23.860467 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.862278 kubelet[3477]: E0417 01:04:23.862264 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.862587 kubelet[3477]: W0417 01:04:23.862317 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.862587 kubelet[3477]: E0417 01:04:23.862332 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.862926 kubelet[3477]: E0417 01:04:23.862736 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.862926 kubelet[3477]: W0417 01:04:23.862752 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.862926 kubelet[3477]: E0417 01:04:23.862762 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.863193 kubelet[3477]: E0417 01:04:23.863137 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.863193 kubelet[3477]: W0417 01:04:23.863151 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.863193 kubelet[3477]: E0417 01:04:23.863178 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.863932 kubelet[3477]: E0417 01:04:23.863816 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.863932 kubelet[3477]: W0417 01:04:23.863831 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.863932 kubelet[3477]: E0417 01:04:23.863843 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.864259 kubelet[3477]: E0417 01:04:23.864212 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.864259 kubelet[3477]: W0417 01:04:23.864227 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.864518 kubelet[3477]: E0417 01:04:23.864425 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.865056 kubelet[3477]: E0417 01:04:23.864989 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.865056 kubelet[3477]: W0417 01:04:23.865029 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.865643 kubelet[3477]: E0417 01:04:23.865040 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.866430 kubelet[3477]: E0417 01:04:23.866415 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.866876 kubelet[3477]: W0417 01:04:23.866765 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.866876 kubelet[3477]: E0417 01:04:23.866814 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.867170 kubelet[3477]: E0417 01:04:23.867158 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.867412 kubelet[3477]: W0417 01:04:23.867244 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.867528 kubelet[3477]: E0417 01:04:23.867474 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.868505 kubelet[3477]: E0417 01:04:23.868416 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.868505 kubelet[3477]: W0417 01:04:23.868429 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.868505 kubelet[3477]: E0417 01:04:23.868440 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 01:04:23.868995 kubelet[3477]: E0417 01:04:23.868929 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.868995 kubelet[3477]: W0417 01:04:23.868942 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.868995 kubelet[3477]: E0417 01:04:23.868952 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.869392 kubelet[3477]: E0417 01:04:23.869379 3477 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 01:04:23.869692 kubelet[3477]: W0417 01:04:23.869461 3477 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 01:04:23.869692 kubelet[3477]: E0417 01:04:23.869477 3477 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 01:04:23.870417 systemd[1]: cri-containerd-2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1.scope: Deactivated successfully. 
Apr 17 01:04:23.876478 containerd[1896]: time="2026-04-17T01:04:23.876372329Z" level=info msg="received container exit event container_id:\"2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1\" id:\"2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1\" pid:4109 exited_at:{seconds:1776387863 nanos:872286221}" Apr 17 01:04:23.882243 containerd[1896]: time="2026-04-17T01:04:23.882214420Z" level=info msg="StartContainer for \"2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1\" returns successfully" Apr 17 01:04:23.892451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2efa58f8637e99d7bb64f36faaf31459371b49a08c658a2e870ca55afb3cf9f1-rootfs.mount: Deactivated successfully. Apr 17 01:04:25.722090 kubelet[3477]: E0417 01:04:25.721797 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:25.810372 containerd[1896]: time="2026-04-17T01:04:25.809634667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 01:04:27.721133 kubelet[3477]: E0417 01:04:27.720809 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:29.638959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59184995.mount: Deactivated successfully. 
Apr 17 01:04:29.722124 kubelet[3477]: E0417 01:04:29.721223 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:29.945693 containerd[1896]: time="2026-04-17T01:04:29.945647243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:29.954155 containerd[1896]: time="2026-04-17T01:04:29.953992333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 17 01:04:29.959017 containerd[1896]: time="2026-04-17T01:04:29.958983656Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:29.966303 containerd[1896]: time="2026-04-17T01:04:29.966148924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:29.966707 containerd[1896]: time="2026-04-17T01:04:29.966601687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.156450159s" Apr 17 01:04:29.966707 containerd[1896]: time="2026-04-17T01:04:29.966629712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference 
\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 17 01:04:29.975479 containerd[1896]: time="2026-04-17T01:04:29.975450599Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 01:04:30.008238 containerd[1896]: time="2026-04-17T01:04:30.006234285Z" level=info msg="Container 80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:30.009218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666006333.mount: Deactivated successfully. Apr 17 01:04:30.027277 containerd[1896]: time="2026-04-17T01:04:30.027198250Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d\"" Apr 17 01:04:30.027802 containerd[1896]: time="2026-04-17T01:04:30.027769545Z" level=info msg="StartContainer for \"80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d\"" Apr 17 01:04:30.028876 containerd[1896]: time="2026-04-17T01:04:30.028847517Z" level=info msg="connecting to shim 80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d" address="unix:///run/containerd/s/f64f6a5c2b34389c832637e550f7989b3f3a73ebb35e4f776938774e5c852ad5" protocol=ttrpc version=3 Apr 17 01:04:30.044207 systemd[1]: Started cri-containerd-80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d.scope - libcontainer container 80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d. 
Apr 17 01:04:30.103309 containerd[1896]: time="2026-04-17T01:04:30.103273689Z" level=info msg="StartContainer for \"80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d\" returns successfully" Apr 17 01:04:30.128952 systemd[1]: cri-containerd-80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d.scope: Deactivated successfully. Apr 17 01:04:30.132242 containerd[1896]: time="2026-04-17T01:04:30.132204766Z" level=info msg="received container exit event container_id:\"80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d\" id:\"80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d\" pid:4202 exited_at:{seconds:1776387870 nanos:132030506}" Apr 17 01:04:30.638981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b368d5c56347b857315e41cc5af5c55a31b72c61c9f85aca5264437421ca0d-rootfs.mount: Deactivated successfully. Apr 17 01:04:31.723125 kubelet[3477]: E0417 01:04:31.722949 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:31.821663 containerd[1896]: time="2026-04-17T01:04:31.821357484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 01:04:33.723106 kubelet[3477]: E0417 01:04:33.721661 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:34.158800 containerd[1896]: time="2026-04-17T01:04:34.158224652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 17 01:04:34.162574 containerd[1896]: time="2026-04-17T01:04:34.162545813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 17 01:04:34.165980 containerd[1896]: time="2026-04-17T01:04:34.165938094Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:34.174703 containerd[1896]: time="2026-04-17T01:04:34.174674466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:34.175045 containerd[1896]: time="2026-04-17T01:04:34.175016827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.353623222s" Apr 17 01:04:34.175045 containerd[1896]: time="2026-04-17T01:04:34.175046172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 17 01:04:34.183460 containerd[1896]: time="2026-04-17T01:04:34.183433480Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 01:04:34.206118 containerd[1896]: time="2026-04-17T01:04:34.206075832Z" level=info msg="Container 18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:34.226190 containerd[1896]: time="2026-04-17T01:04:34.226138397Z" level=info 
msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768\"" Apr 17 01:04:34.228148 containerd[1896]: time="2026-04-17T01:04:34.227503265Z" level=info msg="StartContainer for \"18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768\"" Apr 17 01:04:34.228691 containerd[1896]: time="2026-04-17T01:04:34.228671008Z" level=info msg="connecting to shim 18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768" address="unix:///run/containerd/s/f64f6a5c2b34389c832637e550f7989b3f3a73ebb35e4f776938774e5c852ad5" protocol=ttrpc version=3 Apr 17 01:04:34.249534 systemd[1]: Started cri-containerd-18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768.scope - libcontainer container 18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768. Apr 17 01:04:34.305882 containerd[1896]: time="2026-04-17T01:04:34.305805339Z" level=info msg="StartContainer for \"18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768\" returns successfully" Apr 17 01:04:35.720928 kubelet[3477]: E0417 01:04:35.720879 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvcnk" podUID="468ce8aa-5222-49ba-9343-72e4027cbefb" Apr 17 01:04:36.374054 systemd[1]: cri-containerd-18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768.scope: Deactivated successfully. Apr 17 01:04:36.374711 systemd[1]: cri-containerd-18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768.scope: Consumed 342ms CPU time, 185M memory peak, 171.3M written to disk. 
Apr 17 01:04:36.376803 containerd[1896]: time="2026-04-17T01:04:36.376721351Z" level=info msg="received container exit event container_id:\"18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768\" id:\"18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768\" pid:4262 exited_at:{seconds:1776387876 nanos:376231178}" Apr 17 01:04:36.394736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18f8aa3e089be48ca9e1e89c7ab009fa9e58625ed78c57abc2f802fca80a4768-rootfs.mount: Deactivated successfully. Apr 17 01:04:36.473065 kubelet[3477]: I0417 01:04:36.473032 3477 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 01:04:36.523863 systemd[1]: Created slice kubepods-burstable-pod09a42fe5_6c41_4f94_bcf2_6a0d09909e86.slice - libcontainer container kubepods-burstable-pod09a42fe5_6c41_4f94_bcf2_6a0d09909e86.slice. Apr 17 01:04:36.537857 systemd[1]: Created slice kubepods-burstable-pod4c1b0f18_8d1b_40d1_89bf_431190b5aa06.slice - libcontainer container kubepods-burstable-pod4c1b0f18_8d1b_40d1_89bf_431190b5aa06.slice. 
Apr 17 01:04:36.546378 kubelet[3477]: I0417 01:04:36.546300 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a42fe5-6c41-4f94-bcf2-6a0d09909e86-config-volume\") pod \"coredns-674b8bbfcf-jfkv8\" (UID: \"09a42fe5-6c41-4f94-bcf2-6a0d09909e86\") " pod="kube-system/coredns-674b8bbfcf-jfkv8" Apr 17 01:04:36.546560 kubelet[3477]: I0417 01:04:36.546508 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h97lk\" (UniqueName: \"kubernetes.io/projected/09a42fe5-6c41-4f94-bcf2-6a0d09909e86-kube-api-access-h97lk\") pod \"coredns-674b8bbfcf-jfkv8\" (UID: \"09a42fe5-6c41-4f94-bcf2-6a0d09909e86\") " pod="kube-system/coredns-674b8bbfcf-jfkv8" Apr 17 01:04:36.550794 systemd[1]: Created slice kubepods-besteffort-podcfd8febc_5f0f_4b78_ab8b_3414e71a937d.slice - libcontainer container kubepods-besteffort-podcfd8febc_5f0f_4b78_ab8b_3414e71a937d.slice. Apr 17 01:04:36.565259 systemd[1]: Created slice kubepods-besteffort-pod069c8b41_ab4a_4241_8ec9_d15b87117e1a.slice - libcontainer container kubepods-besteffort-pod069c8b41_ab4a_4241_8ec9_d15b87117e1a.slice. Apr 17 01:04:36.573441 systemd[1]: Created slice kubepods-besteffort-pode05db025_e00a_4034_a01a_7c1725a2c655.slice - libcontainer container kubepods-besteffort-pode05db025_e00a_4034_a01a_7c1725a2c655.slice. Apr 17 01:04:36.581734 systemd[1]: Created slice kubepods-besteffort-pod3501e470_2bb6_401b_88aa_2d4b73e90ce2.slice - libcontainer container kubepods-besteffort-pod3501e470_2bb6_401b_88aa_2d4b73e90ce2.slice. Apr 17 01:04:36.586973 systemd[1]: Created slice kubepods-besteffort-pod443e29f3_1f34_4947_bc0a_c00737719e44.slice - libcontainer container kubepods-besteffort-pod443e29f3_1f34_4947_bc0a_c00737719e44.slice. 
Apr 17 01:04:36.648356 kubelet[3477]: I0417 01:04:36.647661 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1b0f18-8d1b-40d1-89bf-431190b5aa06-config-volume\") pod \"coredns-674b8bbfcf-fq27z\" (UID: \"4c1b0f18-8d1b-40d1-89bf-431190b5aa06\") " pod="kube-system/coredns-674b8bbfcf-fq27z" Apr 17 01:04:36.648587 kubelet[3477]: I0417 01:04:36.648405 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3501e470-2bb6-401b-88aa-2d4b73e90ce2-config\") pod \"goldmane-5b85766d88-gkl6l\" (UID: \"3501e470-2bb6-401b-88aa-2d4b73e90ce2\") " pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:36.648587 kubelet[3477]: I0417 01:04:36.648433 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-nginx-config\") pod \"whisker-7b784f6498-wzmzr\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:36.648786 kubelet[3477]: I0417 01:04:36.648674 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfd8febc-5f0f-4b78-ab8b-3414e71a937d-tigera-ca-bundle\") pod \"calico-kube-controllers-5f949c8fd5-hb8t6\" (UID: \"cfd8febc-5f0f-4b78-ab8b-3414e71a937d\") " pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" Apr 17 01:04:36.648786 kubelet[3477]: I0417 01:04:36.648699 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkzb4\" (UniqueName: \"kubernetes.io/projected/4c1b0f18-8d1b-40d1-89bf-431190b5aa06-kube-api-access-bkzb4\") pod \"coredns-674b8bbfcf-fq27z\" (UID: \"4c1b0f18-8d1b-40d1-89bf-431190b5aa06\") " 
pod="kube-system/coredns-674b8bbfcf-fq27z" Apr 17 01:04:36.648786 kubelet[3477]: I0417 01:04:36.648750 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qnjc\" (UniqueName: \"kubernetes.io/projected/3501e470-2bb6-401b-88aa-2d4b73e90ce2-kube-api-access-9qnjc\") pod \"goldmane-5b85766d88-gkl6l\" (UID: \"3501e470-2bb6-401b-88aa-2d4b73e90ce2\") " pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:36.648954 kubelet[3477]: I0417 01:04:36.648903 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-backend-key-pair\") pod \"whisker-7b784f6498-wzmzr\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:36.648954 kubelet[3477]: I0417 01:04:36.648925 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e05db025-e00a-4034-a01a-7c1725a2c655-calico-apiserver-certs\") pod \"calico-apiserver-5fbdc68667-gbgr5\" (UID: \"e05db025-e00a-4034-a01a-7c1725a2c655\") " pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" Apr 17 01:04:36.649286 kubelet[3477]: I0417 01:04:36.648942 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlz7\" (UniqueName: \"kubernetes.io/projected/069c8b41-ab4a-4241-8ec9-d15b87117e1a-kube-api-access-ghlz7\") pod \"calico-apiserver-5fbdc68667-h58vw\" (UID: \"069c8b41-ab4a-4241-8ec9-d15b87117e1a\") " pod="calico-system/calico-apiserver-5fbdc68667-h58vw" Apr 17 01:04:36.649286 kubelet[3477]: I0417 01:04:36.649263 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2hr2\" (UniqueName: 
\"kubernetes.io/projected/443e29f3-1f34-4947-bc0a-c00737719e44-kube-api-access-x2hr2\") pod \"whisker-7b784f6498-wzmzr\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:36.650604 kubelet[3477]: I0417 01:04:36.650551 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3501e470-2bb6-401b-88aa-2d4b73e90ce2-goldmane-key-pair\") pod \"goldmane-5b85766d88-gkl6l\" (UID: \"3501e470-2bb6-401b-88aa-2d4b73e90ce2\") " pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:36.650803 kubelet[3477]: I0417 01:04:36.650688 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tts2q\" (UniqueName: \"kubernetes.io/projected/cfd8febc-5f0f-4b78-ab8b-3414e71a937d-kube-api-access-tts2q\") pod \"calico-kube-controllers-5f949c8fd5-hb8t6\" (UID: \"cfd8febc-5f0f-4b78-ab8b-3414e71a937d\") " pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" Apr 17 01:04:36.650803 kubelet[3477]: I0417 01:04:36.650711 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-ca-bundle\") pod \"whisker-7b784f6498-wzmzr\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:36.650803 kubelet[3477]: I0417 01:04:36.650723 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bx4t\" (UniqueName: \"kubernetes.io/projected/e05db025-e00a-4034-a01a-7c1725a2c655-kube-api-access-8bx4t\") pod \"calico-apiserver-5fbdc68667-gbgr5\" (UID: \"e05db025-e00a-4034-a01a-7c1725a2c655\") " pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" Apr 17 01:04:36.651860 kubelet[3477]: I0417 01:04:36.651293 3477 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3501e470-2bb6-401b-88aa-2d4b73e90ce2-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-gkl6l\" (UID: \"3501e470-2bb6-401b-88aa-2d4b73e90ce2\") " pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:36.652013 kubelet[3477]: I0417 01:04:36.651994 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/069c8b41-ab4a-4241-8ec9-d15b87117e1a-calico-apiserver-certs\") pod \"calico-apiserver-5fbdc68667-h58vw\" (UID: \"069c8b41-ab4a-4241-8ec9-d15b87117e1a\") " pod="calico-system/calico-apiserver-5fbdc68667-h58vw" Apr 17 01:04:36.831748 containerd[1896]: time="2026-04-17T01:04:36.831703775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfkv8,Uid:09a42fe5-6c41-4f94-bcf2-6a0d09909e86,Namespace:kube-system,Attempt:0,}" Apr 17 01:04:36.844041 containerd[1896]: time="2026-04-17T01:04:36.843631448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fq27z,Uid:4c1b0f18-8d1b-40d1-89bf-431190b5aa06,Namespace:kube-system,Attempt:0,}" Apr 17 01:04:36.854337 containerd[1896]: time="2026-04-17T01:04:36.854242103Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 01:04:36.865057 containerd[1896]: time="2026-04-17T01:04:36.865027035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f949c8fd5-hb8t6,Uid:cfd8febc-5f0f-4b78-ab8b-3414e71a937d,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:36.870703 containerd[1896]: time="2026-04-17T01:04:36.870673831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-h58vw,Uid:069c8b41-ab4a-4241-8ec9-d15b87117e1a,Namespace:calico-system,Attempt:0,}" Apr 
17 01:04:36.881035 containerd[1896]: time="2026-04-17T01:04:36.880996518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-gbgr5,Uid:e05db025-e00a-4034-a01a-7c1725a2c655,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:36.885601 containerd[1896]: time="2026-04-17T01:04:36.885578519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gkl6l,Uid:3501e470-2bb6-401b-88aa-2d4b73e90ce2,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:36.894802 containerd[1896]: time="2026-04-17T01:04:36.894771080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b784f6498-wzmzr,Uid:443e29f3-1f34-4947-bc0a-c00737719e44,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:36.911125 containerd[1896]: time="2026-04-17T01:04:36.911021700Z" level=error msg="Failed to destroy network for sandbox \"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:36.916165 containerd[1896]: time="2026-04-17T01:04:36.915787761Z" level=error msg="Failed to destroy network for sandbox \"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:36.994893 containerd[1896]: time="2026-04-17T01:04:36.994852175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fq27z,Uid:4c1b0f18-8d1b-40d1-89bf-431190b5aa06,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:36.995302 kubelet[3477]: E0417 01:04:36.995270 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:36.995641 kubelet[3477]: E0417 01:04:36.995622 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fq27z" Apr 17 01:04:36.995779 kubelet[3477]: E0417 01:04:36.995715 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fq27z" Apr 17 01:04:36.996628 kubelet[3477]: E0417 01:04:36.996163 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fq27z_kube-system(4c1b0f18-8d1b-40d1-89bf-431190b5aa06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fq27z_kube-system(4c1b0f18-8d1b-40d1-89bf-431190b5aa06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"66f7a5c6e38a3dce94ea1e9b4abfc4ffa27d7bdb78263153a539df515ef4a7a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fq27z" podUID="4c1b0f18-8d1b-40d1-89bf-431190b5aa06" Apr 17 01:04:37.002367 containerd[1896]: time="2026-04-17T01:04:37.002169640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfkv8,Uid:09a42fe5-6c41-4f94-bcf2-6a0d09909e86,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.003026 kubelet[3477]: E0417 01:04:37.002726 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.003026 kubelet[3477]: E0417 01:04:37.002973 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jfkv8" Apr 17 01:04:37.003026 kubelet[3477]: E0417 01:04:37.002993 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jfkv8" Apr 17 01:04:37.003318 kubelet[3477]: E0417 01:04:37.003194 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jfkv8_kube-system(09a42fe5-6c41-4f94-bcf2-6a0d09909e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jfkv8_kube-system(09a42fe5-6c41-4f94-bcf2-6a0d09909e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48723e9bc5e439bf09df93e493c73d55c2a9b4757170a86d14bac4f1ea784307\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jfkv8" podUID="09a42fe5-6c41-4f94-bcf2-6a0d09909e86" Apr 17 01:04:37.016052 containerd[1896]: time="2026-04-17T01:04:37.015819302Z" level=info msg="Container 71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:37.031308 containerd[1896]: time="2026-04-17T01:04:37.031222523Z" level=error msg="Failed to destroy network for sandbox \"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.050316 containerd[1896]: time="2026-04-17T01:04:37.050283352Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5f949c8fd5-hb8t6,Uid:cfd8febc-5f0f-4b78-ab8b-3414e71a937d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.051715 kubelet[3477]: E0417 01:04:37.050566 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.051715 kubelet[3477]: E0417 01:04:37.050700 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" Apr 17 01:04:37.051715 kubelet[3477]: E0417 01:04:37.050732 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" Apr 17 01:04:37.051831 kubelet[3477]: E0417 
01:04:37.050769 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f949c8fd5-hb8t6_calico-system(cfd8febc-5f0f-4b78-ab8b-3414e71a937d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f949c8fd5-hb8t6_calico-system(cfd8febc-5f0f-4b78-ab8b-3414e71a937d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f762d2fafc4fea69c06da8e5f9cd6f7fe159a3ae50647024eff192b5f45d17d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" podUID="cfd8febc-5f0f-4b78-ab8b-3414e71a937d" Apr 17 01:04:37.056851 containerd[1896]: time="2026-04-17T01:04:37.056822892Z" level=info msg="CreateContainer within sandbox \"284bd776d8d0942344e8d66b783b3cbf0d059350298840911591e174cd938f78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50\"" Apr 17 01:04:37.058059 containerd[1896]: time="2026-04-17T01:04:37.058028132Z" level=info msg="StartContainer for \"71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50\"" Apr 17 01:04:37.060106 containerd[1896]: time="2026-04-17T01:04:37.060073170Z" level=info msg="connecting to shim 71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50" address="unix:///run/containerd/s/f64f6a5c2b34389c832637e550f7989b3f3a73ebb35e4f776938774e5c852ad5" protocol=ttrpc version=3 Apr 17 01:04:37.074674 containerd[1896]: time="2026-04-17T01:04:37.074404074Z" level=error msg="Failed to destroy network for sandbox \"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 17 01:04:37.078444 containerd[1896]: time="2026-04-17T01:04:37.078420900Z" level=error msg="Failed to destroy network for sandbox \"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.081625 containerd[1896]: time="2026-04-17T01:04:37.081597559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-h58vw,Uid:069c8b41-ab4a-4241-8ec9-d15b87117e1a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.082197 kubelet[3477]: E0417 01:04:37.082169 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.082274 kubelet[3477]: E0417 01:04:37.082206 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fbdc68667-h58vw" Apr 17 01:04:37.082274 kubelet[3477]: E0417 
01:04:37.082221 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fbdc68667-h58vw" Apr 17 01:04:37.082274 kubelet[3477]: E0417 01:04:37.082257 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fbdc68667-h58vw_calico-system(069c8b41-ab4a-4241-8ec9-d15b87117e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fbdc68667-h58vw_calico-system(069c8b41-ab4a-4241-8ec9-d15b87117e1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0000a9499b03b48c930c5cfcb73b27441b1e92b65f69015ceb89ede56e54e955\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fbdc68667-h58vw" podUID="069c8b41-ab4a-4241-8ec9-d15b87117e1a" Apr 17 01:04:37.085192 containerd[1896]: time="2026-04-17T01:04:37.085165173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b784f6498-wzmzr,Uid:443e29f3-1f34-4947-bc0a-c00737719e44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.087281 kubelet[3477]: E0417 01:04:37.086263 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.087372 kubelet[3477]: E0417 01:04:37.087296 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:37.087372 kubelet[3477]: E0417 01:04:37.087312 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b784f6498-wzmzr" Apr 17 01:04:37.087372 kubelet[3477]: E0417 01:04:37.087345 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b784f6498-wzmzr_calico-system(443e29f3-1f34-4947-bc0a-c00737719e44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b784f6498-wzmzr_calico-system(443e29f3-1f34-4947-bc0a-c00737719e44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00e1e128f83f5976fdf0bbff5bb595419e6c8096a537e53c1f5415b7fd497134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-7b784f6498-wzmzr" podUID="443e29f3-1f34-4947-bc0a-c00737719e44" Apr 17 01:04:37.090310 systemd[1]: Started cri-containerd-71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50.scope - libcontainer container 71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50. Apr 17 01:04:37.112840 containerd[1896]: time="2026-04-17T01:04:37.112806708Z" level=error msg="Failed to destroy network for sandbox \"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.112995 containerd[1896]: time="2026-04-17T01:04:37.112971520Z" level=error msg="Failed to destroy network for sandbox \"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.116891 containerd[1896]: time="2026-04-17T01:04:37.116860110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-gbgr5,Uid:e05db025-e00a-4034-a01a-7c1725a2c655,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.117214 kubelet[3477]: E0417 01:04:37.117180 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.117371 kubelet[3477]: E0417 01:04:37.117316 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" Apr 17 01:04:37.117371 kubelet[3477]: E0417 01:04:37.117345 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" Apr 17 01:04:37.117504 kubelet[3477]: E0417 01:04:37.117482 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fbdc68667-gbgr5_calico-system(e05db025-e00a-4034-a01a-7c1725a2c655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fbdc68667-gbgr5_calico-system(e05db025-e00a-4034-a01a-7c1725a2c655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed217eaf9305b5f7aece6a8f7a3d003a442e6bf83429ed96ce083446f01f2b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" podUID="e05db025-e00a-4034-a01a-7c1725a2c655" Apr 17 
01:04:37.120671 containerd[1896]: time="2026-04-17T01:04:37.120554103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gkl6l,Uid:3501e470-2bb6-401b-88aa-2d4b73e90ce2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.121150 kubelet[3477]: E0417 01:04:37.121072 3477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 01:04:37.121370 kubelet[3477]: E0417 01:04:37.121167 3477 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:37.121370 kubelet[3477]: E0417 01:04:37.121180 3477 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-5b85766d88-gkl6l" Apr 17 01:04:37.121370 kubelet[3477]: E0417 01:04:37.121219 3477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-gkl6l_calico-system(3501e470-2bb6-401b-88aa-2d4b73e90ce2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-gkl6l_calico-system(3501e470-2bb6-401b-88aa-2d4b73e90ce2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3e318e795af626c3a81678c004478a65684e2df9f6c6c3b8c9d9f47ecc04148\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gkl6l" podUID="3501e470-2bb6-401b-88aa-2d4b73e90ce2" Apr 17 01:04:37.145063 containerd[1896]: time="2026-04-17T01:04:37.145040923Z" level=info msg="StartContainer for \"71b2e7654cd547fc962e79db6766b387685d0aebaf4162a7301ebf49c27e5d50\" returns successfully" Apr 17 01:04:37.725423 systemd[1]: Created slice kubepods-besteffort-pod468ce8aa_5222_49ba_9343_72e4027cbefb.slice - libcontainer container kubepods-besteffort-pod468ce8aa_5222_49ba_9343_72e4027cbefb.slice. 
Apr 17 01:04:37.727384 containerd[1896]: time="2026-04-17T01:04:37.727349606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvcnk,Uid:468ce8aa-5222-49ba-9343-72e4027cbefb,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:37.837104 systemd-networkd[1484]: calibbae7586740: Link UP Apr 17 01:04:37.837655 systemd-networkd[1484]: calibbae7586740: Gained carrier Apr 17 01:04:37.859898 containerd[1896]: 2026-04-17 01:04:37.753 [ERROR][4546] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 01:04:37.859898 containerd[1896]: 2026-04-17 01:04:37.764 [INFO][4546] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0 csi-node-driver- calico-system 468ce8aa-5222-49ba-9343-72e4027cbefb 704 0 2026-04-17 01:04:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 csi-node-driver-kvcnk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibbae7586740 [] [] }} ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-" Apr 17 01:04:37.859898 containerd[1896]: 2026-04-17 01:04:37.764 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" 
WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.859898 containerd[1896]: 2026-04-17 01:04:37.781 [INFO][4558] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" HandleID="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Workload="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.785 [INFO][4558] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" HandleID="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Workload="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"csi-node-driver-kvcnk", "timestamp":"2026-04-17 01:04:37.781057777 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030d080)} Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.785 [INFO][4558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.785 [INFO][4558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.785 [INFO][4558] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.787 [INFO][4558] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.790 [INFO][4558] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.793 [INFO][4558] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.794 [INFO][4558] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860077 containerd[1896]: 2026-04-17 01:04:37.796 [INFO][4558] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.796 [INFO][4558] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.797 [INFO][4558] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9 Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.801 [INFO][4558] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.805 [INFO][4558] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.1/26] block=192.168.113.0/26 handle="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.806 [INFO][4558] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.1/26] handle="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.806 [INFO][4558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:37.860342 containerd[1896]: 2026-04-17 01:04:37.806 [INFO][4558] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.1/26] IPv6=[] ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" HandleID="k8s-pod-network.3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Workload="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.860882 containerd[1896]: 2026-04-17 01:04:37.808 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"468ce8aa-5222-49ba-9343-72e4027cbefb", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"csi-node-driver-kvcnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbae7586740", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:37.860926 containerd[1896]: 2026-04-17 01:04:37.808 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.1/32] ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.860926 containerd[1896]: 2026-04-17 01:04:37.808 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbae7586740 ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.860926 containerd[1896]: 2026-04-17 01:04:37.838 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.860970 
containerd[1896]: 2026-04-17 01:04:37.839 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"468ce8aa-5222-49ba-9343-72e4027cbefb", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9", Pod:"csi-node-driver-kvcnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbae7586740", MAC:"4a:92:1d:d2:06:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:37.861003 containerd[1896]: 
2026-04-17 01:04:37.854 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" Namespace="calico-system" Pod="csi-node-driver-kvcnk" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-csi--node--driver--kvcnk-eth0" Apr 17 01:04:37.882766 kubelet[3477]: I0417 01:04:37.882708 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qbllz" podStartSLOduration=4.65526954 podStartE2EDuration="17.882693657s" podCreationTimestamp="2026-04-17 01:04:20 +0000 UTC" firstStartedPulling="2026-04-17 01:04:20.948510702 +0000 UTC m=+21.302225815" lastFinishedPulling="2026-04-17 01:04:34.175934819 +0000 UTC m=+34.529649932" observedRunningTime="2026-04-17 01:04:37.881071246 +0000 UTC m=+38.234786375" watchObservedRunningTime="2026-04-17 01:04:37.882693657 +0000 UTC m=+38.236408770" Apr 17 01:04:37.946534 containerd[1896]: time="2026-04-17T01:04:37.944952981Z" level=info msg="connecting to shim 3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9" address="unix:///run/containerd/s/8f60a19316e583f642b556d88a09b88b3a796cdb1386097001b55ddaad876ca6" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:37.961731 kubelet[3477]: I0417 01:04:37.961241 3477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-backend-key-pair\") pod \"443e29f3-1f34-4947-bc0a-c00737719e44\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " Apr 17 01:04:37.961833 kubelet[3477]: I0417 01:04:37.961756 3477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-ca-bundle\") pod \"443e29f3-1f34-4947-bc0a-c00737719e44\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " Apr 17 01:04:37.961833 
kubelet[3477]: I0417 01:04:37.961804 3477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-nginx-config\") pod \"443e29f3-1f34-4947-bc0a-c00737719e44\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " Apr 17 01:04:37.961833 kubelet[3477]: I0417 01:04:37.961818 3477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/443e29f3-1f34-4947-bc0a-c00737719e44-kube-api-access-x2hr2\") pod \"443e29f3-1f34-4947-bc0a-c00737719e44\" (UID: \"443e29f3-1f34-4947-bc0a-c00737719e44\") " Apr 17 01:04:37.962588 kubelet[3477]: I0417 01:04:37.962565 3477 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "443e29f3-1f34-4947-bc0a-c00737719e44" (UID: "443e29f3-1f34-4947-bc0a-c00737719e44"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 01:04:37.962739 kubelet[3477]: I0417 01:04:37.962724 3477 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "443e29f3-1f34-4947-bc0a-c00737719e44" (UID: "443e29f3-1f34-4947-bc0a-c00737719e44"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 01:04:37.965986 systemd[1]: var-lib-kubelet-pods-443e29f3\x2d1f34\x2d4947\x2dbc0a\x2dc00737719e44-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 01:04:37.966486 kubelet[3477]: I0417 01:04:37.966437 3477 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-nginx-config\") on node \"ci-4459.2.4-n-25f3036c32\" DevicePath \"\"" Apr 17 01:04:37.966486 kubelet[3477]: I0417 01:04:37.966454 3477 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-ca-bundle\") on node \"ci-4459.2.4-n-25f3036c32\" DevicePath \"\"" Apr 17 01:04:37.968969 kubelet[3477]: I0417 01:04:37.968746 3477 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "443e29f3-1f34-4947-bc0a-c00737719e44" (UID: "443e29f3-1f34-4947-bc0a-c00737719e44"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 01:04:37.970336 kubelet[3477]: I0417 01:04:37.970317 3477 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443e29f3-1f34-4947-bc0a-c00737719e44-kube-api-access-x2hr2" (OuterVolumeSpecName: "kube-api-access-x2hr2") pod "443e29f3-1f34-4947-bc0a-c00737719e44" (UID: "443e29f3-1f34-4947-bc0a-c00737719e44"). InnerVolumeSpecName "kube-api-access-x2hr2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 01:04:37.976244 systemd[1]: Started cri-containerd-3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9.scope - libcontainer container 3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9. 
Apr 17 01:04:38.004058 containerd[1896]: time="2026-04-17T01:04:38.004030078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvcnk,Uid:468ce8aa-5222-49ba-9343-72e4027cbefb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9\"" Apr 17 01:04:38.007686 containerd[1896]: time="2026-04-17T01:04:38.007307700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 01:04:38.066941 kubelet[3477]: I0417 01:04:38.066907 3477 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/443e29f3-1f34-4947-bc0a-c00737719e44-kube-api-access-x2hr2\") on node \"ci-4459.2.4-n-25f3036c32\" DevicePath \"\"" Apr 17 01:04:38.066941 kubelet[3477]: I0417 01:04:38.066939 3477 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/443e29f3-1f34-4947-bc0a-c00737719e44-whisker-backend-key-pair\") on node \"ci-4459.2.4-n-25f3036c32\" DevicePath \"\"" Apr 17 01:04:38.395354 systemd[1]: var-lib-kubelet-pods-443e29f3\x2d1f34\x2d4947\x2dbc0a\x2dc00737719e44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx2hr2.mount: Deactivated successfully. Apr 17 01:04:38.861855 systemd[1]: Removed slice kubepods-besteffort-pod443e29f3_1f34_4947_bc0a_c00737719e44.slice - libcontainer container kubepods-besteffort-pod443e29f3_1f34_4947_bc0a_c00737719e44.slice. Apr 17 01:04:38.954595 systemd[1]: Created slice kubepods-besteffort-pod77ee4db0_8461_4b4b_9ba8_e7e97aa75805.slice - libcontainer container kubepods-besteffort-pod77ee4db0_8461_4b4b_9ba8_e7e97aa75805.slice. 
Apr 17 01:04:39.018668 systemd-networkd[1484]: vxlan.calico: Link UP Apr 17 01:04:39.018674 systemd-networkd[1484]: vxlan.calico: Gained carrier Apr 17 01:04:39.057247 systemd-networkd[1484]: calibbae7586740: Gained IPv6LL Apr 17 01:04:39.073399 kubelet[3477]: I0417 01:04:39.073216 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/77ee4db0-8461-4b4b-9ba8-e7e97aa75805-nginx-config\") pod \"whisker-6555b4779-h885h\" (UID: \"77ee4db0-8461-4b4b-9ba8-e7e97aa75805\") " pod="calico-system/whisker-6555b4779-h885h" Apr 17 01:04:39.073399 kubelet[3477]: I0417 01:04:39.073251 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77ee4db0-8461-4b4b-9ba8-e7e97aa75805-whisker-backend-key-pair\") pod \"whisker-6555b4779-h885h\" (UID: \"77ee4db0-8461-4b4b-9ba8-e7e97aa75805\") " pod="calico-system/whisker-6555b4779-h885h" Apr 17 01:04:39.073399 kubelet[3477]: I0417 01:04:39.073277 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77ee4db0-8461-4b4b-9ba8-e7e97aa75805-whisker-ca-bundle\") pod \"whisker-6555b4779-h885h\" (UID: \"77ee4db0-8461-4b4b-9ba8-e7e97aa75805\") " pod="calico-system/whisker-6555b4779-h885h" Apr 17 01:04:39.073399 kubelet[3477]: I0417 01:04:39.073291 3477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gt7\" (UniqueName: \"kubernetes.io/projected/77ee4db0-8461-4b4b-9ba8-e7e97aa75805-kube-api-access-68gt7\") pod \"whisker-6555b4779-h885h\" (UID: \"77ee4db0-8461-4b4b-9ba8-e7e97aa75805\") " pod="calico-system/whisker-6555b4779-h885h" Apr 17 01:04:39.260378 containerd[1896]: time="2026-04-17T01:04:39.259860200Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6555b4779-h885h,Uid:77ee4db0-8461-4b4b-9ba8-e7e97aa75805,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:39.383062 systemd-networkd[1484]: cali0c3faf27bfb: Link UP Apr 17 01:04:39.383764 systemd-networkd[1484]: cali0c3faf27bfb: Gained carrier Apr 17 01:04:39.404660 containerd[1896]: 2026-04-17 01:04:39.307 [INFO][4872] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0 whisker-6555b4779- calico-system 77ee4db0-8461-4b4b-9ba8-e7e97aa75805 896 0 2026-04-17 01:04:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6555b4779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 whisker-6555b4779-h885h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0c3faf27bfb [] [] }} ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-" Apr 17 01:04:39.404660 containerd[1896]: 2026-04-17 01:04:39.307 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.404660 containerd[1896]: 2026-04-17 01:04:39.332 [INFO][4888] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" HandleID="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Workload="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 
01:04:39.339 [INFO][4888] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" HandleID="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Workload="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f9890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"whisker-6555b4779-h885h", "timestamp":"2026-04-17 01:04:39.332010961 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000620160)} Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.339 [INFO][4888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.339 [INFO][4888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.339 [INFO][4888] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.342 [INFO][4888] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.347 [INFO][4888] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.352 [INFO][4888] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.353 [INFO][4888] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.404815 containerd[1896]: 2026-04-17 01:04:39.356 [INFO][4888] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.356 [INFO][4888] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.358 [INFO][4888] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5 Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.362 [INFO][4888] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.372 [INFO][4888] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.2/26] block=192.168.113.0/26 handle="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.372 [INFO][4888] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.2/26] handle="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.372 [INFO][4888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:39.405355 containerd[1896]: 2026-04-17 01:04:39.372 [INFO][4888] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.2/26] IPv6=[] ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" HandleID="k8s-pod-network.bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Workload="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.405514 containerd[1896]: 2026-04-17 01:04:39.377 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0", GenerateName:"whisker-6555b4779-", Namespace:"calico-system", SelfLink:"", UID:"77ee4db0-8461-4b4b-9ba8-e7e97aa75805", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6555b4779", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"whisker-6555b4779-h885h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0c3faf27bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:39.405514 containerd[1896]: 2026-04-17 01:04:39.377 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.2/32] ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.405571 containerd[1896]: 2026-04-17 01:04:39.377 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c3faf27bfb ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.405571 containerd[1896]: 2026-04-17 01:04:39.388 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.405599 containerd[1896]: 2026-04-17 01:04:39.388 [INFO][4872] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0", GenerateName:"whisker-6555b4779-", Namespace:"calico-system", SelfLink:"", UID:"77ee4db0-8461-4b4b-9ba8-e7e97aa75805", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6555b4779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5", Pod:"whisker-6555b4779-h885h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0c3faf27bfb", MAC:"0a:3d:f8:d2:46:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:39.405633 containerd[1896]: 2026-04-17 01:04:39.402 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" 
Namespace="calico-system" Pod="whisker-6555b4779-h885h" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-whisker--6555b4779--h885h-eth0" Apr 17 01:04:39.450535 containerd[1896]: time="2026-04-17T01:04:39.450493483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:39.464074 containerd[1896]: time="2026-04-17T01:04:39.464039303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 01:04:39.474662 containerd[1896]: time="2026-04-17T01:04:39.474628518Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:39.489613 containerd[1896]: time="2026-04-17T01:04:39.489576943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:39.490040 containerd[1896]: time="2026-04-17T01:04:39.490009426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.482666925s" Apr 17 01:04:39.490040 containerd[1896]: time="2026-04-17T01:04:39.490033907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 01:04:39.492408 containerd[1896]: time="2026-04-17T01:04:39.491788385Z" level=info msg="connecting to shim bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5" 
address="unix:///run/containerd/s/88556552f5f67220c6870144da124867449fe5566b586b76f2005ed08593c8cb" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:39.503516 containerd[1896]: time="2026-04-17T01:04:39.503484124Z" level=info msg="CreateContainer within sandbox \"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 01:04:39.521252 systemd[1]: Started cri-containerd-bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5.scope - libcontainer container bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5. Apr 17 01:04:39.540179 containerd[1896]: time="2026-04-17T01:04:39.539649027Z" level=info msg="Container efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:39.565760 containerd[1896]: time="2026-04-17T01:04:39.565675351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6555b4779-h885h,Uid:77ee4db0-8461-4b4b-9ba8-e7e97aa75805,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5\"" Apr 17 01:04:39.566805 containerd[1896]: time="2026-04-17T01:04:39.566785364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 01:04:39.584816 containerd[1896]: time="2026-04-17T01:04:39.584783477Z" level=info msg="CreateContainer within sandbox \"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3\"" Apr 17 01:04:39.585407 containerd[1896]: time="2026-04-17T01:04:39.585360941Z" level=info msg="StartContainer for \"efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3\"" Apr 17 01:04:39.586835 containerd[1896]: time="2026-04-17T01:04:39.586810571Z" level=info msg="connecting to shim 
efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3" address="unix:///run/containerd/s/8f60a19316e583f642b556d88a09b88b3a796cdb1386097001b55ddaad876ca6" protocol=ttrpc version=3 Apr 17 01:04:39.606209 systemd[1]: Started cri-containerd-efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3.scope - libcontainer container efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3. Apr 17 01:04:39.661228 containerd[1896]: time="2026-04-17T01:04:39.661203862Z" level=info msg="StartContainer for \"efb6138fa4d0bdf0fd05b578a4f6a3ab40b717b719ee711944950dfe358a65e3\" returns successfully" Apr 17 01:04:39.723463 kubelet[3477]: I0417 01:04:39.723434 3477 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443e29f3-1f34-4947-bc0a-c00737719e44" path="/var/lib/kubelet/pods/443e29f3-1f34-4947-bc0a-c00737719e44/volumes" Apr 17 01:04:40.465205 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Apr 17 01:04:40.465972 systemd-networkd[1484]: cali0c3faf27bfb: Gained IPv6LL Apr 17 01:04:40.839300 containerd[1896]: time="2026-04-17T01:04:40.838967349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:40.841876 containerd[1896]: time="2026-04-17T01:04:40.841849905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 01:04:40.847949 containerd[1896]: time="2026-04-17T01:04:40.847921681Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:40.855701 containerd[1896]: time="2026-04-17T01:04:40.855670548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:40.856128 
containerd[1896]: time="2026-04-17T01:04:40.856086567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.289148607s" Apr 17 01:04:40.856170 containerd[1896]: time="2026-04-17T01:04:40.856130256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 17 01:04:40.857589 containerd[1896]: time="2026-04-17T01:04:40.857422578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 01:04:40.865430 containerd[1896]: time="2026-04-17T01:04:40.865393036Z" level=info msg="CreateContainer within sandbox \"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 01:04:40.886797 containerd[1896]: time="2026-04-17T01:04:40.886292697Z" level=info msg="Container 391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:40.907475 containerd[1896]: time="2026-04-17T01:04:40.907452525Z" level=info msg="CreateContainer within sandbox \"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46\"" Apr 17 01:04:40.907940 containerd[1896]: time="2026-04-17T01:04:40.907922170Z" level=info msg="StartContainer for \"391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46\"" Apr 17 01:04:40.909669 containerd[1896]: time="2026-04-17T01:04:40.909646999Z" level=info msg="connecting to shim 
391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46" address="unix:///run/containerd/s/88556552f5f67220c6870144da124867449fe5566b586b76f2005ed08593c8cb" protocol=ttrpc version=3 Apr 17 01:04:40.928214 systemd[1]: Started cri-containerd-391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46.scope - libcontainer container 391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46. Apr 17 01:04:40.960171 containerd[1896]: time="2026-04-17T01:04:40.960146158Z" level=info msg="StartContainer for \"391ab4dd31dc09805cbf50ee86b48429d50f3b197f959384ae500505ae593a46\" returns successfully" Apr 17 01:04:42.132986 containerd[1896]: time="2026-04-17T01:04:42.132939314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:42.136972 containerd[1896]: time="2026-04-17T01:04:42.136942107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 17 01:04:42.140138 containerd[1896]: time="2026-04-17T01:04:42.140112534Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:42.145098 containerd[1896]: time="2026-04-17T01:04:42.145063265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:42.145564 containerd[1896]: time="2026-04-17T01:04:42.145537269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.288091482s" Apr 17 01:04:42.145604 containerd[1896]: time="2026-04-17T01:04:42.145567302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 17 01:04:42.146973 containerd[1896]: time="2026-04-17T01:04:42.146949218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 01:04:42.154462 containerd[1896]: time="2026-04-17T01:04:42.154424135Z" level=info msg="CreateContainer within sandbox \"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 01:04:42.182119 containerd[1896]: time="2026-04-17T01:04:42.181783078Z" level=info msg="Container e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:42.203170 containerd[1896]: time="2026-04-17T01:04:42.203145871Z" level=info msg="CreateContainer within sandbox \"3b712ce40cf3bdae5308547d0a644db17e48ddf22c6f09915132b8e54e562ae9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3\"" Apr 17 01:04:42.203600 containerd[1896]: time="2026-04-17T01:04:42.203558858Z" level=info msg="StartContainer for \"e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3\"" Apr 17 01:04:42.204786 containerd[1896]: time="2026-04-17T01:04:42.204761610Z" level=info msg="connecting to shim e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3" address="unix:///run/containerd/s/8f60a19316e583f642b556d88a09b88b3a796cdb1386097001b55ddaad876ca6" protocol=ttrpc version=3 Apr 17 01:04:42.222217 systemd[1]: Started 
cri-containerd-e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3.scope - libcontainer container e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3. Apr 17 01:04:42.271947 containerd[1896]: time="2026-04-17T01:04:42.271915727Z" level=info msg="StartContainer for \"e05093e188452f495c78d5e1d27bea01cb086bb1a7d9cd7d40fdae516ce483b3\" returns successfully" Apr 17 01:04:42.806258 kubelet[3477]: I0417 01:04:42.806229 3477 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 01:04:42.806717 kubelet[3477]: I0417 01:04:42.806276 3477 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 01:04:42.879479 kubelet[3477]: I0417 01:04:42.879355 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kvcnk" podStartSLOduration=18.738713142 podStartE2EDuration="22.879342326s" podCreationTimestamp="2026-04-17 01:04:20 +0000 UTC" firstStartedPulling="2026-04-17 01:04:38.005750723 +0000 UTC m=+38.359465836" lastFinishedPulling="2026-04-17 01:04:42.146379907 +0000 UTC m=+42.500095020" observedRunningTime="2026-04-17 01:04:42.87912188 +0000 UTC m=+43.232837001" watchObservedRunningTime="2026-04-17 01:04:42.879342326 +0000 UTC m=+43.233057447" Apr 17 01:04:43.547055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244803917.mount: Deactivated successfully. 
Apr 17 01:04:43.614142 containerd[1896]: time="2026-04-17T01:04:43.614086429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:43.617143 containerd[1896]: time="2026-04-17T01:04:43.617109708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 01:04:43.621414 containerd[1896]: time="2026-04-17T01:04:43.621290010Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:43.625945 containerd[1896]: time="2026-04-17T01:04:43.625919315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:43.626393 containerd[1896]: time="2026-04-17T01:04:43.626232075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.47925836s" Apr 17 01:04:43.626393 containerd[1896]: time="2026-04-17T01:04:43.626260116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 01:04:43.633824 containerd[1896]: time="2026-04-17T01:04:43.633794898Z" level=info msg="CreateContainer within sandbox \"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 01:04:43.671230 
containerd[1896]: time="2026-04-17T01:04:43.669039510Z" level=info msg="Container ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:43.694301 containerd[1896]: time="2026-04-17T01:04:43.694265803Z" level=info msg="CreateContainer within sandbox \"bc6ec8ca0f4514cddb9599db8a4257848e78a767db6b65894ee8b80324137bd5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56\"" Apr 17 01:04:43.694926 containerd[1896]: time="2026-04-17T01:04:43.694899452Z" level=info msg="StartContainer for \"ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56\"" Apr 17 01:04:43.696046 containerd[1896]: time="2026-04-17T01:04:43.696022001Z" level=info msg="connecting to shim ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56" address="unix:///run/containerd/s/88556552f5f67220c6870144da124867449fe5566b586b76f2005ed08593c8cb" protocol=ttrpc version=3 Apr 17 01:04:43.718220 systemd[1]: Started cri-containerd-ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56.scope - libcontainer container ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56. 
Apr 17 01:04:43.755909 containerd[1896]: time="2026-04-17T01:04:43.755877211Z" level=info msg="StartContainer for \"ab4e5bcb77e9b4a28d6d9c9e61cca74901fc6e62110063dedecfccbc089d7a56\" returns successfully" Apr 17 01:04:43.886553 kubelet[3477]: I0417 01:04:43.886067 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6555b4779-h885h" podStartSLOduration=1.8254841069999999 podStartE2EDuration="5.88605304s" podCreationTimestamp="2026-04-17 01:04:38 +0000 UTC" firstStartedPulling="2026-04-17 01:04:39.566595887 +0000 UTC m=+39.920311000" lastFinishedPulling="2026-04-17 01:04:43.62716482 +0000 UTC m=+43.980879933" observedRunningTime="2026-04-17 01:04:43.885324645 +0000 UTC m=+44.239039758" watchObservedRunningTime="2026-04-17 01:04:43.88605304 +0000 UTC m=+44.239768153" Apr 17 01:04:47.722220 containerd[1896]: time="2026-04-17T01:04:47.721795863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-h58vw,Uid:069c8b41-ab4a-4241-8ec9-d15b87117e1a,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:47.722220 containerd[1896]: time="2026-04-17T01:04:47.721945139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-gbgr5,Uid:e05db025-e00a-4034-a01a-7c1725a2c655,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:47.722911 containerd[1896]: time="2026-04-17T01:04:47.722793217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fq27z,Uid:4c1b0f18-8d1b-40d1-89bf-431190b5aa06,Namespace:kube-system,Attempt:0,}" Apr 17 01:04:47.869955 systemd-networkd[1484]: cali4a56969ebe3: Link UP Apr 17 01:04:47.871533 systemd-networkd[1484]: cali4a56969ebe3: Gained carrier Apr 17 01:04:47.886084 containerd[1896]: 2026-04-17 01:04:47.780 [INFO][5143] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0 calico-apiserver-5fbdc68667- 
calico-system 069c8b41-ab4a-4241-8ec9-d15b87117e1a 833 0 2026-04-17 01:04:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fbdc68667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 calico-apiserver-5fbdc68667-h58vw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4a56969ebe3 [] [] }} ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-" Apr 17 01:04:47.886084 containerd[1896]: 2026-04-17 01:04:47.780 [INFO][5143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886084 containerd[1896]: 2026-04-17 01:04:47.815 [INFO][5180] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" HandleID="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.824 [INFO][5180] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" HandleID="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"calico-apiserver-5fbdc68667-h58vw", "timestamp":"2026-04-17 01:04:47.81528193 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000465080)} Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.824 [INFO][5180] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.824 [INFO][5180] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.825 [INFO][5180] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.830 [INFO][5180] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.835 [INFO][5180] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.842 [INFO][5180] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.846 [INFO][5180] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886225 containerd[1896]: 2026-04-17 01:04:47.848 [INFO][5180] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.848 [INFO][5180] ipam/ipam.go 1245: Attempting to assign 1 addresses 
from block block=192.168.113.0/26 handle="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.850 [INFO][5180] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79 Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.854 [INFO][5180] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5180] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.113.3/26] block=192.168.113.0/26 handle="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5180] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.3/26] handle="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5180] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 01:04:47.886370 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5180] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.3/26] IPv6=[] ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" HandleID="k8s-pod-network.3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886465 containerd[1896]: 2026-04-17 01:04:47.865 [INFO][5143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0", GenerateName:"calico-apiserver-5fbdc68667-", Namespace:"calico-system", SelfLink:"", UID:"069c8b41-ab4a-4241-8ec9-d15b87117e1a", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbdc68667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"calico-apiserver-5fbdc68667-h58vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a56969ebe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:47.886498 containerd[1896]: 2026-04-17 01:04:47.865 [INFO][5143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.3/32] ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886498 containerd[1896]: 2026-04-17 01:04:47.865 [INFO][5143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a56969ebe3 ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886498 containerd[1896]: 2026-04-17 01:04:47.871 [INFO][5143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.886542 containerd[1896]: 2026-04-17 01:04:47.871 [INFO][5143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0", GenerateName:"calico-apiserver-5fbdc68667-", Namespace:"calico-system", SelfLink:"", UID:"069c8b41-ab4a-4241-8ec9-d15b87117e1a", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbdc68667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79", Pod:"calico-apiserver-5fbdc68667-h58vw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a56969ebe3", MAC:"06:7b:8e:8f:fb:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:47.886575 containerd[1896]: 2026-04-17 01:04:47.884 [INFO][5143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-h58vw" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--h58vw-eth0" Apr 17 01:04:47.946606 containerd[1896]: time="2026-04-17T01:04:47.946576445Z" level=info msg="connecting to 
shim 3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79" address="unix:///run/containerd/s/4ff4ea6cd1d5ee93d7dee4a8af9600f3986fd1e49243b9467e3c64bffff186aa" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:47.967218 systemd[1]: Started cri-containerd-3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79.scope - libcontainer container 3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79. Apr 17 01:04:47.978947 systemd-networkd[1484]: calida0aa3032a8: Link UP Apr 17 01:04:47.979093 systemd-networkd[1484]: calida0aa3032a8: Gained carrier Apr 17 01:04:47.995651 containerd[1896]: 2026-04-17 01:04:47.806 [INFO][5153] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0 calico-apiserver-5fbdc68667- calico-system e05db025-e00a-4034-a01a-7c1725a2c655 836 0 2026-04-17 01:04:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fbdc68667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 calico-apiserver-5fbdc68667-gbgr5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calida0aa3032a8 [] [] }} ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-" Apr 17 01:04:47.995651 containerd[1896]: 2026-04-17 01:04:47.807 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 
17 01:04:47.995651 containerd[1896]: 2026-04-17 01:04:47.839 [INFO][5192] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" HandleID="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.847 [INFO][5192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" HandleID="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"calico-apiserver-5fbdc68667-gbgr5", "timestamp":"2026-04-17 01:04:47.839973554 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400036cf20)} Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.848 [INFO][5192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.863 [INFO][5192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.931 [INFO][5192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.941 [INFO][5192] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.946 [INFO][5192] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.949 [INFO][5192] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.995978 containerd[1896]: 2026-04-17 01:04:47.954 [INFO][5192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.954 [INFO][5192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.956 [INFO][5192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.961 [INFO][5192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.971 [INFO][5192] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.4/26] block=192.168.113.0/26 handle="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.971 [INFO][5192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.4/26] handle="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.971 [INFO][5192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:47.996529 containerd[1896]: 2026-04-17 01:04:47.971 [INFO][5192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.4/26] IPv6=[] ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" HandleID="k8s-pod-network.167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:47.996643 containerd[1896]: 2026-04-17 01:04:47.974 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0", GenerateName:"calico-apiserver-5fbdc68667-", Namespace:"calico-system", SelfLink:"", UID:"e05db025-e00a-4034-a01a-7c1725a2c655", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5fbdc68667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"calico-apiserver-5fbdc68667-gbgr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0aa3032a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:47.996681 containerd[1896]: 2026-04-17 01:04:47.975 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.4/32] ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:47.996681 containerd[1896]: 2026-04-17 01:04:47.975 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida0aa3032a8 ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:47.996681 containerd[1896]: 2026-04-17 01:04:47.978 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" 
WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:47.996724 containerd[1896]: 2026-04-17 01:04:47.979 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0", GenerateName:"calico-apiserver-5fbdc68667-", Namespace:"calico-system", SelfLink:"", UID:"e05db025-e00a-4034-a01a-7c1725a2c655", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbdc68667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba", Pod:"calico-apiserver-5fbdc68667-gbgr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0aa3032a8", MAC:"c2:2e:e8:6e:e0:90", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:47.996755 containerd[1896]: 2026-04-17 01:04:47.992 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" Namespace="calico-system" Pod="calico-apiserver-5fbdc68667-gbgr5" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--apiserver--5fbdc68667--gbgr5-eth0" Apr 17 01:04:48.033181 containerd[1896]: time="2026-04-17T01:04:48.033089010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-h58vw,Uid:069c8b41-ab4a-4241-8ec9-d15b87117e1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79\"" Apr 17 01:04:48.035004 containerd[1896]: time="2026-04-17T01:04:48.034650515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 01:04:48.058660 containerd[1896]: time="2026-04-17T01:04:48.058608647Z" level=info msg="connecting to shim 167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba" address="unix:///run/containerd/s/de2afcbf83d01329f98087ee9f30319060ca7b3bdce8e3d5211bffa16d48912e" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:48.078764 systemd-networkd[1484]: cali04e9663c5f0: Link UP Apr 17 01:04:48.080527 systemd-networkd[1484]: cali04e9663c5f0: Gained carrier Apr 17 01:04:48.083261 systemd[1]: Started cri-containerd-167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba.scope - libcontainer container 167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba. 
Apr 17 01:04:48.092830 containerd[1896]: 2026-04-17 01:04:47.801 [INFO][5164] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0 coredns-674b8bbfcf- kube-system 4c1b0f18-8d1b-40d1-89bf-431190b5aa06 832 0 2026-04-17 01:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 coredns-674b8bbfcf-fq27z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04e9663c5f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-" Apr 17 01:04:48.092830 containerd[1896]: 2026-04-17 01:04:47.802 [INFO][5164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.092830 containerd[1896]: 2026-04-17 01:04:47.841 [INFO][5187] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" HandleID="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:47.850 [INFO][5187] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" HandleID="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" 
Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273900), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"coredns-674b8bbfcf-fq27z", "timestamp":"2026-04-17 01:04:47.841245315 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038ef20)} Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:47.850 [INFO][5187] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:47.972 [INFO][5187] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:47.972 [INFO][5187] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:48.032 [INFO][5187] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:48.045 [INFO][5187] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:48.049 [INFO][5187] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:48.050 [INFO][5187] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.092955 containerd[1896]: 2026-04-17 01:04:48.053 [INFO][5187] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 
host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.053 [INFO][5187] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.055 [INFO][5187] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.060 [INFO][5187] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.070 [INFO][5187] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.113.5/26] block=192.168.113.0/26 handle="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.070 [INFO][5187] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.5/26] handle="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.070 [INFO][5187] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 01:04:48.093089 containerd[1896]: 2026-04-17 01:04:48.070 [INFO][5187] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.5/26] IPv6=[] ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" HandleID="k8s-pod-network.a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.093717 containerd[1896]: 2026-04-17 01:04:48.074 [INFO][5164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c1b0f18-8d1b-40d1-89bf-431190b5aa06", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"coredns-674b8bbfcf-fq27z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali04e9663c5f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.093717 containerd[1896]: 2026-04-17 01:04:48.074 [INFO][5164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.5/32] ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.093717 containerd[1896]: 2026-04-17 01:04:48.074 [INFO][5164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04e9663c5f0 ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.093717 containerd[1896]: 2026-04-17 01:04:48.076 [INFO][5164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.093717 containerd[1896]: 2026-04-17 01:04:48.076 [INFO][5164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" 
WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c1b0f18-8d1b-40d1-89bf-431190b5aa06", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc", Pod:"coredns-674b8bbfcf-fq27z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e9663c5f0", MAC:"12:d4:46:a7:c6:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.093717 containerd[1896]: 
2026-04-17 01:04:48.089 [INFO][5164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fq27z" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--fq27z-eth0" Apr 17 01:04:48.137032 containerd[1896]: time="2026-04-17T01:04:48.136944029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbdc68667-gbgr5,Uid:e05db025-e00a-4034-a01a-7c1725a2c655,Namespace:calico-system,Attempt:0,} returns sandbox id \"167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba\"" Apr 17 01:04:48.148742 containerd[1896]: time="2026-04-17T01:04:48.148715010Z" level=info msg="connecting to shim a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc" address="unix:///run/containerd/s/25c386f0a1566ddbaea4962ab11529ab1a8a3ec101d25661a6cbff60e1ea2b69" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:48.164321 systemd[1]: Started cri-containerd-a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc.scope - libcontainer container a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc. 
Apr 17 01:04:48.199240 containerd[1896]: time="2026-04-17T01:04:48.199206910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fq27z,Uid:4c1b0f18-8d1b-40d1-89bf-431190b5aa06,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc\"" Apr 17 01:04:48.207121 containerd[1896]: time="2026-04-17T01:04:48.206962745Z" level=info msg="CreateContainer within sandbox \"a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 01:04:48.229823 containerd[1896]: time="2026-04-17T01:04:48.229756151Z" level=info msg="Container db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:48.245901 containerd[1896]: time="2026-04-17T01:04:48.245836541Z" level=info msg="CreateContainer within sandbox \"a6e7d876285cba314d417e35f6bc4c3f9f8df6fb1bc62937a8d3b24d14e44afc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c\"" Apr 17 01:04:48.246721 containerd[1896]: time="2026-04-17T01:04:48.246586216Z" level=info msg="StartContainer for \"db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c\"" Apr 17 01:04:48.247557 containerd[1896]: time="2026-04-17T01:04:48.247520097Z" level=info msg="connecting to shim db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c" address="unix:///run/containerd/s/25c386f0a1566ddbaea4962ab11529ab1a8a3ec101d25661a6cbff60e1ea2b69" protocol=ttrpc version=3 Apr 17 01:04:48.263205 systemd[1]: Started cri-containerd-db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c.scope - libcontainer container db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c. 
Apr 17 01:04:48.288521 containerd[1896]: time="2026-04-17T01:04:48.288481027Z" level=info msg="StartContainer for \"db5d513b063e5f558e2cff6f8ad3abb7d84cf779338c0a534f1cac64f5b3b27c\" returns successfully" Apr 17 01:04:48.721503 containerd[1896]: time="2026-04-17T01:04:48.721441836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f949c8fd5-hb8t6,Uid:cfd8febc-5f0f-4b78-ab8b-3414e71a937d,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:48.721999 containerd[1896]: time="2026-04-17T01:04:48.721970642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gkl6l,Uid:3501e470-2bb6-401b-88aa-2d4b73e90ce2,Namespace:calico-system,Attempt:0,}" Apr 17 01:04:48.839667 systemd-networkd[1484]: cali21c1b50465e: Link UP Apr 17 01:04:48.839770 systemd-networkd[1484]: cali21c1b50465e: Gained carrier Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.775 [INFO][5433] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0 calico-kube-controllers-5f949c8fd5- calico-system cfd8febc-5f0f-4b78-ab8b-3414e71a937d 830 0 2026-04-17 01:04:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f949c8fd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 calico-kube-controllers-5f949c8fd5-hb8t6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21c1b50465e [] [] }} ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 
01:04:48.775 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.800 [INFO][5461] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" HandleID="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.806 [INFO][5461] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" HandleID="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"calico-kube-controllers-5f949c8fd5-hb8t6", "timestamp":"2026-04-17 01:04:48.800048938 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000230dc0)} Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.806 [INFO][5461] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.806 [INFO][5461] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.806 [INFO][5461] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.808 [INFO][5461] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.812 [INFO][5461] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.815 [INFO][5461] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.817 [INFO][5461] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.818 [INFO][5461] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.818 [INFO][5461] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.819 [INFO][5461] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.824 [INFO][5461] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.832 [INFO][5461] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.6/26] block=192.168.113.0/26 handle="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.832 [INFO][5461] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.6/26] handle="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.832 [INFO][5461] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:48.854675 containerd[1896]: 2026-04-17 01:04:48.832 [INFO][5461] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.6/26] IPv6=[] ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" HandleID="k8s-pod-network.e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Workload="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.835 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0", GenerateName:"calico-kube-controllers-5f949c8fd5-", Namespace:"calico-system", SelfLink:"", UID:"cfd8febc-5f0f-4b78-ab8b-3414e71a937d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f949c8fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"calico-kube-controllers-5f949c8fd5-hb8t6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21c1b50465e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.836 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.6/32] ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.836 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21c1b50465e ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.838 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.838 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0", GenerateName:"calico-kube-controllers-5f949c8fd5-", Namespace:"calico-system", SelfLink:"", UID:"cfd8febc-5f0f-4b78-ab8b-3414e71a937d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f949c8fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f", Pod:"calico-kube-controllers-5f949c8fd5-hb8t6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21c1b50465e", MAC:"a6:50:55:4a:d4:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.856827 containerd[1896]: 2026-04-17 01:04:48.853 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" Namespace="calico-system" Pod="calico-kube-controllers-5f949c8fd5-hb8t6" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-calico--kube--controllers--5f949c8fd5--hb8t6-eth0" Apr 17 01:04:48.911380 containerd[1896]: time="2026-04-17T01:04:48.911132666Z" level=info msg="connecting to shim e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f" address="unix:///run/containerd/s/0699d6f9588bbab10b35fef15db32595c0ae0bb24ade67b7577d910e7df52913" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:48.937360 systemd[1]: Started cri-containerd-e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f.scope - libcontainer container e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f. 
Apr 17 01:04:48.957875 systemd-networkd[1484]: calibe8da8a81c1: Link UP Apr 17 01:04:48.958035 systemd-networkd[1484]: calibe8da8a81c1: Gained carrier Apr 17 01:04:48.978437 kubelet[3477]: I0417 01:04:48.978350 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fq27z" podStartSLOduration=42.978333349 podStartE2EDuration="42.978333349s" podCreationTimestamp="2026-04-17 01:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:48.902949196 +0000 UTC m=+49.256664309" watchObservedRunningTime="2026-04-17 01:04:48.978333349 +0000 UTC m=+49.332048462" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.775 [INFO][5437] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0 goldmane-5b85766d88- calico-system 3501e470-2bb6-401b-88aa-2d4b73e90ce2 835 0 2026-04-17 01:04:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 goldmane-5b85766d88-gkl6l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibe8da8a81c1 [] [] }} ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.776 [INFO][5437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" 
WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.800 [INFO][5459] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" HandleID="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Workload="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.810 [INFO][5459] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" HandleID="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Workload="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"goldmane-5b85766d88-gkl6l", "timestamp":"2026-04-17 01:04:48.80092328 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000200dc0)} Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.810 [INFO][5459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.833 [INFO][5459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.833 [INFO][5459] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.909 [INFO][5459] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.921 [INFO][5459] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.924 [INFO][5459] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.927 [INFO][5459] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.930 [INFO][5459] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.930 [INFO][5459] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.932 [INFO][5459] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419 Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.938 [INFO][5459] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.950 [INFO][5459] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.7/26] block=192.168.113.0/26 handle="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.950 [INFO][5459] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.7/26] handle="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.950 [INFO][5459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:48.984822 containerd[1896]: 2026-04-17 01:04:48.950 [INFO][5459] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.7/26] IPv6=[] ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" HandleID="k8s-pod-network.abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Workload="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.953 [INFO][5437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3501e470-2bb6-401b-88aa-2d4b73e90ce2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"goldmane-5b85766d88-gkl6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe8da8a81c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.953 [INFO][5437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.7/32] ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.953 [INFO][5437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe8da8a81c1 ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.961 [INFO][5437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.964 [INFO][5437] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"3501e470-2bb6-401b-88aa-2d4b73e90ce2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419", Pod:"goldmane-5b85766d88-gkl6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe8da8a81c1", MAC:"4a:36:d0:13:65:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:48.985175 containerd[1896]: 2026-04-17 01:04:48.978 [INFO][5437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" Namespace="calico-system" Pod="goldmane-5b85766d88-gkl6l" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-goldmane--5b85766d88--gkl6l-eth0" Apr 17 01:04:49.022567 containerd[1896]: time="2026-04-17T01:04:49.022539988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f949c8fd5-hb8t6,Uid:cfd8febc-5f0f-4b78-ab8b-3414e71a937d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f\"" Apr 17 01:04:49.087643 containerd[1896]: time="2026-04-17T01:04:49.087609294Z" level=info msg="connecting to shim abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419" address="unix:///run/containerd/s/9f9aa4fdcd00bd637862911e595943f512fc8df320b914facdce278d5c4f8e6f" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:49.106233 systemd[1]: Started cri-containerd-abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419.scope - libcontainer container abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419. 
Apr 17 01:04:49.140217 containerd[1896]: time="2026-04-17T01:04:49.140166416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gkl6l,Uid:3501e470-2bb6-401b-88aa-2d4b73e90ce2,Namespace:calico-system,Attempt:0,} returns sandbox id \"abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419\"" Apr 17 01:04:49.233239 systemd-networkd[1484]: calida0aa3032a8: Gained IPv6LL Apr 17 01:04:49.489235 systemd-networkd[1484]: cali04e9663c5f0: Gained IPv6LL Apr 17 01:04:49.618241 systemd-networkd[1484]: cali4a56969ebe3: Gained IPv6LL Apr 17 01:04:50.129331 systemd-networkd[1484]: cali21c1b50465e: Gained IPv6LL Apr 17 01:04:50.176518 containerd[1896]: time="2026-04-17T01:04:50.176467495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:50.179555 containerd[1896]: time="2026-04-17T01:04:50.179418420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 01:04:50.183712 containerd[1896]: time="2026-04-17T01:04:50.183679036Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:50.192174 containerd[1896]: time="2026-04-17T01:04:50.192127785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:50.192786 containerd[1896]: time="2026-04-17T01:04:50.192481475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.157470303s" Apr 17 01:04:50.192786 containerd[1896]: time="2026-04-17T01:04:50.192509651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 01:04:50.194385 containerd[1896]: time="2026-04-17T01:04:50.194223712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 01:04:50.202384 containerd[1896]: time="2026-04-17T01:04:50.202361950Z" level=info msg="CreateContainer within sandbox \"3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 01:04:50.231446 containerd[1896]: time="2026-04-17T01:04:50.231239499Z" level=info msg="Container c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:50.264949 containerd[1896]: time="2026-04-17T01:04:50.264910742Z" level=info msg="CreateContainer within sandbox \"3ad45a6fa61d963ba147b1f331643d4b6d3b1994d8a5161b2fc3f9bd4e116c79\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a\"" Apr 17 01:04:50.265483 containerd[1896]: time="2026-04-17T01:04:50.265459300Z" level=info msg="StartContainer for \"c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a\"" Apr 17 01:04:50.266669 containerd[1896]: time="2026-04-17T01:04:50.266643211Z" level=info msg="connecting to shim c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a" address="unix:///run/containerd/s/4ff4ea6cd1d5ee93d7dee4a8af9600f3986fd1e49243b9467e3c64bffff186aa" protocol=ttrpc version=3 Apr 17 01:04:50.286230 systemd[1]: Started cri-containerd-c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a.scope - 
libcontainer container c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a. Apr 17 01:04:50.322761 containerd[1896]: time="2026-04-17T01:04:50.322725650Z" level=info msg="StartContainer for \"c67ad44198da082ebc8a701f63dd3453794a05e3f3a86d67d733a8b73335d64a\" returns successfully" Apr 17 01:04:50.576882 containerd[1896]: time="2026-04-17T01:04:50.576824513Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:50.578945 systemd-networkd[1484]: calibe8da8a81c1: Gained IPv6LL Apr 17 01:04:50.581321 containerd[1896]: time="2026-04-17T01:04:50.580294668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 01:04:50.581665 containerd[1896]: time="2026-04-17T01:04:50.581639495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 387.39283ms" Apr 17 01:04:50.581756 containerd[1896]: time="2026-04-17T01:04:50.581741498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 01:04:50.584239 containerd[1896]: time="2026-04-17T01:04:50.584212339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 01:04:50.593174 containerd[1896]: time="2026-04-17T01:04:50.593151741Z" level=info msg="CreateContainer within sandbox \"167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 01:04:50.622183 containerd[1896]: time="2026-04-17T01:04:50.622148078Z" 
level=info msg="Container 384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:50.647998 containerd[1896]: time="2026-04-17T01:04:50.647957082Z" level=info msg="CreateContainer within sandbox \"167f93f83a25e2eaa7d322fac42ffa033d094c1b266c22a54f01c363138d64ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7\"" Apr 17 01:04:50.649208 containerd[1896]: time="2026-04-17T01:04:50.648626420Z" level=info msg="StartContainer for \"384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7\"" Apr 17 01:04:50.649714 containerd[1896]: time="2026-04-17T01:04:50.649673207Z" level=info msg="connecting to shim 384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7" address="unix:///run/containerd/s/de2afcbf83d01329f98087ee9f30319060ca7b3bdce8e3d5211bffa16d48912e" protocol=ttrpc version=3 Apr 17 01:04:50.669682 systemd[1]: Started cri-containerd-384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7.scope - libcontainer container 384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7. 
Apr 17 01:04:50.722281 containerd[1896]: time="2026-04-17T01:04:50.722015768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfkv8,Uid:09a42fe5-6c41-4f94-bcf2-6a0d09909e86,Namespace:kube-system,Attempt:0,}" Apr 17 01:04:50.734975 containerd[1896]: time="2026-04-17T01:04:50.734945843Z" level=info msg="StartContainer for \"384a1a22f62a2bf7853530922c835e70d3b1762737a0c63142de0af81da964d7\" returns successfully" Apr 17 01:04:50.861556 systemd-networkd[1484]: cali04862020ec2: Link UP Apr 17 01:04:50.862701 systemd-networkd[1484]: cali04862020ec2: Gained carrier Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.770 [INFO][5720] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0 coredns-674b8bbfcf- kube-system 09a42fe5-6c41-4f94-bcf2-6a0d09909e86 826 0 2026-04-17 01:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.4-n-25f3036c32 coredns-674b8bbfcf-jfkv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04862020ec2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.770 [INFO][5720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.802 [INFO][5735] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" HandleID="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.809 [INFO][5735] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" HandleID="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.4-n-25f3036c32", "pod":"coredns-674b8bbfcf-jfkv8", "timestamp":"2026-04-17 01:04:50.80276871 +0000 UTC"}, Hostname:"ci-4459.2.4-n-25f3036c32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003ed080)} Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.809 [INFO][5735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.809 [INFO][5735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.809 [INFO][5735] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.4-n-25f3036c32' Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.812 [INFO][5735] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.818 [INFO][5735] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.826 [INFO][5735] ipam/ipam.go 526: Trying affinity for 192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.828 [INFO][5735] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.832 [INFO][5735] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.832 [INFO][5735] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.834 [INFO][5735] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0 Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.842 [INFO][5735] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.852 [INFO][5735] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.8/26] block=192.168.113.0/26 handle="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.852 [INFO][5735] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.8/26] handle="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" host="ci-4459.2.4-n-25f3036c32" Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.852 [INFO][5735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 01:04:50.877869 containerd[1896]: 2026-04-17 01:04:50.852 [INFO][5735] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.8/26] IPv6=[] ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" HandleID="k8s-pod-network.789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Workload="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.856 [INFO][5720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"09a42fe5-6c41-4f94-bcf2-6a0d09909e86", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"", Pod:"coredns-674b8bbfcf-jfkv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04862020ec2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.856 [INFO][5720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.8/32] ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.856 [INFO][5720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04862020ec2 ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.863 [INFO][5720] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.864 [INFO][5720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"09a42fe5-6c41-4f94-bcf2-6a0d09909e86", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.4-n-25f3036c32", ContainerID:"789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0", Pod:"coredns-674b8bbfcf-jfkv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04862020ec2", MAC:"e2:7e:4b:3b:a7:c5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 01:04:50.879348 containerd[1896]: 2026-04-17 01:04:50.874 [INFO][5720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfkv8" WorkloadEndpoint="ci--4459.2.4--n--25f3036c32-k8s-coredns--674b8bbfcf--jfkv8-eth0" Apr 17 01:04:50.940855 kubelet[3477]: I0417 01:04:50.940486 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fbdc68667-h58vw" podStartSLOduration=29.78144105 podStartE2EDuration="31.940468969s" podCreationTimestamp="2026-04-17 01:04:19 +0000 UTC" firstStartedPulling="2026-04-17 01:04:48.03438358 +0000 UTC m=+48.388098693" lastFinishedPulling="2026-04-17 01:04:50.193411499 +0000 UTC m=+50.547126612" observedRunningTime="2026-04-17 01:04:50.939566817 +0000 UTC m=+51.293281930" watchObservedRunningTime="2026-04-17 01:04:50.940468969 +0000 UTC m=+51.294184082" Apr 17 01:04:50.940855 kubelet[3477]: I0417 01:04:50.940668 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fbdc68667-gbgr5" podStartSLOduration=29.495322195 podStartE2EDuration="31.94066427s" podCreationTimestamp="2026-04-17 01:04:19 +0000 UTC" firstStartedPulling="2026-04-17 01:04:48.138238863 +0000 UTC m=+48.491953976" lastFinishedPulling="2026-04-17 01:04:50.583580938 +0000 UTC m=+50.937296051" 
observedRunningTime="2026-04-17 01:04:50.924823767 +0000 UTC m=+51.278538888" watchObservedRunningTime="2026-04-17 01:04:50.94066427 +0000 UTC m=+51.294379383" Apr 17 01:04:50.946323 containerd[1896]: time="2026-04-17T01:04:50.946289826Z" level=info msg="connecting to shim 789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0" address="unix:///run/containerd/s/c4c167316d205295981249683e624d845a631fe656245519f03d8bd8327f977a" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:04:50.983439 systemd[1]: Started cri-containerd-789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0.scope - libcontainer container 789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0. Apr 17 01:04:51.026845 containerd[1896]: time="2026-04-17T01:04:51.026746521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfkv8,Uid:09a42fe5-6c41-4f94-bcf2-6a0d09909e86,Namespace:kube-system,Attempt:0,} returns sandbox id \"789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0\"" Apr 17 01:04:51.036585 containerd[1896]: time="2026-04-17T01:04:51.036534354Z" level=info msg="CreateContainer within sandbox \"789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 01:04:51.067148 containerd[1896]: time="2026-04-17T01:04:51.067052947Z" level=info msg="Container 4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:51.089258 containerd[1896]: time="2026-04-17T01:04:51.089228762Z" level=info msg="CreateContainer within sandbox \"789922ad01fb7811958be3658416bba5cb8ec945e4f1d98d8ca735ac8779efa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673\"" Apr 17 01:04:51.089846 containerd[1896]: time="2026-04-17T01:04:51.089822505Z" level=info msg="StartContainer for 
\"4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673\"" Apr 17 01:04:51.090634 containerd[1896]: time="2026-04-17T01:04:51.090610526Z" level=info msg="connecting to shim 4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673" address="unix:///run/containerd/s/c4c167316d205295981249683e624d845a631fe656245519f03d8bd8327f977a" protocol=ttrpc version=3 Apr 17 01:04:51.113296 systemd[1]: Started cri-containerd-4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673.scope - libcontainer container 4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673. Apr 17 01:04:51.152530 containerd[1896]: time="2026-04-17T01:04:51.152503935Z" level=info msg="StartContainer for \"4f90106d74a72cf72f96ed262e04d9afc4efc3849950ac649fcf8c358d6e5673\" returns successfully" Apr 17 01:04:51.957078 kubelet[3477]: I0417 01:04:51.957023 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jfkv8" podStartSLOduration=45.957009414 podStartE2EDuration="45.957009414s" podCreationTimestamp="2026-04-17 01:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:04:51.939630125 +0000 UTC m=+52.293345262" watchObservedRunningTime="2026-04-17 01:04:51.957009414 +0000 UTC m=+52.310724527" Apr 17 01:04:52.049538 systemd-networkd[1484]: cali04862020ec2: Gained IPv6LL Apr 17 01:04:52.752892 containerd[1896]: time="2026-04-17T01:04:52.752842889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:52.758630 containerd[1896]: time="2026-04-17T01:04:52.758600656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 01:04:52.764794 containerd[1896]: time="2026-04-17T01:04:52.764765834Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:52.771332 containerd[1896]: time="2026-04-17T01:04:52.771302702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:52.771740 containerd[1896]: time="2026-04-17T01:04:52.771714008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.1874711s" Apr 17 01:04:52.771770 containerd[1896]: time="2026-04-17T01:04:52.771741329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 01:04:52.772841 containerd[1896]: time="2026-04-17T01:04:52.772814413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 01:04:52.790629 containerd[1896]: time="2026-04-17T01:04:52.790604720Z" level=info msg="CreateContainer within sandbox \"e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 01:04:52.812712 containerd[1896]: time="2026-04-17T01:04:52.812218416Z" level=info msg="Container f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:52.836348 containerd[1896]: time="2026-04-17T01:04:52.836327760Z" level=info msg="CreateContainer within sandbox \"e33864576fc87f97c2c50231ae460d7157cd9e9c0b1ea5858521926943de359f\" 
for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547\"" Apr 17 01:04:52.836830 containerd[1896]: time="2026-04-17T01:04:52.836799517Z" level=info msg="StartContainer for \"f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547\"" Apr 17 01:04:52.838445 containerd[1896]: time="2026-04-17T01:04:52.838426664Z" level=info msg="connecting to shim f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547" address="unix:///run/containerd/s/0699d6f9588bbab10b35fef15db32595c0ae0bb24ade67b7577d910e7df52913" protocol=ttrpc version=3 Apr 17 01:04:52.857206 systemd[1]: Started cri-containerd-f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547.scope - libcontainer container f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547. Apr 17 01:04:52.894745 containerd[1896]: time="2026-04-17T01:04:52.894691837Z" level=info msg="StartContainer for \"f7217d06eb8827701983f95d57840e8c0461858b2c8246128ed5ef6fc189f547\" returns successfully" Apr 17 01:04:52.931805 kubelet[3477]: I0417 01:04:52.931746 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f949c8fd5-hb8t6" podStartSLOduration=29.184033883 podStartE2EDuration="32.931731641s" podCreationTimestamp="2026-04-17 01:04:20 +0000 UTC" firstStartedPulling="2026-04-17 01:04:49.024797263 +0000 UTC m=+49.378512376" lastFinishedPulling="2026-04-17 01:04:52.772495021 +0000 UTC m=+53.126210134" observedRunningTime="2026-04-17 01:04:52.931534236 +0000 UTC m=+53.285249349" watchObservedRunningTime="2026-04-17 01:04:52.931731641 +0000 UTC m=+53.285446754" Apr 17 01:04:54.811397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948089501.mount: Deactivated successfully. 
Apr 17 01:04:55.244999 containerd[1896]: time="2026-04-17T01:04:55.244953358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:55.248734 containerd[1896]: time="2026-04-17T01:04:55.248706904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 01:04:55.251891 containerd[1896]: time="2026-04-17T01:04:55.251855587Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:55.257458 containerd[1896]: time="2026-04-17T01:04:55.257420253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 01:04:55.257890 containerd[1896]: time="2026-04-17T01:04:55.257863416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.48501957s" Apr 17 01:04:55.257922 containerd[1896]: time="2026-04-17T01:04:55.257891873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 01:04:55.269376 containerd[1896]: time="2026-04-17T01:04:55.269350990Z" level=info msg="CreateContainer within sandbox \"abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 01:04:55.300365 containerd[1896]: time="2026-04-17T01:04:55.300291890Z" 
level=info msg="Container 341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:04:55.320040 containerd[1896]: time="2026-04-17T01:04:55.319969079Z" level=info msg="CreateContainer within sandbox \"abdb98424014478e6d6a51fbbd866243f6d6c24a37103a18594b0a3ea113a419\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4\"" Apr 17 01:04:55.320445 containerd[1896]: time="2026-04-17T01:04:55.320361137Z" level=info msg="StartContainer for \"341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4\"" Apr 17 01:04:55.322796 containerd[1896]: time="2026-04-17T01:04:55.322769880Z" level=info msg="connecting to shim 341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4" address="unix:///run/containerd/s/9f9aa4fdcd00bd637862911e595943f512fc8df320b914facdce278d5c4f8e6f" protocol=ttrpc version=3 Apr 17 01:04:55.360220 systemd[1]: Started cri-containerd-341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4.scope - libcontainer container 341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4. 
Apr 17 01:04:55.401067 containerd[1896]: time="2026-04-17T01:04:55.401026679Z" level=info msg="StartContainer for \"341e29244d2867792289878a556d6e8d4cd70e4ba66053713d58736d9363bbb4\" returns successfully" Apr 17 01:04:55.946086 kubelet[3477]: I0417 01:04:55.946031 3477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-gkl6l" podStartSLOduration=30.828910432 podStartE2EDuration="36.946017585s" podCreationTimestamp="2026-04-17 01:04:19 +0000 UTC" firstStartedPulling="2026-04-17 01:04:49.141459586 +0000 UTC m=+49.495174699" lastFinishedPulling="2026-04-17 01:04:55.258566739 +0000 UTC m=+55.612281852" observedRunningTime="2026-04-17 01:04:55.94465599 +0000 UTC m=+56.298371111" watchObservedRunningTime="2026-04-17 01:04:55.946017585 +0000 UTC m=+56.299732698" Apr 17 01:05:50.592324 systemd[1]: Started sshd@7-10.0.0.24:22-20.229.252.112:43724.service - OpenSSH per-connection server daemon (20.229.252.112:43724). Apr 17 01:05:51.376415 sshd[6222]: Accepted publickey for core from 20.229.252.112 port 43724 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:05:51.378194 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:05:51.382579 systemd-logind[1875]: New session 10 of user core. Apr 17 01:05:51.390208 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 01:05:51.873406 sshd[6225]: Connection closed by 20.229.252.112 port 43724 Apr 17 01:05:51.873951 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Apr 17 01:05:51.877077 systemd[1]: sshd@7-10.0.0.24:22-20.229.252.112:43724.service: Deactivated successfully. Apr 17 01:05:51.878636 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 01:05:51.879269 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit. Apr 17 01:05:51.880704 systemd-logind[1875]: Removed session 10. 
Apr 17 01:05:57.032184 systemd[1]: Started sshd@8-10.0.0.24:22-20.229.252.112:37382.service - OpenSSH per-connection server daemon (20.229.252.112:37382). Apr 17 01:05:57.801218 sshd[6286]: Accepted publickey for core from 20.229.252.112 port 37382 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:05:57.804415 sshd-session[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:05:57.808220 systemd-logind[1875]: New session 11 of user core. Apr 17 01:05:57.816245 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 01:05:58.289752 sshd[6289]: Connection closed by 20.229.252.112 port 37382 Apr 17 01:05:58.290327 sshd-session[6286]: pam_unix(sshd:session): session closed for user core Apr 17 01:05:58.293593 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit. Apr 17 01:05:58.293851 systemd[1]: sshd@8-10.0.0.24:22-20.229.252.112:37382.service: Deactivated successfully. Apr 17 01:05:58.296957 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 01:05:58.298924 systemd-logind[1875]: Removed session 11. Apr 17 01:06:03.447719 systemd[1]: Started sshd@9-10.0.0.24:22-20.229.252.112:37388.service - OpenSSH per-connection server daemon (20.229.252.112:37388). Apr 17 01:06:04.215135 sshd[6314]: Accepted publickey for core from 20.229.252.112 port 37388 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:04.215979 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:04.219616 systemd-logind[1875]: New session 12 of user core. Apr 17 01:06:04.229328 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 17 01:06:04.701784 sshd[6317]: Connection closed by 20.229.252.112 port 37388 Apr 17 01:06:04.702324 sshd-session[6314]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:04.705738 systemd[1]: sshd@9-10.0.0.24:22-20.229.252.112:37388.service: Deactivated successfully. Apr 17 01:06:04.707479 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 01:06:04.709602 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit. Apr 17 01:06:04.710691 systemd-logind[1875]: Removed session 12. Apr 17 01:06:09.857781 systemd[1]: Started sshd@10-10.0.0.24:22-20.229.252.112:60398.service - OpenSSH per-connection server daemon (20.229.252.112:60398). Apr 17 01:06:10.627624 sshd[6379]: Accepted publickey for core from 20.229.252.112 port 60398 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:10.628700 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:10.632462 systemd-logind[1875]: New session 13 of user core. Apr 17 01:06:10.639227 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 01:06:11.116184 sshd[6382]: Connection closed by 20.229.252.112 port 60398 Apr 17 01:06:11.116113 sshd-session[6379]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:11.119903 systemd[1]: sshd@10-10.0.0.24:22-20.229.252.112:60398.service: Deactivated successfully. Apr 17 01:06:11.122164 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 01:06:11.123653 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit. Apr 17 01:06:11.125640 systemd-logind[1875]: Removed session 13. Apr 17 01:06:11.270378 systemd[1]: Started sshd@11-10.0.0.24:22-20.229.252.112:60400.service - OpenSSH per-connection server daemon (20.229.252.112:60400). 
Apr 17 01:06:12.037248 sshd[6416]: Accepted publickey for core from 20.229.252.112 port 60400 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:12.038279 sshd-session[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:12.041887 systemd-logind[1875]: New session 14 of user core. Apr 17 01:06:12.049239 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 01:06:12.560448 sshd[6419]: Connection closed by 20.229.252.112 port 60400 Apr 17 01:06:12.560766 sshd-session[6416]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:12.564492 systemd[1]: sshd@11-10.0.0.24:22-20.229.252.112:60400.service: Deactivated successfully. Apr 17 01:06:12.567309 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 01:06:12.567903 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit. Apr 17 01:06:12.568964 systemd-logind[1875]: Removed session 14. Apr 17 01:06:12.717561 systemd[1]: Started sshd@12-10.0.0.24:22-20.229.252.112:60416.service - OpenSSH per-connection server daemon (20.229.252.112:60416). Apr 17 01:06:13.487148 sshd[6429]: Accepted publickey for core from 20.229.252.112 port 60416 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:13.488076 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:13.491835 systemd-logind[1875]: New session 15 of user core. Apr 17 01:06:13.495219 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 01:06:13.977117 sshd[6436]: Connection closed by 20.229.252.112 port 60416 Apr 17 01:06:13.977606 sshd-session[6429]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:13.981022 systemd[1]: sshd@12-10.0.0.24:22-20.229.252.112:60416.service: Deactivated successfully. Apr 17 01:06:13.981036 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit. 
Apr 17 01:06:13.982863 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 01:06:13.984394 systemd-logind[1875]: Removed session 15. Apr 17 01:06:19.136858 systemd[1]: Started sshd@13-10.0.0.24:22-20.229.252.112:38026.service - OpenSSH per-connection server daemon (20.229.252.112:38026). Apr 17 01:06:19.909231 sshd[6502]: Accepted publickey for core from 20.229.252.112 port 38026 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:19.910335 sshd-session[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:19.913875 systemd-logind[1875]: New session 16 of user core. Apr 17 01:06:19.922412 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 01:06:20.400989 sshd[6505]: Connection closed by 20.229.252.112 port 38026 Apr 17 01:06:20.401552 sshd-session[6502]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:20.405345 systemd[1]: sshd@13-10.0.0.24:22-20.229.252.112:38026.service: Deactivated successfully. Apr 17 01:06:20.407147 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 01:06:20.407930 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit. Apr 17 01:06:20.409360 systemd-logind[1875]: Removed session 16. Apr 17 01:06:20.552417 systemd[1]: Started sshd@14-10.0.0.24:22-20.229.252.112:38034.service - OpenSSH per-connection server daemon (20.229.252.112:38034). Apr 17 01:06:21.293617 sshd[6516]: Accepted publickey for core from 20.229.252.112 port 38034 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:21.294694 sshd-session[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:21.298279 systemd-logind[1875]: New session 17 of user core. Apr 17 01:06:21.302227 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 17 01:06:21.910692 sshd[6519]: Connection closed by 20.229.252.112 port 38034 Apr 17 01:06:21.967394 sshd-session[6516]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:21.971344 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit. Apr 17 01:06:21.971574 systemd[1]: sshd@14-10.0.0.24:22-20.229.252.112:38034.service: Deactivated successfully. Apr 17 01:06:21.973964 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 01:06:21.975390 systemd-logind[1875]: Removed session 17. Apr 17 01:06:22.070607 systemd[1]: Started sshd@15-10.0.0.24:22-20.229.252.112:38044.service - OpenSSH per-connection server daemon (20.229.252.112:38044). Apr 17 01:06:22.845161 sshd[6529]: Accepted publickey for core from 20.229.252.112 port 38044 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:22.846248 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:22.849929 systemd-logind[1875]: New session 18 of user core. Apr 17 01:06:22.857214 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 01:06:23.701044 sshd[6532]: Connection closed by 20.229.252.112 port 38044 Apr 17 01:06:23.701248 sshd-session[6529]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:23.704954 systemd[1]: sshd@15-10.0.0.24:22-20.229.252.112:38044.service: Deactivated successfully. Apr 17 01:06:23.707478 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 01:06:23.708550 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit. Apr 17 01:06:23.710454 systemd-logind[1875]: Removed session 18. Apr 17 01:06:23.853960 systemd[1]: Started sshd@16-10.0.0.24:22-20.229.252.112:38052.service - OpenSSH per-connection server daemon (20.229.252.112:38052). 
Apr 17 01:06:24.611475 sshd[6582]: Accepted publickey for core from 20.229.252.112 port 38052 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:24.613571 sshd-session[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:24.618681 systemd-logind[1875]: New session 19 of user core. Apr 17 01:06:24.622207 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 01:06:25.180902 sshd[6585]: Connection closed by 20.229.252.112 port 38052 Apr 17 01:06:25.180820 sshd-session[6582]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:25.185027 systemd[1]: sshd@16-10.0.0.24:22-20.229.252.112:38052.service: Deactivated successfully. Apr 17 01:06:25.186860 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 01:06:25.187546 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit. Apr 17 01:06:25.190951 systemd-logind[1875]: Removed session 19. Apr 17 01:06:25.343140 systemd[1]: Started sshd@17-10.0.0.24:22-20.229.252.112:35462.service - OpenSSH per-connection server daemon (20.229.252.112:35462). Apr 17 01:06:26.111972 sshd[6595]: Accepted publickey for core from 20.229.252.112 port 35462 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:26.113128 sshd-session[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:26.117358 systemd-logind[1875]: New session 20 of user core. Apr 17 01:06:26.122261 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 01:06:26.597114 sshd[6598]: Connection closed by 20.229.252.112 port 35462 Apr 17 01:06:26.597005 sshd-session[6595]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:26.600750 systemd[1]: sshd@17-10.0.0.24:22-20.229.252.112:35462.service: Deactivated successfully. Apr 17 01:06:26.602857 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 01:06:26.605469 systemd-logind[1875]: Session 20 logged out. 
Waiting for processes to exit. Apr 17 01:06:26.606307 systemd-logind[1875]: Removed session 20. Apr 17 01:06:31.754378 systemd[1]: Started sshd@18-10.0.0.24:22-20.229.252.112:35474.service - OpenSSH per-connection server daemon (20.229.252.112:35474). Apr 17 01:06:32.523130 sshd[6634]: Accepted publickey for core from 20.229.252.112 port 35474 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:32.524080 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:32.527936 systemd-logind[1875]: New session 21 of user core. Apr 17 01:06:32.535216 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 01:06:33.005913 sshd[6637]: Connection closed by 20.229.252.112 port 35474 Apr 17 01:06:33.006292 sshd-session[6634]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:33.009597 systemd[1]: sshd@18-10.0.0.24:22-20.229.252.112:35474.service: Deactivated successfully. Apr 17 01:06:33.011447 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 01:06:33.012709 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit. Apr 17 01:06:33.013793 systemd-logind[1875]: Removed session 21. Apr 17 01:06:38.167630 systemd[1]: Started sshd@19-10.0.0.24:22-20.229.252.112:55646.service - OpenSSH per-connection server daemon (20.229.252.112:55646). Apr 17 01:06:38.932809 sshd[6651]: Accepted publickey for core from 20.229.252.112 port 55646 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:38.951041 sshd-session[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:38.954870 systemd-logind[1875]: New session 22 of user core. Apr 17 01:06:38.961418 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 17 01:06:39.416316 sshd[6678]: Connection closed by 20.229.252.112 port 55646 Apr 17 01:06:39.417989 sshd-session[6651]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:39.420917 systemd[1]: sshd@19-10.0.0.24:22-20.229.252.112:55646.service: Deactivated successfully. Apr 17 01:06:39.423193 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 01:06:39.424418 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit. Apr 17 01:06:39.426218 systemd-logind[1875]: Removed session 22. Apr 17 01:06:44.565245 systemd[1]: Started sshd@20-10.0.0.24:22-20.229.252.112:55660.service - OpenSSH per-connection server daemon (20.229.252.112:55660). Apr 17 01:06:45.307126 sshd[6690]: Accepted publickey for core from 20.229.252.112 port 55660 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:45.307860 sshd-session[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:45.312223 systemd-logind[1875]: New session 23 of user core. Apr 17 01:06:45.319215 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 01:06:45.774191 sshd[6693]: Connection closed by 20.229.252.112 port 55660 Apr 17 01:06:45.775069 sshd-session[6690]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:45.778410 systemd-logind[1875]: Session 23 logged out. Waiting for processes to exit. Apr 17 01:06:45.779362 systemd[1]: sshd@20-10.0.0.24:22-20.229.252.112:55660.service: Deactivated successfully. Apr 17 01:06:45.782439 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 01:06:45.785469 systemd-logind[1875]: Removed session 23. Apr 17 01:06:50.933775 systemd[1]: Started sshd@21-10.0.0.24:22-20.229.252.112:51738.service - OpenSSH per-connection server daemon (20.229.252.112:51738). 
Apr 17 01:06:51.712605 sshd[6705]: Accepted publickey for core from 20.229.252.112 port 51738 ssh2: RSA SHA256:c19BMoql8CuhJvMKYSt9rrISjjXYHZeDDx7/0rvnGNg Apr 17 01:06:51.713688 sshd-session[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:06:51.717600 systemd-logind[1875]: New session 24 of user core. Apr 17 01:06:51.721222 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 01:06:52.211579 sshd[6708]: Connection closed by 20.229.252.112 port 51738 Apr 17 01:06:52.211903 sshd-session[6705]: pam_unix(sshd:session): session closed for user core Apr 17 01:06:52.214666 systemd-logind[1875]: Session 24 logged out. Waiting for processes to exit. Apr 17 01:06:52.214977 systemd[1]: sshd@21-10.0.0.24:22-20.229.252.112:51738.service: Deactivated successfully. Apr 17 01:06:52.216759 systemd[1]: session-24.scope: Deactivated successfully. Apr 17 01:06:52.219077 systemd-logind[1875]: Removed session 24.