Apr 21 10:06:14.199908 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 21 10:06:14.199929 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 10:06:14.199936 kernel: KASLR enabled
Apr 21 10:06:14.199942 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 21 10:06:14.199950 kernel: printk: bootconsole [pl11] enabled
Apr 21 10:06:14.199955 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:06:14.199962 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 21 10:06:14.199968 kernel: random: crng init done
Apr 21 10:06:14.199974 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:06:14.199980 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 21 10:06:14.199986 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.199992 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200000 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 21 10:06:14.200006 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200014 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200020 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200027 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200034 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200041 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200047 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 21 10:06:14.200054 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200060 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 21 10:06:14.200066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 21 10:06:14.200073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 21 10:06:14.200079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 21 10:06:14.200085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 21 10:06:14.200092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 21 10:06:14.200098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 21 10:06:14.200106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 21 10:06:14.200112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 21 10:06:14.200118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 21 10:06:14.200125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 21 10:06:14.200131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 21 10:06:14.200137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 21 10:06:14.200144 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 21 10:06:14.200150 kernel: Zone ranges:
Apr 21 10:06:14.200156 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 21 10:06:14.200162 kernel: DMA32 empty
Apr 21 10:06:14.200168 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 21 10:06:14.200175 kernel: Movable zone start for each node
Apr 21 10:06:14.200185 kernel: Early memory node ranges
Apr 21 10:06:14.200192 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 21 10:06:14.200199 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 21 10:06:14.200206 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 21 10:06:14.200212 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 21 10:06:14.200220 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 21 10:06:14.200227 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 21 10:06:14.200234 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 21 10:06:14.200241 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 21 10:06:14.200247 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 21 10:06:14.200254 kernel: psci: probing for conduit method from ACPI.
Apr 21 10:06:14.200261 kernel: psci: PSCIv1.1 detected in firmware.
Apr 21 10:06:14.200267 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 10:06:14.200274 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 21 10:06:14.200280 kernel: psci: SMC Calling Convention v1.4
Apr 21 10:06:14.200287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 21 10:06:14.200294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 21 10:06:14.200302 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 10:06:14.200309 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 10:06:14.202331 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 10:06:14.202348 kernel: Detected PIPT I-cache on CPU0
Apr 21 10:06:14.202356 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 10:06:14.202363 kernel: CPU features: detected: Hardware dirty bit management
Apr 21 10:06:14.202370 kernel: CPU features: detected: Spectre-BHB
Apr 21 10:06:14.202377 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 21 10:06:14.202384 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 21 10:06:14.202391 kernel: CPU features: detected: ARM erratum 1418040
Apr 21 10:06:14.202398 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 21 10:06:14.202408 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 21 10:06:14.202415 kernel: alternatives: applying boot alternatives
Apr 21 10:06:14.202424 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:06:14.202431 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:06:14.202438 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:06:14.202445 kernel: Fallback order for Node 0: 0
Apr 21 10:06:14.202451 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 21 10:06:14.202458 kernel: Policy zone: Normal
Apr 21 10:06:14.202465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:06:14.202472 kernel: software IO TLB: area num 2.
Apr 21 10:06:14.202479 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 21 10:06:14.202487 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Apr 21 10:06:14.202494 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:06:14.202501 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:06:14.202520 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:06:14.202527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:06:14.202534 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:06:14.202541 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:06:14.202548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:06:14.202555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:06:14.202561 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 10:06:14.202568 kernel: GICv3: 960 SPIs implemented
Apr 21 10:06:14.202576 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 10:06:14.202583 kernel: Root IRQ handler: gic_handle_irq
Apr 21 10:06:14.202590 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 21 10:06:14.202597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 21 10:06:14.202604 kernel: ITS: No ITS available, not enabling LPIs
Apr 21 10:06:14.202611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:06:14.202618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 10:06:14.202625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 21 10:06:14.202632 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 21 10:06:14.202639 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 21 10:06:14.202646 kernel: Console: colour dummy device 80x25
Apr 21 10:06:14.202655 kernel: printk: console [tty1] enabled
Apr 21 10:06:14.202662 kernel: ACPI: Core revision 20230628
Apr 21 10:06:14.202669 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 21 10:06:14.202676 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:06:14.202683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:06:14.202690 kernel: landlock: Up and running.
Apr 21 10:06:14.202697 kernel: SELinux: Initializing.
Apr 21 10:06:14.202704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.202711 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.202719 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:06:14.202726 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:06:14.202734 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 21 10:06:14.202741 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 21 10:06:14.202747 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 21 10:06:14.202754 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:06:14.202761 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:06:14.202768 kernel: Remapping and enabling EFI services.
Apr 21 10:06:14.202781 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:06:14.202789 kernel: Detected PIPT I-cache on CPU1
Apr 21 10:06:14.202796 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 21 10:06:14.202803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 10:06:14.202812 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 21 10:06:14.202819 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:06:14.202827 kernel: SMP: Total of 2 processors activated.
Apr 21 10:06:14.202834 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 10:06:14.202841 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 21 10:06:14.202850 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 21 10:06:14.202858 kernel: CPU features: detected: CRC32 instructions
Apr 21 10:06:14.202865 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 21 10:06:14.202873 kernel: CPU features: detected: LSE atomic instructions
Apr 21 10:06:14.202880 kernel: CPU features: detected: Privileged Access Never
Apr 21 10:06:14.202887 kernel: CPU: All CPU(s) started at EL1
Apr 21 10:06:14.202894 kernel: alternatives: applying system-wide alternatives
Apr 21 10:06:14.202902 kernel: devtmpfs: initialized
Apr 21 10:06:14.202909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:06:14.202918 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:06:14.202925 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:06:14.202933 kernel: SMBIOS 3.1.0 present.
Apr 21 10:06:14.202940 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 21 10:06:14.202947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:06:14.202955 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 10:06:14.202962 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 10:06:14.202969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 10:06:14.202977 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:06:14.202986 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 21 10:06:14.202993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:06:14.203000 kernel: cpuidle: using governor menu
Apr 21 10:06:14.203008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 10:06:14.203015 kernel: ASID allocator initialised with 32768 entries
Apr 21 10:06:14.203022 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:06:14.203030 kernel: Serial: AMBA PL011 UART driver
Apr 21 10:06:14.203037 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 21 10:06:14.203045 kernel: Modules: 0 pages in range for non-PLT usage
Apr 21 10:06:14.203053 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 10:06:14.203061 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:06:14.203076 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 10:06:14.203090 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:06:14.203105 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 10:06:14.203121 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:06:14.203128 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:06:14.203135 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:06:14.203143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:06:14.203150 kernel: ACPI: Interpreter enabled
Apr 21 10:06:14.203157 kernel: ACPI: Using GIC for interrupt routing
Apr 21 10:06:14.203165 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 21 10:06:14.203172 kernel: printk: console [ttyAMA0] enabled
Apr 21 10:06:14.203179 kernel: printk: bootconsole [pl11] disabled
Apr 21 10:06:14.203188 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 21 10:06:14.203195 kernel: iommu: Default domain type: Translated
Apr 21 10:06:14.203202 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 10:06:14.203210 kernel: efivars: Registered efivars operations
Apr 21 10:06:14.203217 kernel: vgaarb: loaded
Apr 21 10:06:14.203224 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 10:06:14.203232 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:06:14.203239 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:06:14.203246 kernel: pnp: PnP ACPI init
Apr 21 10:06:14.203255 kernel: pnp: PnP ACPI: found 0 devices
Apr 21 10:06:14.203262 kernel: NET: Registered PF_INET protocol family
Apr 21 10:06:14.203269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:06:14.203277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:06:14.203284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:06:14.203291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:06:14.203299 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:06:14.203306 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:06:14.203313 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.205368 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.205380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:06:14.205389 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:06:14.205396 kernel: kvm [1]: HYP mode not available
Apr 21 10:06:14.205404 kernel: Initialise system trusted keyrings
Apr 21 10:06:14.205412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:06:14.205419 kernel: Key type asymmetric registered
Apr 21 10:06:14.205427 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:06:14.205434 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 10:06:14.205444 kernel: io scheduler mq-deadline registered
Apr 21 10:06:14.205451 kernel: io scheduler kyber registered
Apr 21 10:06:14.205458 kernel: io scheduler bfq registered
Apr 21 10:06:14.205466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:06:14.205473 kernel: thunder_xcv, ver 1.0
Apr 21 10:06:14.205480 kernel: thunder_bgx, ver 1.0
Apr 21 10:06:14.205488 kernel: nicpf, ver 1.0
Apr 21 10:06:14.205495 kernel: nicvf, ver 1.0
Apr 21 10:06:14.205647 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 21 10:06:14.205723 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T10:06:13 UTC (1776765973)
Apr 21 10:06:14.205733 kernel: efifb: probing for efifb
Apr 21 10:06:14.205741 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 21 10:06:14.205748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 21 10:06:14.205755 kernel: efifb: scrolling: redraw
Apr 21 10:06:14.205763 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:06:14.205770 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 10:06:14.205778 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:06:14.205788 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 21 10:06:14.205795 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 21 10:06:14.205803 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 21 10:06:14.205810 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 21 10:06:14.205817 kernel: watchdog: Hard watchdog permanently disabled
Apr 21 10:06:14.205825 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:06:14.205832 kernel: Segment Routing with IPv6
Apr 21 10:06:14.205839 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:06:14.205847 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:06:14.205856 kernel: Key type dns_resolver registered
Apr 21 10:06:14.205863 kernel: registered taskstats version 1
Apr 21 10:06:14.205870 kernel: Loading compiled-in X.509 certificates
Apr 21 10:06:14.205878 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4'
Apr 21 10:06:14.205885 kernel: Key type .fscrypt registered
Apr 21 10:06:14.205893 kernel: Key type fscrypt-provisioning registered
Apr 21 10:06:14.205900 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:06:14.205907 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:06:14.205915 kernel: ima: No architecture policies found
Apr 21 10:06:14.205924 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 21 10:06:14.205931 kernel: clk: Disabling unused clocks
Apr 21 10:06:14.205939 kernel: Freeing unused kernel memory: 39424K
Apr 21 10:06:14.205946 kernel: Run /init as init process
Apr 21 10:06:14.205953 kernel: with arguments:
Apr 21 10:06:14.205961 kernel: /init
Apr 21 10:06:14.205968 kernel: with environment:
Apr 21 10:06:14.205975 kernel: HOME=/
Apr 21 10:06:14.205982 kernel: TERM=linux
Apr 21 10:06:14.205992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:06:14.206003 systemd[1]: Detected virtualization microsoft.
Apr 21 10:06:14.206012 systemd[1]: Detected architecture arm64.
Apr 21 10:06:14.206020 systemd[1]: Running in initrd.
Apr 21 10:06:14.206027 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:06:14.206035 systemd[1]: Hostname set to .
Apr 21 10:06:14.206043 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:06:14.206053 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:06:14.206061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:06:14.206069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:06:14.206078 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:06:14.206087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:06:14.206095 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:06:14.206103 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:06:14.206112 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:06:14.206122 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:06:14.206130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:06:14.206138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:06:14.206147 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:06:14.206154 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:06:14.206162 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:06:14.206170 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:06:14.206178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:06:14.206188 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:06:14.206196 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:06:14.206204 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:06:14.206212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:06:14.206220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:06:14.206228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:06:14.206236 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:06:14.206244 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:06:14.206254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:06:14.206262 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:06:14.206270 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:06:14.206278 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:06:14.206286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:06:14.206311 systemd-journald[217]: Collecting audit messages is disabled.
Apr 21 10:06:14.206347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:14.206356 systemd-journald[217]: Journal started
Apr 21 10:06:14.206375 systemd-journald[217]: Runtime Journal (/run/log/journal/5e090497f3114640aee79e408ee8f4cc) is 8.0M, max 78.5M, 70.5M free.
Apr 21 10:06:14.207838 systemd-modules-load[218]: Inserted module 'overlay'
Apr 21 10:06:14.221267 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:06:14.224640 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:06:14.234092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:06:14.261145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:06:14.261169 kernel: Bridge firewalling registered
Apr 21 10:06:14.258520 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:06:14.261132 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 21 10:06:14.268426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:06:14.277035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:14.295666 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:06:14.310494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:06:14.318589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:06:14.347468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:06:14.353399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:14.364865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:06:14.370190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:06:14.384344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:06:14.406601 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:06:14.418752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:06:14.432456 dracut-cmdline[251]: dracut-dracut-053
Apr 21 10:06:14.443590 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:06:14.435545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:06:14.483351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:06:14.504690 systemd-resolved[254]: Positive Trust Anchors:
Apr 21 10:06:14.504704 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:06:14.504737 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:06:14.506947 systemd-resolved[254]: Defaulting to hostname 'linux'.
Apr 21 10:06:14.507725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:06:14.551368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:06:14.574328 kernel: SCSI subsystem initialized
Apr 21 10:06:14.581327 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:06:14.591363 kernel: iscsi: registered transport (tcp)
Apr 21 10:06:14.607711 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:06:14.607755 kernel: QLogic iSCSI HBA Driver
Apr 21 10:06:14.646528 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:06:14.661432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:06:14.691656 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:06:14.691707 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:06:14.697516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:06:14.745343 kernel: raid6: neonx8 gen() 15801 MB/s
Apr 21 10:06:14.764333 kernel: raid6: neonx4 gen() 15691 MB/s
Apr 21 10:06:14.783339 kernel: raid6: neonx2 gen() 13234 MB/s
Apr 21 10:06:14.803334 kernel: raid6: neonx1 gen() 10555 MB/s
Apr 21 10:06:14.822329 kernel: raid6: int64x8 gen() 6975 MB/s
Apr 21 10:06:14.841349 kernel: raid6: int64x4 gen() 7372 MB/s
Apr 21 10:06:14.861349 kernel: raid6: int64x2 gen() 6146 MB/s
Apr 21 10:06:14.883109 kernel: raid6: int64x1 gen() 5069 MB/s
Apr 21 10:06:14.883162 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s
Apr 21 10:06:14.905185 kernel: raid6: .... xor() 12028 MB/s, rmw enabled
Apr 21 10:06:14.905235 kernel: raid6: using neon recovery algorithm
Apr 21 10:06:14.912329 kernel: xor: measuring software checksum speed
Apr 21 10:06:14.918190 kernel: 8regs : 18955 MB/sec
Apr 21 10:06:14.918233 kernel: 32regs : 19636 MB/sec
Apr 21 10:06:14.925139 kernel: arm64_neon : 26238 MB/sec
Apr 21 10:06:14.925154 kernel: xor: using function: arm64_neon (26238 MB/sec)
Apr 21 10:06:14.974341 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:06:14.984135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:06:14.996445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:06:15.015542 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Apr 21 10:06:15.019993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:06:15.035566 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:06:15.050191 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Apr 21 10:06:15.079295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:06:15.092884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:06:15.130306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:06:15.144532 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:06:15.166118 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:06:15.173933 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:06:15.192759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:06:15.205579 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:06:15.222485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:06:15.242790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:06:15.246756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:15.257331 kernel: hv_vmbus: Vmbus version:5.3
Apr 21 10:06:15.264116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:06:15.271410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:06:15.279504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:15.294195 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.846148 kernel: hv_vmbus: registering driver hv_storvsc
Apr 21 10:06:15.846171 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 21 10:06:15.846181 kernel: hv_vmbus: registering driver hid_hyperv
Apr 21 10:06:15.846190 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 21 10:06:15.846217 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 21 10:06:15.846232 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 21 10:06:15.846242 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 21 10:06:15.846252 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 21 10:06:15.846408 kernel: scsi host0: storvsc_host_t
Apr 21 10:06:15.846505 kernel: hv_vmbus: registering driver hv_netvsc
Apr 21 10:06:15.846515 kernel: scsi host1: storvsc_host_t
Apr 21 10:06:15.846599 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 21 10:06:15.846699 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 21 10:06:15.846793 kernel: PTP clock support registered
Apr 21 10:06:15.846804 kernel: hv_utils: Registering HyperV Utility Driver
Apr 21 10:06:15.846813 kernel: hv_vmbus: registering driver hv_utils
Apr 21 10:06:15.846822 kernel: hv_utils: Heartbeat IC version 3.0
Apr 21 10:06:15.846831 kernel: hv_utils: Shutdown IC version 3.2
Apr 21 10:06:15.846840 kernel: hv_utils: TimeSync IC version 4.0
Apr 21 10:06:15.846852 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 21 10:06:15.846936 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 10:06:15.846946 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 21 10:06:15.355610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.874124 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 21 10:06:15.874357 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:06:15.874448 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 21 10:06:15.874530 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 21 10:06:15.874612 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:06:15.874627 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:06:15.874711 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 21 10:06:15.817479 systemd-resolved[254]: Clock change detected. Flushing caches. Apr 21 10:06:15.887995 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:06:15.905921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:06:15.923143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:06:15.928445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 10:06:15.923296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:06:15.938707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:06:15.954472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:06:15.972771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:06:15.991842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 10:06:15.994425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:06:16.026008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:06:16.056075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Apr 21 10:06:16.079221 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494) Apr 21 10:06:16.093276 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (499) Apr 21 10:06:16.100807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 21 10:06:16.116620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 21 10:06:16.126561 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 21 10:06:16.137793 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 21 10:06:16.157422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:06:16.179225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:06:16.187220 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:06:17.197214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:06:17.198249 disk-uuid[592]: The operation has completed successfully. Apr 21 10:06:17.265429 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:06:17.267225 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:06:17.302352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:06:17.315091 sh[705]: Success Apr 21 10:06:17.335700 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 21 10:06:17.410273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:06:17.430335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:06:17.435423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:06:17.470893 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 Apr 21 10:06:17.470953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:06:17.477639 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:06:17.481856 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:06:17.485510 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:06:17.540905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:06:17.545367 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:06:17.563520 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:06:17.574868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:06:17.609154 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:06:17.609224 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:06:17.613266 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:06:17.629370 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:06:17.642148 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:06:17.646625 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:06:17.654706 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:06:17.668747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:06:17.691715 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:06:17.710425 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 10:06:17.731534 systemd-networkd[889]: lo: Link UP Apr 21 10:06:17.731542 systemd-networkd[889]: lo: Gained carrier Apr 21 10:06:17.732293 systemd-networkd[889]: Enumeration completed Apr 21 10:06:17.732620 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:06:17.732622 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:06:17.733298 systemd-networkd[889]: eth0: Link UP Apr 21 10:06:17.733416 systemd-networkd[889]: eth0: Gained carrier Apr 21 10:06:17.733423 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:06:17.735502 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:06:17.748529 systemd[1]: Reached target network.target - Network. Apr 21 10:06:17.775242 systemd-networkd[889]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 21 10:06:17.895663 ignition[872]: Ignition 2.19.0 Apr 21 10:06:17.895673 ignition[872]: Stage: fetch-offline Apr 21 10:06:17.899830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:06:17.895709 ignition[872]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:17.895716 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:17.895800 ignition[872]: parsed url from cmdline: "" Apr 21 10:06:17.895803 ignition[872]: no config URL provided Apr 21 10:06:17.895808 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:06:17.938652 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: VF slot 1 added Apr 21 10:06:17.921454 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 10:06:17.895814 ignition[872]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:06:17.971686 kernel: hv_vmbus: registering driver hv_pci Apr 21 10:06:17.971709 kernel: hv_pci a8c5028a-26b3-4d40-8cfd-e2d8a4c94150: PCI VMBus probing: Using version 0x10004 Apr 21 10:06:17.971879 kernel: hv_pci a8c5028a-26b3-4d40-8cfd-e2d8a4c94150: PCI host bridge to bus 26b3:00 Apr 21 10:06:17.971958 kernel: pci_bus 26b3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 21 10:06:17.972055 kernel: pci_bus 26b3:00: No busn resource found for root bus, will use [bus 00-ff] Apr 21 10:06:17.895819 ignition[872]: failed to fetch config: resource requires networking Apr 21 10:06:17.980195 kernel: pci 26b3:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 21 10:06:17.896296 ignition[872]: Ignition finished successfully Apr 21 10:06:17.988309 kernel: pci 26b3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 21 10:06:17.956056 ignition[898]: Ignition 2.19.0 Apr 21 10:06:17.994233 kernel: pci 26b3:00:02.0: enabling Extended Tags Apr 21 10:06:17.956063 ignition[898]: Stage: fetch Apr 21 10:06:17.956310 ignition[898]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:17.956319 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:17.960992 ignition[898]: parsed url from cmdline: "" Apr 21 10:06:17.960998 ignition[898]: no config URL provided Apr 21 10:06:18.023985 kernel: pci 26b3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 26b3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 21 10:06:18.024174 kernel: pci_bus 26b3:00: busn_res: [bus 00-ff] end is updated to 00 Apr 21 10:06:18.024275 kernel: pci 26b3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 21 10:06:17.961008 ignition[898]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:06:17.961017 ignition[898]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:06:17.961037 ignition[898]: GET 
http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 21 10:06:18.058413 ignition[898]: GET result: OK Apr 21 10:06:18.058494 ignition[898]: config has been read from IMDS userdata Apr 21 10:06:18.058536 ignition[898]: parsing config with SHA512: a4ca9917e844b90c636e863f9c3497163a7aec661435abd8881b2381c56d13cb42b0d5cfb1d89f8502ca5a67ec36d2c8b10aefc9383f06cb7b40e9ad6c670cd2 Apr 21 10:06:18.068895 unknown[898]: fetched base config from "system" Apr 21 10:06:18.069261 ignition[898]: fetch: fetch complete Apr 21 10:06:18.068902 unknown[898]: fetched base config from "system" Apr 21 10:06:18.069265 ignition[898]: fetch: fetch passed Apr 21 10:06:18.068906 unknown[898]: fetched user config from "azure" Apr 21 10:06:18.069310 ignition[898]: Ignition finished successfully Apr 21 10:06:18.071113 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 10:06:18.111167 kernel: mlx5_core 26b3:00:02.0: enabling device (0000 -> 0002) Apr 21 10:06:18.111375 kernel: mlx5_core 26b3:00:02.0: firmware version: 16.30.5026 Apr 21 10:06:18.111620 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:06:18.130044 ignition[904]: Ignition 2.19.0 Apr 21 10:06:18.132623 ignition[904]: Stage: kargs Apr 21 10:06:18.132815 ignition[904]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:18.137629 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:06:18.132825 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:18.133678 ignition[904]: kargs: kargs passed Apr 21 10:06:18.133721 ignition[904]: Ignition finished successfully Apr 21 10:06:18.162369 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:06:18.187047 ignition[913]: Ignition 2.19.0 Apr 21 10:06:18.187059 ignition[913]: Stage: disks Apr 21 10:06:18.188103 ignition[913]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:18.188136 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:18.196843 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:06:18.192925 ignition[913]: disks: disks passed Apr 21 10:06:18.206492 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:06:18.192969 ignition[913]: Ignition finished successfully Apr 21 10:06:18.215488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:06:18.228387 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:06:18.236092 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:06:18.245360 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:06:18.267517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:06:18.300074 systemd-fsck[926]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 21 10:06:18.309329 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:06:18.327452 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:06:18.357212 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: VF registering: eth1 Apr 21 10:06:18.357406 kernel: mlx5_core 26b3:00:02.0 eth1: joined to eth0 Apr 21 10:06:18.357522 kernel: mlx5_core 26b3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 21 10:06:18.373224 kernel: mlx5_core 26b3:00:02.0 enP9907s1: renamed from eth1 Apr 21 10:06:18.379348 systemd-networkd[889]: eth1: Interface name change detected, renamed to enP9907s1. Apr 21 10:06:18.399214 kernel: EXT4-fs (sda9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none. 
Apr 21 10:06:18.399968 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:06:18.403732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:06:18.430301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:06:18.440669 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:06:18.446363 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 21 10:06:18.456358 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:06:18.456391 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:06:18.484520 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:06:18.506744 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941) Apr 21 10:06:18.506794 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:06:18.514211 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:06:18.514244 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:06:18.514002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:06:18.528306 kernel: mlx5_core 26b3:00:02.0 enP9907s1: Link up Apr 21 10:06:18.523697 systemd-networkd[889]: enP9907s1: Link UP Apr 21 10:06:18.537221 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:06:18.538781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:06:18.564209 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: Data path switched to VF: enP9907s1 Apr 21 10:06:18.661284 coreos-metadata[943]: Apr 21 10:06:18.661 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 21 10:06:18.667752 coreos-metadata[943]: Apr 21 10:06:18.667 INFO Fetch successful Apr 21 10:06:18.667752 coreos-metadata[943]: Apr 21 10:06:18.667 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 21 10:06:18.681424 coreos-metadata[943]: Apr 21 10:06:18.681 INFO Fetch successful Apr 21 10:06:18.686157 coreos-metadata[943]: Apr 21 10:06:18.686 INFO wrote hostname ci-4081.3.7-a-75af1c63bf to /sysroot/etc/hostname Apr 21 10:06:18.693829 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 10:06:18.715901 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:06:18.729770 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:06:18.737995 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:06:18.742775 systemd-networkd[889]: enP9907s1: Gained carrier Apr 21 10:06:18.748832 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:06:18.826357 systemd-networkd[889]: eth0: Gained IPv6LL Apr 21 10:06:19.035439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:06:19.046427 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:06:19.056162 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:06:19.090304 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:06:19.090282 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:06:19.117573 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 21 10:06:19.141377 ignition[1060]: INFO : Ignition 2.19.0 Apr 21 10:06:19.141377 ignition[1060]: INFO : Stage: mount Apr 21 10:06:19.152409 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:19.152409 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:19.152409 ignition[1060]: INFO : mount: mount passed Apr 21 10:06:19.152409 ignition[1060]: INFO : Ignition finished successfully Apr 21 10:06:19.149362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:06:19.171325 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:06:19.189982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:06:19.231306 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1071) Apr 21 10:06:19.241695 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 10:06:19.241734 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 21 10:06:19.245491 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:06:19.253215 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:06:19.254812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:06:19.283184 ignition[1088]: INFO : Ignition 2.19.0 Apr 21 10:06:19.287870 ignition[1088]: INFO : Stage: files Apr 21 10:06:19.287870 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:19.287870 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:19.287870 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:06:19.306180 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:06:19.306180 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:06:19.328053 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:06:19.333995 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:06:19.340329 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:06:19.334049 unknown[1088]: wrote ssh authorized keys file for user: core Apr 21 10:06:19.353742 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 21 10:06:19.362313 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 21 10:06:19.385125 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:06:19.451775 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Apr 21 10:06:19.955990 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 10:06:20.790748 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 21 10:06:20.800955 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 10:06:20.806465 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:06:20.816908 ignition[1088]: INFO : files: files passed Apr 21 10:06:20.816908 ignition[1088]: INFO : Ignition finished successfully Apr 21 10:06:20.819507 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:06:20.849485 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:06:20.864364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 21 10:06:20.883174 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:06:20.883285 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:06:20.915020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.915020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.929172 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.936498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:06:20.942570 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:06:20.963441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:06:20.991033 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:06:20.993239 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:06:21.002154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:06:21.011798 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:06:21.021048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:06:21.034451 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:06:21.053596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:06:21.065467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:06:21.080837 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:06:21.085937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 21 10:06:21.096483 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:06:21.105147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:06:21.105273 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:06:21.118391 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:06:21.123026 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:06:21.132020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:06:21.141619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:06:21.150234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:06:21.159577 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:06:21.168890 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:06:21.178800 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:06:21.187597 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:06:21.197120 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:06:21.205354 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:06:21.205466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:06:21.217110 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:06:21.221876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:06:21.230906 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:06:21.235472 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:06:21.240944 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:06:21.241050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 21 10:06:21.255284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:06:21.255469 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:06:21.264876 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:06:21.264965 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:06:21.274781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 21 10:06:21.326483 ignition[1141]: INFO : Ignition 2.19.0 Apr 21 10:06:21.326483 ignition[1141]: INFO : Stage: umount Apr 21 10:06:21.326483 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:06:21.326483 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 10:06:21.326483 ignition[1141]: INFO : umount: umount passed Apr 21 10:06:21.326483 ignition[1141]: INFO : Ignition finished successfully Apr 21 10:06:21.274868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 10:06:21.300422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:06:21.314387 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:06:21.314563 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:06:21.331540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:06:21.338485 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:06:21.338631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:06:21.350415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:06:21.350518 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:06:21.366352 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:06:21.366446 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 21 10:06:21.376071 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:06:21.376614 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:06:21.376713 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:06:21.387033 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:06:21.387089 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:06:21.397163 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 21 10:06:21.397236 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 21 10:06:21.401585 systemd[1]: Stopped target network.target - Network. Apr 21 10:06:21.410183 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:06:21.410238 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:06:21.419815 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:06:21.429882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 10:06:21.438216 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:06:21.443689 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:06:21.452875 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:06:21.465747 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 10:06:21.465806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:06:21.473941 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:06:21.473978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:06:21.484259 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 10:06:21.484307 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:06:21.496794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Apr 21 10:06:21.496846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:06:21.506591 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:06:21.515411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:06:21.525073 systemd-networkd[889]: eth0: DHCPv6 lease lost
Apr 21 10:06:21.526629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:06:21.526737 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:06:21.536630 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:06:21.536720 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:06:21.542991 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:06:21.543081 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:06:21.554665 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:06:21.554945 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:06:21.770363 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: Data path switched from VF: enP9907s1
Apr 21 10:06:21.571098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:06:21.571156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:06:21.579597 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:06:21.579660 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:06:21.608595 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:06:21.615059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:06:21.615133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:06:21.625039 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:06:21.625092 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:06:21.636057 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:06:21.636108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:06:21.645479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:06:21.645519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:06:21.655767 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:06:21.698710 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:06:21.698886 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:06:21.710396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:06:21.710498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:06:21.719896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:06:21.719938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:06:21.729378 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:06:21.729426 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:06:21.744332 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:06:21.744380 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:06:21.766203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:06:21.766291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:21.795423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:06:21.806651 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:06:21.806732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:06:21.817526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:06:21.817572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:21.829405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:06:21.829539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:06:21.892101 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:06:21.986933 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:06:21.892247 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:06:21.901591 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:06:21.925367 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:06:21.936495 systemd[1]: Switching root.
Apr 21 10:06:22.003503 systemd-journald[217]: Journal stopped
Apr 21 10:06:14.199908 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 21 10:06:14.199929 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 10:06:14.199936 kernel: KASLR enabled
Apr 21 10:06:14.199942 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 21 10:06:14.199950 kernel: printk: bootconsole [pl11] enabled
Apr 21 10:06:14.199955 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:06:14.199962 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 21 10:06:14.199968 kernel: random: crng init done
Apr 21 10:06:14.199974 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:06:14.199980 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 21 10:06:14.199986 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.199992 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200000 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 21 10:06:14.200006 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200014 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200020 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200027 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200034 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200041 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200047 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 21 10:06:14.200054 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 10:06:14.200060 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 21 10:06:14.200066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 21 10:06:14.200073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 21 10:06:14.200079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 21 10:06:14.200085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 21 10:06:14.200092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 21 10:06:14.200098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 21 10:06:14.200106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 21 10:06:14.200112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 21 10:06:14.200118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 21 10:06:14.200125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 21 10:06:14.200131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 21 10:06:14.200137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 21 10:06:14.200144 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 21 10:06:14.200150 kernel: Zone ranges:
Apr 21 10:06:14.200156 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 21 10:06:14.200162 kernel: DMA32 empty
Apr 21 10:06:14.200168 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 21 10:06:14.200175 kernel: Movable zone start for each node
Apr 21 10:06:14.200185 kernel: Early memory node ranges
Apr 21 10:06:14.200192 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 21 10:06:14.200199 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 21 10:06:14.200206 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 21 10:06:14.200212 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 21 10:06:14.200220 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 21 10:06:14.200227 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 21 10:06:14.200234 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 21 10:06:14.200241 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 21 10:06:14.200247 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 21 10:06:14.200254 kernel: psci: probing for conduit method from ACPI.
Apr 21 10:06:14.200261 kernel: psci: PSCIv1.1 detected in firmware.
Apr 21 10:06:14.200267 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 10:06:14.200274 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 21 10:06:14.200280 kernel: psci: SMC Calling Convention v1.4
Apr 21 10:06:14.200287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 21 10:06:14.200294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 21 10:06:14.200302 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 10:06:14.200309 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 10:06:14.202331 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 10:06:14.202348 kernel: Detected PIPT I-cache on CPU0
Apr 21 10:06:14.202356 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 10:06:14.202363 kernel: CPU features: detected: Hardware dirty bit management
Apr 21 10:06:14.202370 kernel: CPU features: detected: Spectre-BHB
Apr 21 10:06:14.202377 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 21 10:06:14.202384 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 21 10:06:14.202391 kernel: CPU features: detected: ARM erratum 1418040
Apr 21 10:06:14.202398 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 21 10:06:14.202408 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 21 10:06:14.202415 kernel: alternatives: applying boot alternatives
Apr 21 10:06:14.202424 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:06:14.202431 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:06:14.202438 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:06:14.202445 kernel: Fallback order for Node 0: 0
Apr 21 10:06:14.202451 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 21 10:06:14.202458 kernel: Policy zone: Normal
Apr 21 10:06:14.202465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:06:14.202472 kernel: software IO TLB: area num 2.
Apr 21 10:06:14.202479 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 21 10:06:14.202487 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Apr 21 10:06:14.202494 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:06:14.202501 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:06:14.202520 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:06:14.202527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:06:14.202534 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:06:14.202541 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:06:14.202548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:06:14.202555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:06:14.202561 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 10:06:14.202568 kernel: GICv3: 960 SPIs implemented
Apr 21 10:06:14.202576 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 10:06:14.202583 kernel: Root IRQ handler: gic_handle_irq
Apr 21 10:06:14.202590 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 21 10:06:14.202597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 21 10:06:14.202604 kernel: ITS: No ITS available, not enabling LPIs
Apr 21 10:06:14.202611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:06:14.202618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 10:06:14.202625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 21 10:06:14.202632 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 21 10:06:14.202639 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 21 10:06:14.202646 kernel: Console: colour dummy device 80x25
Apr 21 10:06:14.202655 kernel: printk: console [tty1] enabled
Apr 21 10:06:14.202662 kernel: ACPI: Core revision 20230628
Apr 21 10:06:14.202669 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 21 10:06:14.202676 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:06:14.202683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:06:14.202690 kernel: landlock: Up and running.
Apr 21 10:06:14.202697 kernel: SELinux: Initializing.
Apr 21 10:06:14.202704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.202711 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.202719 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:06:14.202726 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:06:14.202734 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 21 10:06:14.202741 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 21 10:06:14.202747 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 21 10:06:14.202754 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:06:14.202761 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:06:14.202768 kernel: Remapping and enabling EFI services.
Apr 21 10:06:14.202781 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:06:14.202789 kernel: Detected PIPT I-cache on CPU1
Apr 21 10:06:14.202796 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 21 10:06:14.202803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 10:06:14.202812 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 21 10:06:14.202819 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:06:14.202827 kernel: SMP: Total of 2 processors activated.
Apr 21 10:06:14.202834 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 10:06:14.202841 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 21 10:06:14.202850 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 21 10:06:14.202858 kernel: CPU features: detected: CRC32 instructions
Apr 21 10:06:14.202865 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 21 10:06:14.202873 kernel: CPU features: detected: LSE atomic instructions
Apr 21 10:06:14.202880 kernel: CPU features: detected: Privileged Access Never
Apr 21 10:06:14.202887 kernel: CPU: All CPU(s) started at EL1
Apr 21 10:06:14.202894 kernel: alternatives: applying system-wide alternatives
Apr 21 10:06:14.202902 kernel: devtmpfs: initialized
Apr 21 10:06:14.202909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:06:14.202918 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:06:14.202925 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:06:14.202933 kernel: SMBIOS 3.1.0 present.
Apr 21 10:06:14.202940 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 21 10:06:14.202947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:06:14.202955 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 10:06:14.202962 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 10:06:14.202969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 10:06:14.202977 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:06:14.202986 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 21 10:06:14.202993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:06:14.203000 kernel: cpuidle: using governor menu
Apr 21 10:06:14.203008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 10:06:14.203015 kernel: ASID allocator initialised with 32768 entries
Apr 21 10:06:14.203022 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:06:14.203030 kernel: Serial: AMBA PL011 UART driver
Apr 21 10:06:14.203037 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 21 10:06:14.203045 kernel: Modules: 0 pages in range for non-PLT usage
Apr 21 10:06:14.203053 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 10:06:14.203061 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:06:14.203076 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 10:06:14.203090 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:06:14.203105 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 10:06:14.203112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 10:06:14.203121 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:06:14.203128 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:06:14.203135 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:06:14.203143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:06:14.203150 kernel: ACPI: Interpreter enabled
Apr 21 10:06:14.203157 kernel: ACPI: Using GIC for interrupt routing
Apr 21 10:06:14.203165 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 21 10:06:14.203172 kernel: printk: console [ttyAMA0] enabled
Apr 21 10:06:14.203179 kernel: printk: bootconsole [pl11] disabled
Apr 21 10:06:14.203188 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 21 10:06:14.203195 kernel: iommu: Default domain type: Translated
Apr 21 10:06:14.203202 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 10:06:14.203210 kernel: efivars: Registered efivars operations
Apr 21 10:06:14.203217 kernel: vgaarb: loaded
Apr 21 10:06:14.203224 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 10:06:14.203232 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:06:14.203239 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:06:14.203246 kernel: pnp: PnP ACPI init
Apr 21 10:06:14.203255 kernel: pnp: PnP ACPI: found 0 devices
Apr 21 10:06:14.203262 kernel: NET: Registered PF_INET protocol family
Apr 21 10:06:14.203269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:06:14.203277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:06:14.203284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:06:14.203291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:06:14.203299 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:06:14.203306 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:06:14.203313 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.205368 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:06:14.205380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:06:14.205389 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:06:14.205396 kernel: kvm [1]: HYP mode not available
Apr 21 10:06:14.205404 kernel: Initialise system trusted keyrings
Apr 21 10:06:14.205412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:06:14.205419 kernel: Key type asymmetric registered
Apr 21 10:06:14.205427 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:06:14.205434 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 10:06:14.205444 kernel: io scheduler mq-deadline registered
Apr 21 10:06:14.205451 kernel: io scheduler kyber registered
Apr 21 10:06:14.205458 kernel: io scheduler bfq registered
Apr 21 10:06:14.205466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:06:14.205473 kernel: thunder_xcv, ver 1.0
Apr 21 10:06:14.205480 kernel: thunder_bgx, ver 1.0
Apr 21 10:06:14.205488 kernel: nicpf, ver 1.0
Apr 21 10:06:14.205495 kernel: nicvf, ver 1.0
Apr 21 10:06:14.205647 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 21 10:06:14.205723 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T10:06:13 UTC (1776765973)
Apr 21 10:06:14.205733 kernel: efifb: probing for efifb
Apr 21 10:06:14.205741 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 21 10:06:14.205748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 21 10:06:14.205755 kernel: efifb: scrolling: redraw
Apr 21 10:06:14.205763 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:06:14.205770 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 10:06:14.205778 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:06:14.205788 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 21 10:06:14.205795 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 21 10:06:14.205803 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 21 10:06:14.205810 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 21 10:06:14.205817 kernel: watchdog: Hard watchdog permanently disabled
Apr 21 10:06:14.205825 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:06:14.205832 kernel: Segment Routing with IPv6
Apr 21 10:06:14.205839 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:06:14.205847 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:06:14.205856 kernel: Key type dns_resolver registered
Apr 21 10:06:14.205863 kernel: registered taskstats version 1
Apr 21 10:06:14.205870 kernel: Loading compiled-in X.509 certificates
Apr 21 10:06:14.205878 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4'
Apr 21 10:06:14.205885 kernel: Key type .fscrypt registered
Apr 21 10:06:14.205893 kernel: Key type fscrypt-provisioning registered
Apr 21 10:06:14.205900 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:06:14.205907 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:06:14.205915 kernel: ima: No architecture policies found
Apr 21 10:06:14.205924 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 21 10:06:14.205931 kernel: clk: Disabling unused clocks
Apr 21 10:06:14.205939 kernel: Freeing unused kernel memory: 39424K
Apr 21 10:06:14.205946 kernel: Run /init as init process
Apr 21 10:06:14.205953 kernel: with arguments:
Apr 21 10:06:14.205961 kernel: /init
Apr 21 10:06:14.205968 kernel: with environment:
Apr 21 10:06:14.205975 kernel: HOME=/
Apr 21 10:06:14.205982 kernel: TERM=linux
Apr 21 10:06:14.205992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:06:14.206003 systemd[1]: Detected virtualization microsoft.
Apr 21 10:06:14.206012 systemd[1]: Detected architecture arm64.
Apr 21 10:06:14.206020 systemd[1]: Running in initrd.
Apr 21 10:06:14.206027 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:06:14.206035 systemd[1]: Hostname set to .
Apr 21 10:06:14.206043 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:06:14.206053 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:06:14.206061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:06:14.206069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:06:14.206078 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:06:14.206087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:06:14.206095 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:06:14.206103 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:06:14.206112 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:06:14.206122 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:06:14.206130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:06:14.206138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:06:14.206147 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:06:14.206154 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:06:14.206162 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:06:14.206170 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:06:14.206178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:06:14.206188 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:06:14.206196 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:06:14.206204 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:06:14.206212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:06:14.206220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:06:14.206228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:06:14.206236 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:06:14.206244 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:06:14.206254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:06:14.206262 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:06:14.206270 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:06:14.206278 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:06:14.206286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:06:14.206311 systemd-journald[217]: Collecting audit messages is disabled.
Apr 21 10:06:14.206347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:14.206356 systemd-journald[217]: Journal started
Apr 21 10:06:14.206375 systemd-journald[217]: Runtime Journal (/run/log/journal/5e090497f3114640aee79e408ee8f4cc) is 8.0M, max 78.5M, 70.5M free.
Apr 21 10:06:14.207838 systemd-modules-load[218]: Inserted module 'overlay'
Apr 21 10:06:14.221267 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:06:14.224640 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:06:14.234092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:06:14.261145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:06:14.261169 kernel: Bridge firewalling registered
Apr 21 10:06:14.258520 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:06:14.261132 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 21 10:06:14.268426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:06:14.277035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:14.295666 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:06:14.310494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:06:14.318589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:06:14.347468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:06:14.353399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:14.364865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:06:14.370190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:06:14.384344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:06:14.406601 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:06:14.418752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:06:14.432456 dracut-cmdline[251]: dracut-dracut-053
Apr 21 10:06:14.443590 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 10:06:14.435545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:06:14.483351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:06:14.504690 systemd-resolved[254]: Positive Trust Anchors:
Apr 21 10:06:14.504704 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:06:14.504737 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:06:14.506947 systemd-resolved[254]: Defaulting to hostname 'linux'.
Apr 21 10:06:14.507725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:06:14.551368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:06:14.574328 kernel: SCSI subsystem initialized
Apr 21 10:06:14.581327 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:06:14.591363 kernel: iscsi: registered transport (tcp)
Apr 21 10:06:14.607711 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:06:14.607755 kernel: QLogic iSCSI HBA Driver
Apr 21 10:06:14.646528 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:06:14.661432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:06:14.691656 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:06:14.691707 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:06:14.697516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:06:14.745343 kernel: raid6: neonx8 gen() 15801 MB/s Apr 21 10:06:14.764333 kernel: raid6: neonx4 gen() 15691 MB/s Apr 21 10:06:14.783339 kernel: raid6: neonx2 gen() 13234 MB/s Apr 21 10:06:14.803334 kernel: raid6: neonx1 gen() 10555 MB/s Apr 21 10:06:14.822329 kernel: raid6: int64x8 gen() 6975 MB/s Apr 21 10:06:14.841349 kernel: raid6: int64x4 gen() 7372 MB/s Apr 21 10:06:14.861349 kernel: raid6: int64x2 gen() 6146 MB/s Apr 21 10:06:14.883109 kernel: raid6: int64x1 gen() 5069 MB/s Apr 21 10:06:14.883162 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s Apr 21 10:06:14.905185 kernel: raid6: .... xor() 12028 MB/s, rmw enabled Apr 21 10:06:14.905235 kernel: raid6: using neon recovery algorithm Apr 21 10:06:14.912329 kernel: xor: measuring software checksum speed Apr 21 10:06:14.918190 kernel: 8regs : 18955 MB/sec Apr 21 10:06:14.918233 kernel: 32regs : 19636 MB/sec Apr 21 10:06:14.925139 kernel: arm64_neon : 26238 MB/sec Apr 21 10:06:14.925154 kernel: xor: using function: arm64_neon (26238 MB/sec) Apr 21 10:06:14.974341 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:06:14.984135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:06:14.996445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:06:15.015542 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 21 10:06:15.019993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:06:15.035566 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:06:15.050191 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Apr 21 10:06:15.079295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 21 10:06:15.092884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:06:15.130306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:06:15.144532 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:06:15.166118 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:06:15.173933 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:06:15.192759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:06:15.205579 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:06:15.222485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:06:15.242790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:06:15.246756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:15.257331 kernel: hv_vmbus: Vmbus version:5.3
Apr 21 10:06:15.264116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:06:15.271410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:06:15.279504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:15.294195 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.846148 kernel: hv_vmbus: registering driver hv_storvsc
Apr 21 10:06:15.846171 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 21 10:06:15.846181 kernel: hv_vmbus: registering driver hid_hyperv
Apr 21 10:06:15.846190 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 21 10:06:15.846217 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 21 10:06:15.846232 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 21 10:06:15.846242 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 21 10:06:15.846252 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 21 10:06:15.846408 kernel: scsi host0: storvsc_host_t
Apr 21 10:06:15.846505 kernel: hv_vmbus: registering driver hv_netvsc
Apr 21 10:06:15.846515 kernel: scsi host1: storvsc_host_t
Apr 21 10:06:15.846599 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 21 10:06:15.846699 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 21 10:06:15.846793 kernel: PTP clock support registered
Apr 21 10:06:15.846804 kernel: hv_utils: Registering HyperV Utility Driver
Apr 21 10:06:15.846813 kernel: hv_vmbus: registering driver hv_utils
Apr 21 10:06:15.846822 kernel: hv_utils: Heartbeat IC version 3.0
Apr 21 10:06:15.846831 kernel: hv_utils: Shutdown IC version 3.2
Apr 21 10:06:15.846840 kernel: hv_utils: TimeSync IC version 4.0
Apr 21 10:06:15.846852 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 21 10:06:15.846936 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 10:06:15.846946 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 21 10:06:15.355610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.874124 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 21 10:06:15.874357 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 21 10:06:15.874448 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 21 10:06:15.874530 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 21 10:06:15.874612 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:06:15.874627 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 21 10:06:15.874711 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 21 10:06:15.817479 systemd-resolved[254]: Clock change detected. Flushing caches.
Apr 21 10:06:15.887995 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:06:15.905921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:15.923143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:06:15.928445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 10:06:15.923296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:15.938707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.954472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:15.972771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:15.991842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 10:06:15.994425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:06:16.026008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:16.056075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 21 10:06:16.079221 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494)
Apr 21 10:06:16.093276 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (499)
Apr 21 10:06:16.100807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 21 10:06:16.116620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 21 10:06:16.126561 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 21 10:06:16.137793 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 21 10:06:16.157422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:06:16.179225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:06:16.187220 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:06:17.197214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:06:17.198249 disk-uuid[592]: The operation has completed successfully.
Apr 21 10:06:17.265429 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:06:17.267225 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:06:17.302352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:06:17.315091 sh[705]: Success
Apr 21 10:06:17.335700 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 21 10:06:17.410273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:06:17.430335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:06:17.435423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:06:17.470893 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8
Apr 21 10:06:17.470953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 21 10:06:17.477639 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:06:17.481856 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:06:17.485510 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:06:17.540905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:06:17.545367 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:06:17.563520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:06:17.574868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:06:17.609154 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 10:06:17.609224 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 10:06:17.613266 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:06:17.629370 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:06:17.642148 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:06:17.646625 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 10:06:17.654706 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:06:17.668747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:06:17.691715 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:06:17.710425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:06:17.731534 systemd-networkd[889]: lo: Link UP
Apr 21 10:06:17.731542 systemd-networkd[889]: lo: Gained carrier
Apr 21 10:06:17.732293 systemd-networkd[889]: Enumeration completed
Apr 21 10:06:17.732620 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:06:17.732622 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:06:17.733298 systemd-networkd[889]: eth0: Link UP
Apr 21 10:06:17.733416 systemd-networkd[889]: eth0: Gained carrier
Apr 21 10:06:17.733423 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:06:17.735502 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:06:17.748529 systemd[1]: Reached target network.target - Network.
Apr 21 10:06:17.775242 systemd-networkd[889]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 10:06:17.895663 ignition[872]: Ignition 2.19.0
Apr 21 10:06:17.895673 ignition[872]: Stage: fetch-offline
Apr 21 10:06:17.899830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:06:17.895709 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:17.895716 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:17.895800 ignition[872]: parsed url from cmdline: ""
Apr 21 10:06:17.895803 ignition[872]: no config URL provided
Apr 21 10:06:17.895808 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:06:17.938652 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: VF slot 1 added
Apr 21 10:06:17.921454 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:06:17.895814 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:06:17.971686 kernel: hv_vmbus: registering driver hv_pci
Apr 21 10:06:17.971709 kernel: hv_pci a8c5028a-26b3-4d40-8cfd-e2d8a4c94150: PCI VMBus probing: Using version 0x10004
Apr 21 10:06:17.971879 kernel: hv_pci a8c5028a-26b3-4d40-8cfd-e2d8a4c94150: PCI host bridge to bus 26b3:00
Apr 21 10:06:17.971958 kernel: pci_bus 26b3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 21 10:06:17.972055 kernel: pci_bus 26b3:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 21 10:06:17.895819 ignition[872]: failed to fetch config: resource requires networking
Apr 21 10:06:17.980195 kernel: pci 26b3:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 21 10:06:17.896296 ignition[872]: Ignition finished successfully
Apr 21 10:06:17.988309 kernel: pci 26b3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 21 10:06:17.956056 ignition[898]: Ignition 2.19.0
Apr 21 10:06:17.994233 kernel: pci 26b3:00:02.0: enabling Extended Tags
Apr 21 10:06:17.956063 ignition[898]: Stage: fetch
Apr 21 10:06:17.956310 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:17.956319 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:17.960992 ignition[898]: parsed url from cmdline: ""
Apr 21 10:06:17.960998 ignition[898]: no config URL provided
Apr 21 10:06:18.023985 kernel: pci 26b3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 26b3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 21 10:06:18.024174 kernel: pci_bus 26b3:00: busn_res: [bus 00-ff] end is updated to 00
Apr 21 10:06:18.024275 kernel: pci 26b3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 21 10:06:17.961008 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:06:17.961017 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:06:17.961037 ignition[898]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 21 10:06:18.058413 ignition[898]: GET result: OK
Apr 21 10:06:18.058494 ignition[898]: config has been read from IMDS userdata
Apr 21 10:06:18.058536 ignition[898]: parsing config with SHA512: a4ca9917e844b90c636e863f9c3497163a7aec661435abd8881b2381c56d13cb42b0d5cfb1d89f8502ca5a67ec36d2c8b10aefc9383f06cb7b40e9ad6c670cd2
Apr 21 10:06:18.068895 unknown[898]: fetched base config from "system"
Apr 21 10:06:18.069261 ignition[898]: fetch: fetch complete
Apr 21 10:06:18.068902 unknown[898]: fetched base config from "system"
Apr 21 10:06:18.069265 ignition[898]: fetch: fetch passed
Apr 21 10:06:18.068906 unknown[898]: fetched user config from "azure"
Apr 21 10:06:18.069310 ignition[898]: Ignition finished successfully
Apr 21 10:06:18.071113 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:06:18.111167 kernel: mlx5_core 26b3:00:02.0: enabling device (0000 -> 0002)
Apr 21 10:06:18.111375 kernel: mlx5_core 26b3:00:02.0: firmware version: 16.30.5026
Apr 21 10:06:18.111620 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:06:18.130044 ignition[904]: Ignition 2.19.0
Apr 21 10:06:18.132623 ignition[904]: Stage: kargs
Apr 21 10:06:18.132815 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:18.137629 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:06:18.132825 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:18.133678 ignition[904]: kargs: kargs passed
Apr 21 10:06:18.133721 ignition[904]: Ignition finished successfully
Apr 21 10:06:18.162369 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:06:18.187047 ignition[913]: Ignition 2.19.0
Apr 21 10:06:18.187059 ignition[913]: Stage: disks
Apr 21 10:06:18.188103 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:18.188136 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:18.196843 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:06:18.192925 ignition[913]: disks: disks passed
Apr 21 10:06:18.206492 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:06:18.192969 ignition[913]: Ignition finished successfully
Apr 21 10:06:18.215488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:06:18.228387 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:06:18.236092 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:06:18.245360 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:06:18.267517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:06:18.300074 systemd-fsck[926]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 21 10:06:18.309329 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:06:18.327452 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:06:18.357212 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: VF registering: eth1
Apr 21 10:06:18.357406 kernel: mlx5_core 26b3:00:02.0 eth1: joined to eth0
Apr 21 10:06:18.357522 kernel: mlx5_core 26b3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 21 10:06:18.373224 kernel: mlx5_core 26b3:00:02.0 enP9907s1: renamed from eth1
Apr 21 10:06:18.379348 systemd-networkd[889]: eth1: Interface name change detected, renamed to enP9907s1.
Apr 21 10:06:18.399214 kernel: EXT4-fs (sda9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none.
Apr 21 10:06:18.399968 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:06:18.403732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:06:18.430301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:06:18.440669 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:06:18.446363 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 10:06:18.456358 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:06:18.456391 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:06:18.484520 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:06:18.506744 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Apr 21 10:06:18.506794 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 10:06:18.514211 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 10:06:18.514244 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:06:18.514002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:06:18.528306 kernel: mlx5_core 26b3:00:02.0 enP9907s1: Link up
Apr 21 10:06:18.523697 systemd-networkd[889]: enP9907s1: Link UP
Apr 21 10:06:18.537221 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:06:18.538781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:06:18.564209 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: Data path switched to VF: enP9907s1
Apr 21 10:06:18.661284 coreos-metadata[943]: Apr 21 10:06:18.661 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 10:06:18.667752 coreos-metadata[943]: Apr 21 10:06:18.667 INFO Fetch successful
Apr 21 10:06:18.667752 coreos-metadata[943]: Apr 21 10:06:18.667 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 21 10:06:18.681424 coreos-metadata[943]: Apr 21 10:06:18.681 INFO Fetch successful
Apr 21 10:06:18.686157 coreos-metadata[943]: Apr 21 10:06:18.686 INFO wrote hostname ci-4081.3.7-a-75af1c63bf to /sysroot/etc/hostname
Apr 21 10:06:18.693829 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:06:18.715901 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:06:18.729770 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:06:18.737995 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:06:18.742775 systemd-networkd[889]: enP9907s1: Gained carrier
Apr 21 10:06:18.748832 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:06:18.826357 systemd-networkd[889]: eth0: Gained IPv6LL
Apr 21 10:06:19.035439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:06:19.046427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:06:19.056162 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:06:19.090304 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 10:06:19.090282 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:06:19.117573 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:06:19.141377 ignition[1060]: INFO : Ignition 2.19.0
Apr 21 10:06:19.141377 ignition[1060]: INFO : Stage: mount
Apr 21 10:06:19.152409 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:19.152409 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:19.152409 ignition[1060]: INFO : mount: mount passed
Apr 21 10:06:19.152409 ignition[1060]: INFO : Ignition finished successfully
Apr 21 10:06:19.149362 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:06:19.171325 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:06:19.189982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:06:19.231306 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1071)
Apr 21 10:06:19.241695 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 10:06:19.241734 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 10:06:19.245491 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:06:19.253215 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:06:19.254812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:06:19.283184 ignition[1088]: INFO : Ignition 2.19.0
Apr 21 10:06:19.287870 ignition[1088]: INFO : Stage: files
Apr 21 10:06:19.287870 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:19.287870 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:19.287870 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:06:19.306180 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:06:19.306180 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:06:19.328053 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:06:19.333995 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:06:19.340329 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:06:19.334049 unknown[1088]: wrote ssh authorized keys file for user: core
Apr 21 10:06:19.353742 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 10:06:19.362313 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 21 10:06:19.385125 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:06:19.451775 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 21 10:06:19.460273 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 21 10:06:19.955990 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:06:20.790748 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 21 10:06:20.800955 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:06:20.806465 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:06:20.816908 ignition[1088]: INFO : files: files passed
Apr 21 10:06:20.816908 ignition[1088]: INFO : Ignition finished successfully
Apr 21 10:06:20.819507 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:06:20.849485 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:06:20.864364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:06:20.883174 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:06:20.883285 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:06:20.915020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.915020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.929172 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:06:20.936498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:06:20.942570 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:06:20.963441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:06:20.991033 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:06:20.993239 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:06:21.002154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:06:21.011798 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:06:21.021048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:06:21.034451 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:06:21.053596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:06:21.065467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:06:21.080837 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:06:21.085937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 21 10:06:21.096483 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:06:21.105147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:06:21.105273 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:06:21.118391 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:06:21.123026 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:06:21.132020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:06:21.141619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:06:21.150234 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:06:21.159577 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:06:21.168890 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:06:21.178800 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:06:21.187597 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:06:21.197120 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:06:21.205354 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:06:21.205466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:06:21.217110 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:06:21.221876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:06:21.230906 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:06:21.235472 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:06:21.240944 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:06:21.241050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:06:21.255284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:06:21.255469 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:06:21.264876 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:06:21.264965 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:06:21.274781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 10:06:21.326483 ignition[1141]: INFO : Ignition 2.19.0
Apr 21 10:06:21.326483 ignition[1141]: INFO : Stage: umount
Apr 21 10:06:21.326483 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:06:21.326483 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 10:06:21.326483 ignition[1141]: INFO : umount: umount passed
Apr 21 10:06:21.326483 ignition[1141]: INFO : Ignition finished successfully
Apr 21 10:06:21.274868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:06:21.300422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:06:21.314387 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:06:21.314563 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:06:21.331540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:06:21.338485 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:06:21.338631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:06:21.350415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:06:21.350518 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:06:21.366352 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:06:21.366446 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:06:21.376071 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:06:21.376614 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:06:21.376713 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:06:21.387033 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:06:21.387089 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:06:21.397163 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:06:21.397236 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:06:21.401585 systemd[1]: Stopped target network.target - Network.
Apr 21 10:06:21.410183 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:06:21.410238 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:06:21.419815 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:06:21.429882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:06:21.438216 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:06:21.443689 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:06:21.452875 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:06:21.465747 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:06:21.465806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:06:21.473941 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:06:21.473978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:06:21.484259 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:06:21.484307 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:06:21.496794 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:06:21.496846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:06:21.506591 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:06:21.515411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:06:21.525073 systemd-networkd[889]: eth0: DHCPv6 lease lost
Apr 21 10:06:21.526629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:06:21.526737 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:06:21.536630 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:06:21.536720 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:06:21.542991 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:06:21.543081 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:06:21.554665 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:06:21.554945 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:06:21.770363 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: Data path switched from VF: enP9907s1
Apr 21 10:06:21.571098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:06:21.571156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:06:21.579597 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:06:21.579660 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:06:21.608595 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:06:21.615059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:06:21.615133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:06:21.625039 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:06:21.625092 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:06:21.636057 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:06:21.636108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:06:21.645479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:06:21.645519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:06:21.655767 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:06:21.698710 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:06:21.698886 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:06:21.710396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:06:21.710498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:06:21.719896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:06:21.719938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:06:21.729378 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:06:21.729426 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:06:21.744332 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:06:21.744380 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:06:21.766203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:06:21.766291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:06:21.795423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:06:21.806651 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:06:21.806732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:06:21.817526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:06:21.817572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:21.829405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:06:21.829539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:06:21.892101 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:06:21.986933 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:06:21.892247 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:06:21.901591 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:06:21.925367 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:06:21.936495 systemd[1]: Switching root.
Apr 21 10:06:22.003503 systemd-journald[217]: Journal stopped
Apr 21 10:06:24.046170 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:06:24.046192 kernel: SELinux: policy capability open_perms=1
Apr 21 10:06:24.046213 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:06:24.046223 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:06:24.046233 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:06:24.046240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:06:24.046249 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:06:24.046259 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:06:24.046269 systemd[1]: Successfully loaded SELinux policy in 71.361ms.
Apr 21 10:06:24.046278 kernel: audit: type=1403 audit(1776765982.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:06:24.046289 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.298ms.
Apr 21 10:06:24.046299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:06:24.046307 systemd[1]: Detected virtualization microsoft.
Apr 21 10:06:24.046316 systemd[1]: Detected architecture arm64.
Apr 21 10:06:24.046325 systemd[1]: Detected first boot.
Apr 21 10:06:24.046337 systemd[1]: Hostname set to .
Apr 21 10:06:24.046346 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:06:24.046354 zram_generator::config[1183]: No configuration found.
Apr 21 10:06:24.046364 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:06:24.046373 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:06:24.046382 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:06:24.046391 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:06:24.046402 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:06:24.046411 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:06:24.046421 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:06:24.046430 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:06:24.046439 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:06:24.046448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:06:24.046458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:06:24.046469 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:06:24.046478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:06:24.046488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:06:24.046497 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:06:24.046506 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:06:24.046515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:06:24.046525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:06:24.046534 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 21 10:06:24.046544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:06:24.046553 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:06:24.046563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:06:24.046574 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:06:24.046583 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:06:24.046593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:06:24.046602 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:06:24.046611 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:06:24.046622 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:06:24.046631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:06:24.046641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:06:24.046650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:06:24.046660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:06:24.046670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:06:24.046681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:06:24.046691 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:06:24.046700 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:06:24.046710 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:06:24.046719 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:06:24.046729 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:06:24.046738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:06:24.046750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:06:24.046760 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:06:24.046769 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:06:24.046779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:06:24.046788 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:06:24.046798 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:06:24.046807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:06:24.046817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:06:24.046828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:06:24.046837 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:06:24.046847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:06:24.046857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:06:24.046867 kernel: ACPI: bus type drm_connector registered
Apr 21 10:06:24.046876 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:06:24.046885 kernel: fuse: init (API version 7.39)
Apr 21 10:06:24.046894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:06:24.046904 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:06:24.046915 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:06:24.046924 kernel: loop: module loaded
Apr 21 10:06:24.046933 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:06:24.046942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:06:24.046952 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:06:24.046977 systemd-journald[1286]: Collecting audit messages is disabled.
Apr 21 10:06:24.046999 systemd-journald[1286]: Journal started
Apr 21 10:06:24.047020 systemd-journald[1286]: Runtime Journal (/run/log/journal/6b170b665a704be7a01e26f924de405d) is 8.0M, max 78.5M, 70.5M free.
Apr 21 10:06:23.347015 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:06:23.390942 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:06:23.391285 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:06:23.391599 systemd[1]: systemd-journald.service: Consumed 2.598s CPU time.
Apr 21 10:06:24.059988 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:06:24.072245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:06:24.084380 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:06:24.084451 systemd[1]: Stopped verity-setup.service.
Apr 21 10:06:24.100226 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:06:24.100499 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:06:24.105337 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:06:24.110175 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:06:24.114742 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:06:24.120073 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:06:24.125525 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:06:24.129899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:06:24.135508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:06:24.141575 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:06:24.141709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:06:24.147447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:06:24.147581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:06:24.153293 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:06:24.153408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:06:24.158451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:06:24.158577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:06:24.164357 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:06:24.164482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:06:24.169652 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:06:24.169767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:06:24.174767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:06:24.180310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:06:24.185892 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:06:24.191778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:06:24.205631 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:06:24.217290 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:06:24.225329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:06:24.232371 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:06:24.232412 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:06:24.237934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:06:24.244684 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:06:24.250972 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:06:24.255595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:06:24.258520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:06:24.273733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:06:24.279336 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:06:24.280323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:06:24.287540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:06:24.292445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:06:24.308235 systemd-journald[1286]: Time spent on flushing to /var/log/journal/6b170b665a704be7a01e26f924de405d is 40.522ms for 892 entries.
Apr 21 10:06:24.308235 systemd-journald[1286]: System Journal (/var/log/journal/6b170b665a704be7a01e26f924de405d) is 8.0M, max 2.6G, 2.6G free.
Apr 21 10:06:24.384783 systemd-journald[1286]: Received client request to flush runtime journal.
Apr 21 10:06:24.384826 kernel: loop0: detected capacity change from 0 to 31320
Apr 21 10:06:24.302623 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:06:24.315658 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:06:24.323432 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:06:24.334903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:06:24.347320 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:06:24.355855 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:06:24.362774 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:06:24.372287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:06:24.388121 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:06:24.396100 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:06:24.404109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:06:24.416386 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:06:24.434079 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:06:24.449416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:06:24.467760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:06:24.482047 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:06:24.483379 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:06:24.503905 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Apr 21 10:06:24.504248 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Apr 21 10:06:24.508894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:06:24.516850 kernel: loop1: detected capacity change from 0 to 114328
Apr 21 10:06:24.618227 kernel: loop2: detected capacity change from 0 to 200864
Apr 21 10:06:24.674230 kernel: loop3: detected capacity change from 0 to 114432
Apr 21 10:06:24.773245 kernel: loop4: detected capacity change from 0 to 31320
Apr 21 10:06:24.787405 kernel: loop5: detected capacity change from 0 to 114328
Apr 21 10:06:24.805227 kernel: loop6: detected capacity change from 0 to 200864
Apr 21 10:06:24.837231 kernel: loop7: detected capacity change from 0 to 114432
Apr 21 10:06:24.864415 (sd-merge)[1342]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 21 10:06:24.866471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:06:24.869626 (sd-merge)[1342]: Merged extensions into '/usr'.
Apr 21 10:06:24.875542 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:06:24.875557 systemd[1]: Reloading...
Apr 21 10:06:24.941237 zram_generator::config[1371]: No configuration found.
Apr 21 10:06:25.057149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:06:25.112650 systemd[1]: Reloading finished in 236 ms.
Apr 21 10:06:25.140911 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:06:25.154406 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:06:25.161416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:06:25.169389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:06:25.184330 systemd[1]: Reloading requested from client PID 1423 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:06:25.184345 systemd[1]: Reloading...
Apr 21 10:06:25.203591 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:06:25.203855 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:06:25.206737 systemd-udevd[1425]: Using default interface naming scheme 'v255'.
Apr 21 10:06:25.208580 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:06:25.208820 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
Apr 21 10:06:25.208872 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
Apr 21 10:06:25.218453 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:06:25.218464 systemd-tmpfiles[1424]: Skipping /boot
Apr 21 10:06:25.230998 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:06:25.231014 systemd-tmpfiles[1424]: Skipping /boot
Apr 21 10:06:25.280474 zram_generator::config[1454]: No configuration found.
Apr 21 10:06:25.452939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:06:25.520808 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 21 10:06:25.520909 systemd[1]: Reloading finished in 336 ms.
Apr 21 10:06:25.532220 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:06:25.532300 kernel: hv_vmbus: registering driver hv_balloon
Apr 21 10:06:25.532317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 10:06:25.561499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:06:25.581317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:06:25.593757 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 21 10:06:25.593845 kernel: hv_vmbus: registering driver hyperv_fb
Apr 21 10:06:25.602310 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 21 10:06:25.623418 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 21 10:06:25.623494 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 21 10:06:25.635814 kernel: Console: switching to colour dummy device 80x25
Apr 21 10:06:25.652223 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1455)
Apr 21 10:06:25.652308 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 10:06:25.657886 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:06:25.674614 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:06:25.681700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:06:25.688600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:06:25.707489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:06:25.719893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:06:25.728442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:06:25.738593 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:06:25.751619 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:06:25.767787 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:06:25.773937 augenrules[1606]: No rules
Apr 21 10:06:25.781182 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:06:25.791013 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:06:25.796881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:06:25.797265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:06:25.803029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:06:25.803404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:06:25.810235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:06:25.810375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:06:25.816498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:06:25.844735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:06:25.869233 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:06:25.881659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 21 10:06:25.888031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:06:25.892877 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:06:25.894545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:06:25.903374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:06:25.913399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:06:25.921385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:06:25.926073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:06:25.928519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:06:25.934175 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:06:25.942574 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:06:25.957165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:06:25.965770 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:06:25.971820 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:06:25.979261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:06:25.979407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:06:25.985717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:06:25.985851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:06:25.991783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:06:25.991916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:06:25.999274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:06:25.999604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:06:26.005678 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:06:26.013983 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:06:26.038504 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:06:26.048067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:06:26.048148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:06:26.055433 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:06:26.068261 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:06:26.078100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:06:26.093270 lvm[1643]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:06:26.094677 systemd-networkd[1602]: lo: Link UP
Apr 21 10:06:26.094689 systemd-networkd[1602]: lo: Gained carrier
Apr 21 10:06:26.098822 systemd-networkd[1602]: Enumeration completed
Apr 21 10:06:26.098945 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:06:26.104365 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:06:26.104375 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:06:26.105617 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:06:26.122505 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:06:26.128285 systemd-resolved[1604]: Positive Trust Anchors:
Apr 21 10:06:26.128299 systemd-resolved[1604]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:06:26.128331 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:06:26.131269 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:06:26.135412 systemd-resolved[1604]: Using system hostname 'ci-4081.3.7-a-75af1c63bf'.
Apr 21 10:06:26.139401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:06:26.152501 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:06:26.159821 lvm[1651]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:06:26.177700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:06:26.187515 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:06:26.192869 kernel: mlx5_core 26b3:00:02.0 enP9907s1: Link up
Apr 21 10:06:26.220462 kernel: hv_netvsc 000d3af7-1454-000d-3af7-1454000d3af7 eth0: Data path switched to VF: enP9907s1
Apr 21 10:06:26.221814 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:06:26.221826 systemd-networkd[1602]: enP9907s1: Link UP
Apr 21 10:06:26.221939 systemd-networkd[1602]: eth0: Link UP
Apr 21 10:06:26.221942 systemd-networkd[1602]: eth0: Gained carrier
Apr 21 10:06:26.221957 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:06:26.227160 systemd[1]: Reached target network.target - Network.
Apr 21 10:06:26.231186 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:06:26.236341 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:06:26.236810 systemd-networkd[1602]: enP9907s1: Gained carrier
Apr 21 10:06:26.241486 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:06:26.247112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:06:26.253023 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:06:26.257849 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:06:26.263661 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:06:26.264253 systemd-networkd[1602]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 10:06:26.269709 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:06:26.269746 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:06:26.273972 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:06:26.279931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:06:26.286215 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:06:26.294071 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:06:26.299301 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:06:26.304215 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:06:26.308778 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:06:26.314469 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:06:26.314501 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:06:26.335290 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 21 10:06:26.343353 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:06:26.358189 (chronyd)[1657]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 21 10:06:26.360409 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:06:26.367728 chronyd[1661]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 21 10:06:26.370351 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:06:26.375577 chronyd[1661]: Timezone right/UTC failed leap second check, ignoring
Apr 21 10:06:26.375775 chronyd[1661]: Loaded seccomp filter (level 2)
Apr 21 10:06:26.376829 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:06:26.385410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:06:26.391942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:06:26.392141 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 21 10:06:26.396546 jq[1665]: false
Apr 21 10:06:26.396803 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 21 10:06:26.404983 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 21 10:06:26.406113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:06:26.412741 KVP[1667]: KVP starting; pid is:1667
Apr 21 10:06:26.416782 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:06:26.432234 kernel: hv_utils: KVP IC version 4.0
Apr 21 10:06:26.431243 KVP[1667]: KVP LIC Version: 3.1
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found loop4
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found loop5
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found loop6
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found loop7
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda1
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda2
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda3
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found usr
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda4
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda6
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda7
Apr 21 10:06:26.432373 extend-filesystems[1666]: Found sda9
Apr 21 10:06:26.432373 extend-filesystems[1666]: Checking size of /dev/sda9
Apr 21 10:06:26.620728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1455)
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.520 INFO Fetch successful
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.520 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.525 INFO Fetch successful
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.525 INFO Fetching http://168.63.129.16/machine/33c904c7-76a0-4d71-821f-e5d8c483c5fa/239d80fb%2Dedab%2D4177%2Dbe1e%2D62dc48e09160.%5Fci%2D4081.3.7%2Da%2D75af1c63bf?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.537 INFO Fetch successful
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.537 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 21 10:06:26.620810 coreos-metadata[1659]: Apr 21 10:06:26.552 INFO Fetch successful
Apr 21 10:06:26.433126 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:06:26.448575 dbus-daemon[1662]: [system] SELinux support is enabled
Apr 21 10:06:26.636261 extend-filesystems[1666]: Old size kept for /dev/sda9
Apr 21 10:06:26.636261 extend-filesystems[1666]: Found sr0
Apr 21 10:06:26.446374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:06:26.471409 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:06:26.482455 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:06:26.488855 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:06:26.654993 update_engine[1688]: I20260421 10:06:26.583532 1688 main.cc:92] Flatcar Update Engine starting
Apr 21 10:06:26.654993 update_engine[1688]: I20260421 10:06:26.604760 1688 update_check_scheduler.cc:74] Next update check in 5m48s
Apr 21 10:06:26.492471 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:06:26.663725 jq[1691]: true
Apr 21 10:06:26.518336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:06:26.533532 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:06:26.544535 systemd[1]: Started chronyd.service - NTP client/server.
Apr 21 10:06:26.573473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:06:26.575531 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:06:26.575824 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:06:26.575960 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:06:26.630510 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:06:26.630711 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:06:26.645961 systemd-logind[1682]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Apr 21 10:06:26.646457 systemd-logind[1682]: New seat seat0.
Apr 21 10:06:26.646678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:06:26.646854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:06:26.663766 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:06:26.692024 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:06:26.699008 dbus-daemon[1662]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:06:26.699881 jq[1722]: true
Apr 21 10:06:26.705585 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:06:26.718934 tar[1714]: linux-arm64/LICENSE
Apr 21 10:06:26.721729 tar[1714]: linux-arm64/helm
Apr 21 10:06:26.732083 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:06:26.739191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:06:26.739594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:06:26.739716 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:06:26.749645 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:06:26.749758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:06:26.765865 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:06:26.809447 bash[1753]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:06:26.797454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:06:26.814848 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 10:06:26.905329 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:06:26.997856 containerd[1723]: time="2026-04-21T10:06:26.997772740Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:06:27.054182 containerd[1723]: time="2026-04-21T10:06:27.054128620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060527900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060570620Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060595940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060748660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060765060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060824380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060836700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.060995740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.061012020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.061025100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061607 containerd[1723]: time="2026-04-21T10:06:27.061034820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061097820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061298860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061393260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061407620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061481460Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:06:27.061879 containerd[1723]: time="2026-04-21T10:06:27.061519620Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:06:27.080034 containerd[1723]: time="2026-04-21T10:06:27.079964460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:06:27.080034 containerd[1723]: time="2026-04-21T10:06:27.080024140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:06:27.080034 containerd[1723]: time="2026-04-21T10:06:27.080041620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:06:27.080374 containerd[1723]: time="2026-04-21T10:06:27.080056780Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:06:27.080374 containerd[1723]: time="2026-04-21T10:06:27.080071380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:06:27.080374 containerd[1723]: time="2026-04-21T10:06:27.080251700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:06:27.080530 containerd[1723]: time="2026-04-21T10:06:27.080492900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:06:27.080659 containerd[1723]: time="2026-04-21T10:06:27.080594540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:06:27.080659 containerd[1723]: time="2026-04-21T10:06:27.080616100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:06:27.080659 containerd[1723]: time="2026-04-21T10:06:27.080631020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:06:27.080659 containerd[1723]: time="2026-04-21T10:06:27.080646180Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080659 containerd[1723]: time="2026-04-21T10:06:27.080659580Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080673380Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080688060Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080703820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080717460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080730060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080742620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080761740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080776300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080788460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080802220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080814500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080827020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.080845 containerd[1723]: time="2026-04-21T10:06:27.080838620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080867460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080882740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080897980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080910300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080923620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080935740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080951980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080971580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080983020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.080993700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.081044500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.081062860Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:06:27.081547 containerd[1723]: time="2026-04-21T10:06:27.081073620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:06:27.084300 containerd[1723]: time="2026-04-21T10:06:27.081087500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:06:27.084300 containerd[1723]: time="2026-04-21T10:06:27.081097340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.084300 containerd[1723]: time="2026-04-21T10:06:27.081109180Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:06:27.084300 containerd[1723]: time="2026-04-21T10:06:27.081119820Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:06:27.084300 containerd[1723]: time="2026-04-21T10:06:27.081131620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.081418980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.081476900Z" level=info msg="Connect containerd service"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.081509660Z" level=info msg="using legacy CRI server"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.081517380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.081601780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082138620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082257660Z" level=info msg="Start subscribing containerd event"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082320260Z" level=info msg="Start recovering state"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082387740Z" level=info msg="Start event monitor"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082398460Z" level=info msg="Start snapshots syncer"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082408820Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:06:27.084444 containerd[1723]: time="2026-04-21T10:06:27.082415980Z" level=info msg="Start streaming server"
Apr 21 10:06:27.090888 containerd[1723]: time="2026-04-21T10:06:27.090852380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:06:27.096556 containerd[1723]: time="2026-04-21T10:06:27.090909820Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:06:27.096556 containerd[1723]: time="2026-04-21T10:06:27.090973340Z" level=info msg="containerd successfully booted in 0.094592s"
Apr 21 10:06:27.091078 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:06:27.270276 tar[1714]: linux-arm64/README.md
Apr 21 10:06:27.280242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:06:27.530374 systemd-networkd[1602]: eth0: Gained IPv6LL
Apr 21 10:06:27.538278 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:06:27.545318 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:06:27.557794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:06:27.570159 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:06:27.606680 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:06:27.889524 sshd_keygen[1692]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:06:27.911216 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:06:27.923443 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:06:27.933526 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 21 10:06:27.945077 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:06:27.947488 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:06:27.963960 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:06:27.974703 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 21 10:06:27.992234 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:06:28.007874 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:06:28.018436 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 21 10:06:28.024801 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:06:28.324388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:06:28.331294 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:06:28.333105 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:06:28.342302 systemd[1]: Startup finished in 614ms (kernel) + 8.272s (initrd) + 5.908s (userspace) = 14.795s.
Apr 21 10:06:28.464244 login[1804]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Apr 21 10:06:28.464981 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:28.479085 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:06:28.485463 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:06:28.488861 systemd-logind[1682]: New session 1 of user core.
Apr 21 10:06:28.502289 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:06:28.510526 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:06:28.514786 (systemd)[1821]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:06:28.642072 systemd[1821]: Queued start job for default target default.target.
Apr 21 10:06:28.650733 systemd[1821]: Created slice app.slice - User Application Slice.
Apr 21 10:06:28.650761 systemd[1821]: Reached target paths.target - Paths.
Apr 21 10:06:28.650773 systemd[1821]: Reached target timers.target - Timers.
Apr 21 10:06:28.652605 systemd[1821]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:06:28.675892 systemd[1821]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:06:28.676481 systemd[1821]: Reached target sockets.target - Sockets.
Apr 21 10:06:28.676952 systemd[1821]: Reached target basic.target - Basic System.
Apr 21 10:06:28.677000 systemd[1821]: Reached target default.target - Main User Target.
Apr 21 10:06:28.677029 systemd[1821]: Startup finished in 156ms.
Apr 21 10:06:28.677325 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:06:28.682578 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:06:28.716057 waagent[1800]: 2026-04-21T10:06:28.713770Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 21 10:06:28.720634 waagent[1800]: 2026-04-21T10:06:28.720437Z INFO Daemon Daemon OS: flatcar 4081.3.7
Apr 21 10:06:28.725129 waagent[1800]: 2026-04-21T10:06:28.724497Z INFO Daemon Daemon Python: 3.11.9
Apr 21 10:06:28.729320 waagent[1800]: 2026-04-21T10:06:28.728962Z INFO Daemon Daemon Run daemon
Apr 21 10:06:28.733060 waagent[1800]: 2026-04-21T10:06:28.732292Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.7'
Apr 21 10:06:28.740113 waagent[1800]: 2026-04-21T10:06:28.739338Z INFO Daemon Daemon Using waagent for provisioning
Apr 21 10:06:28.743845 waagent[1800]: 2026-04-21T10:06:28.743793Z INFO Daemon Daemon Activate resource disk
Apr 21 10:06:28.748446 waagent[1800]: 2026-04-21T10:06:28.748302Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 21 10:06:28.759590 waagent[1800]: 2026-04-21T10:06:28.759523Z INFO Daemon Daemon Found device: None
Apr 21 10:06:28.764416 waagent[1800]: 2026-04-21T10:06:28.763928Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 21 10:06:28.771681 waagent[1800]: 2026-04-21T10:06:28.771606Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 21 10:06:28.783261 waagent[1800]: 2026-04-21T10:06:28.783030Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 10:06:28.787915 waagent[1800]: 2026-04-21T10:06:28.787734Z INFO Daemon Daemon Running default provisioning handler
Apr 21 10:06:28.799693 waagent[1800]: 2026-04-21T10:06:28.799194Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 21 10:06:28.811335 waagent[1800]: 2026-04-21T10:06:28.811263Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 21 10:06:28.820757 waagent[1800]: 2026-04-21T10:06:28.820483Z INFO Daemon Daemon cloud-init is enabled: False
Apr 21 10:06:28.825404 waagent[1800]: 2026-04-21T10:06:28.825326Z INFO Daemon Daemon Copying ovf-env.xml
Apr 21 10:06:28.868033 waagent[1800]: 2026-04-21T10:06:28.866941Z INFO Daemon Daemon Successfully mounted dvd
Apr 21 10:06:28.884818 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 21 10:06:28.888774 waagent[1800]: 2026-04-21T10:06:28.888690Z INFO Daemon Daemon Detect protocol endpoint
Apr 21 10:06:28.892941 waagent[1800]: 2026-04-21T10:06:28.892854Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 10:06:28.897453 waagent[1800]: 2026-04-21T10:06:28.897402Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 21 10:06:28.902728 waagent[1800]: 2026-04-21T10:06:28.902683Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 21 10:06:28.906932 waagent[1800]: 2026-04-21T10:06:28.906885Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 21 10:06:28.911032 waagent[1800]: 2026-04-21T10:06:28.910989Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 21 10:06:28.934506 waagent[1800]: 2026-04-21T10:06:28.934461Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 21 10:06:28.939922 waagent[1800]: 2026-04-21T10:06:28.939892Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 21 10:06:28.945743 kubelet[1810]: E0421 10:06:28.941083 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:06:28.945959 waagent[1800]: 2026-04-21T10:06:28.944181Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 21 10:06:28.949119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:06:28.949262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:06:29.077365 waagent[1800]: 2026-04-21T10:06:29.077268Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 21 10:06:29.082828 waagent[1800]: 2026-04-21T10:06:29.082773Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 21 10:06:29.090346 waagent[1800]: 2026-04-21T10:06:29.090298Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 10:06:29.108209 waagent[1800]: 2026-04-21T10:06:29.108165Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Apr 21 10:06:29.113296 waagent[1800]: 2026-04-21T10:06:29.113255Z INFO Daemon
Apr 21 10:06:29.115506 waagent[1800]: 2026-04-21T10:06:29.115470Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b3735736-cf0b-4514-9769-72081883005b eTag: 13639255991001642348 source: Fabric]
Apr 21 10:06:29.124519 waagent[1800]: 2026-04-21T10:06:29.124479Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 21 10:06:29.129893 waagent[1800]: 2026-04-21T10:06:29.129852Z INFO Daemon
Apr 21 10:06:29.132130 waagent[1800]: 2026-04-21T10:06:29.132096Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Apr 21 10:06:29.140822 waagent[1800]: 2026-04-21T10:06:29.140789Z INFO Daemon Daemon Downloading artifacts profile blob
Apr 21 10:06:29.219256 waagent[1800]: 2026-04-21T10:06:29.219104Z INFO Daemon Downloaded certificate {'thumbprint': '59E8924A185B113073017D3D80EEB93767042BC3', 'hasPrivateKey': True}
Apr 21 10:06:29.227012 waagent[1800]: 2026-04-21T10:06:29.226965Z INFO Daemon Fetch goal state completed
Apr 21 10:06:29.236971 waagent[1800]: 2026-04-21T10:06:29.236930Z INFO Daemon Daemon Starting provisioning
Apr 21 10:06:29.241038 waagent[1800]: 2026-04-21T10:06:29.240996Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 21 10:06:29.244758 waagent[1800]: 2026-04-21T10:06:29.244722Z INFO Daemon Daemon Set hostname [ci-4081.3.7-a-75af1c63bf]
Apr 21 10:06:29.255040 waagent[1800]: 2026-04-21T10:06:29.254984Z INFO Daemon Daemon Publish hostname [ci-4081.3.7-a-75af1c63bf]
Apr 21 10:06:29.260039 waagent[1800]: 2026-04-21T10:06:29.259989Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 21 10:06:29.265407 waagent[1800]: 2026-04-21T10:06:29.265365Z INFO Daemon Daemon Primary interface is [eth0]
Apr 21 10:06:29.285008 systemd-networkd[1602]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:06:29.285014 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:06:29.285057 systemd-networkd[1602]: eth0: DHCP lease lost
Apr 21 10:06:29.286560 waagent[1800]: 2026-04-21T10:06:29.286485Z INFO Daemon Daemon Create user account if not exists
Apr 21 10:06:29.291158 waagent[1800]: 2026-04-21T10:06:29.291112Z INFO Daemon Daemon User core already exists, skip useradd
Apr 21 10:06:29.292245 systemd-networkd[1602]: eth0: DHCPv6 lease lost
Apr 21 10:06:29.295842 waagent[1800]: 2026-04-21T10:06:29.295792Z INFO Daemon Daemon Configure sudoer
Apr 21 10:06:29.299682 waagent[1800]: 2026-04-21T10:06:29.299633Z INFO Daemon Daemon Configure sshd
Apr 21 10:06:29.303159 waagent[1800]: 2026-04-21T10:06:29.303112Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Apr 21 10:06:29.313942 waagent[1800]: 2026-04-21T10:06:29.313464Z INFO Daemon Daemon Deploy ssh public key.
Apr 21 10:06:29.323253 systemd-networkd[1602]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 10:06:29.465738 login[1804]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:29.469911 systemd-logind[1682]: New session 2 of user core.
Apr 21 10:06:29.476343 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:06:30.389942 waagent[1800]: 2026-04-21T10:06:30.389879Z INFO Daemon Daemon Provisioning complete
Apr 21 10:06:30.405192 waagent[1800]: 2026-04-21T10:06:30.405145Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 21 10:06:30.410156 waagent[1800]: 2026-04-21T10:06:30.410114Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 21 10:06:30.418107 waagent[1800]: 2026-04-21T10:06:30.418058Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 21 10:06:30.546884 waagent[1872]: 2026-04-21T10:06:30.546809Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 21 10:06:30.547880 waagent[1872]: 2026-04-21T10:06:30.547363Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.7
Apr 21 10:06:30.547880 waagent[1872]: 2026-04-21T10:06:30.547439Z INFO ExtHandler ExtHandler Python: 3.11.9
Apr 21 10:06:30.756391 waagent[1872]: 2026-04-21T10:06:30.756255Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.7; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 21 10:06:30.758216 waagent[1872]: 2026-04-21T10:06:30.756660Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 10:06:30.758216 waagent[1872]: 2026-04-21T10:06:30.756734Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 10:06:30.764705 waagent[1872]: 2026-04-21T10:06:30.764628Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 10:06:30.771422 waagent[1872]: 2026-04-21T10:06:30.771376Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181
Apr 21 10:06:30.772056 waagent[1872]: 2026-04-21T10:06:30.772017Z INFO ExtHandler
Apr 21 10:06:30.772224 waagent[1872]: 2026-04-21T10:06:30.772175Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 91dc2171-6092-436e-ac7f-df059baef76b eTag: 13639255991001642348 source: Fabric]
Apr 21 10:06:30.772617 waagent[1872]: 2026-04-21T10:06:30.772579Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 21 10:06:30.773310 waagent[1872]: 2026-04-21T10:06:30.773265Z INFO ExtHandler
Apr 21 10:06:30.773453 waagent[1872]: 2026-04-21T10:06:30.773422Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 21 10:06:30.776735 waagent[1872]: 2026-04-21T10:06:30.776705Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 21 10:06:30.855897 waagent[1872]: 2026-04-21T10:06:30.855818Z INFO ExtHandler Downloaded certificate {'thumbprint': '59E8924A185B113073017D3D80EEB93767042BC3', 'hasPrivateKey': True}
Apr 21 10:06:30.856665 waagent[1872]: 2026-04-21T10:06:30.856611Z INFO ExtHandler Fetch goal state completed
Apr 21 10:06:30.870823 waagent[1872]: 2026-04-21T10:06:30.870762Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1872
Apr 21 10:06:30.871107 waagent[1872]: 2026-04-21T10:06:30.871069Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 21 10:06:30.872843 waagent[1872]: 2026-04-21T10:06:30.872797Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.7', '', 'Flatcar Container Linux by Kinvolk']
Apr 21 10:06:30.873355 waagent[1872]: 2026-04-21T10:06:30.873315Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 21 10:06:31.114350 waagent[1872]: 2026-04-21T10:06:31.114136Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 21 10:06:31.114886 waagent[1872]: 2026-04-21T10:06:31.114813Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 21 10:06:31.122047 waagent[1872]: 2026-04-21T10:06:31.121961Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 21 10:06:31.129671 systemd[1]: Reloading requested from client PID 1885 ('systemctl') (unit waagent.service)...
Apr 21 10:06:31.129685 systemd[1]: Reloading...
Apr 21 10:06:31.228228 zram_generator::config[1919]: No configuration found.
Apr 21 10:06:31.330836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:06:31.408428 systemd[1]: Reloading finished in 278 ms.
Apr 21 10:06:31.436883 waagent[1872]: 2026-04-21T10:06:31.436492Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 21 10:06:31.443998 systemd[1]: Reloading requested from client PID 1973 ('systemctl') (unit waagent.service)...
Apr 21 10:06:31.444012 systemd[1]: Reloading...
Apr 21 10:06:31.519497 zram_generator::config[2005]: No configuration found.
Apr 21 10:06:31.629116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:06:31.703678 systemd[1]: Reloading finished in 259 ms.
Apr 21 10:06:31.734051 waagent[1872]: 2026-04-21T10:06:31.733292Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Apr 21 10:06:31.734051 waagent[1872]: 2026-04-21T10:06:31.733454Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Apr 21 10:06:31.845421 waagent[1872]: 2026-04-21T10:06:31.845347Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 21 10:06:31.846085 waagent[1872]: 2026-04-21T10:06:31.846041Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 21 10:06:31.846942 waagent[1872]: 2026-04-21T10:06:31.846892Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 21 10:06:31.847077 waagent[1872]: 2026-04-21T10:06:31.847026Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 10:06:31.847207 waagent[1872]: 2026-04-21T10:06:31.847159Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 10:06:31.847669 waagent[1872]: 2026-04-21T10:06:31.847614Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 21 10:06:31.847941 waagent[1872]: 2026-04-21T10:06:31.847826Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 21 10:06:31.848412 waagent[1872]: 2026-04-21T10:06:31.848356Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 21 10:06:31.848560 waagent[1872]: 2026-04-21T10:06:31.848525Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 10:06:31.848638 waagent[1872]: 2026-04-21T10:06:31.848609Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 10:06:31.848784 waagent[1872]: 2026-04-21T10:06:31.848746Z INFO EnvHandler ExtHandler Configure routes
Apr 21 10:06:31.848840 waagent[1872]: 2026-04-21T10:06:31.848816Z INFO EnvHandler ExtHandler Gateway:None
Apr 21 10:06:31.848884 waagent[1872]: 2026-04-21T10:06:31.848862Z INFO EnvHandler ExtHandler Routes:None
Apr 21 10:06:31.849333 waagent[1872]: 2026-04-21T10:06:31.849284Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 21 10:06:31.849844 waagent[1872]: 2026-04-21T10:06:31.849785Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 21 10:06:31.850223 waagent[1872]: 2026-04-21T10:06:31.850028Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 21 10:06:31.851566 waagent[1872]: 2026-04-21T10:06:31.851448Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 21 10:06:31.851566 waagent[1872]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 21 10:06:31.851566 waagent[1872]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0
Apr 21 10:06:31.851566 waagent[1872]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 21 10:06:31.851566 waagent[1872]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 21 10:06:31.851566 waagent[1872]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 10:06:31.851566 waagent[1872]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 10:06:31.852097 waagent[1872]: 2026-04-21T10:06:31.852026Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 21 10:06:31.862276 waagent[1872]: 2026-04-21T10:06:31.860974Z INFO ExtHandler ExtHandler
Apr 21 10:06:31.862276 waagent[1872]: 2026-04-21T10:06:31.861086Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 36ffcd79-2c97-400c-9a93-15a21a8150bb correlation 46d40327-9bc2-4cd1-b4fe-216522d75dfe created: 2026-04-21T10:05:54.801437Z]
Apr 21 10:06:31.862276 waagent[1872]: 2026-04-21T10:06:31.861466Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 21 10:06:31.862276 waagent[1872]: 2026-04-21T10:06:31.861993Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Apr 21 10:06:34.359097 waagent[1872]: 2026-04-21T10:06:34.359017Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 21 10:06:34.359097 waagent[1872]: Executing ['ip', '-a', '-o', 'link']:
Apr 21 10:06:34.359097 waagent[1872]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 21 10:06:34.359097 waagent[1872]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:14:54 brd ff:ff:ff:ff:ff:ff
Apr 21 10:06:34.359097 waagent[1872]: 3: enP9907s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:14:54 brd ff:ff:ff:ff:ff:ff\ altname enP9907p0s2
Apr 21 10:06:34.359097 waagent[1872]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 21 10:06:34.359097 waagent[1872]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 21 10:06:34.359097 waagent[1872]: 2: eth0 inet 10.0.0.5/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 21 10:06:34.359097 waagent[1872]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 21 10:06:34.359097 waagent[1872]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 21 10:06:34.359097 waagent[1872]: 2: eth0 inet6 fe80::20d:3aff:fef7:1454/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 21 10:06:34.451013 waagent[1872]: 2026-04-21T10:06:34.450834Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D02E0531-5CE9-4D9C-A3FE-69F9EEE9837B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 21 10:06:34.759360 waagent[1872]: 2026-04-21T10:06:34.758950Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 21 10:06:34.759360 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.759360 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.759360 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.759360 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.759360 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.759360 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.759360 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 10:06:34.759360 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 10:06:34.759360 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 10:06:34.761968 waagent[1872]: 2026-04-21T10:06:34.761908Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 21 10:06:34.761968 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.761968 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.761968 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.761968 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.761968 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 10:06:34.761968 waagent[1872]: pkts bytes target prot opt in out source destination
Apr 21 10:06:34.761968 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 10:06:34.761968 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 10:06:34.761968 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 10:06:34.762223 waagent[1872]: 2026-04-21T10:06:34.762177Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 21 10:06:39.051334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:06:39.059453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:06:39.155655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:06:39.159783 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:06:39.274165 kubelet[2100]: E0421 10:06:39.274114 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:06:39.277253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:06:39.277391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:06:41.878726 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:06:41.886698 systemd[1]: Started sshd@0-10.0.0.5:22-20.229.252.112:49966.service - OpenSSH per-connection server daemon (20.229.252.112:49966).
Apr 21 10:06:44.302225 sshd[2108]: Accepted publickey for core from 20.229.252.112 port 49966 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:44.303097 sshd[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:44.307337 systemd-logind[1682]: New session 3 of user core.
Apr 21 10:06:44.312381 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:06:45.089898 systemd[1]: Started sshd@1-10.0.0.5:22-20.229.252.112:36816.service - OpenSSH per-connection server daemon (20.229.252.112:36816).
Apr 21 10:06:45.980232 sshd[2113]: Accepted publickey for core from 20.229.252.112 port 36816 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:45.981157 sshd[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:45.985676 systemd-logind[1682]: New session 4 of user core.
Apr 21 10:06:45.992391 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:06:46.607580 sshd[2113]: pam_unix(sshd:session): session closed for user core
Apr 21 10:06:46.610754 systemd[1]: sshd@1-10.0.0.5:22-20.229.252.112:36816.service: Deactivated successfully.
Apr 21 10:06:46.612739 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:06:46.613362 systemd-logind[1682]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:06:46.614084 systemd-logind[1682]: Removed session 4.
Apr 21 10:06:46.763742 systemd[1]: Started sshd@2-10.0.0.5:22-20.229.252.112:36826.service - OpenSSH per-connection server daemon (20.229.252.112:36826).
Apr 21 10:06:47.673194 sshd[2120]: Accepted publickey for core from 20.229.252.112 port 36826 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:47.674002 sshd[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:47.677686 systemd-logind[1682]: New session 5 of user core.
Apr 21 10:06:47.688370 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:06:48.302971 sshd[2120]: pam_unix(sshd:session): session closed for user core
Apr 21 10:06:48.306246 systemd[1]: sshd@2-10.0.0.5:22-20.229.252.112:36826.service: Deactivated successfully.
Apr 21 10:06:48.307942 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:06:48.309705 systemd-logind[1682]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:06:48.310563 systemd-logind[1682]: Removed session 5.
Apr 21 10:06:48.459701 systemd[1]: Started sshd@3-10.0.0.5:22-20.229.252.112:36838.service - OpenSSH per-connection server daemon (20.229.252.112:36838).
Apr 21 10:06:49.301228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:06:49.308997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:06:49.366845 sshd[2127]: Accepted publickey for core from 20.229.252.112 port 36838 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:49.369335 sshd[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:49.375597 systemd-logind[1682]: New session 6 of user core.
Apr 21 10:06:49.383371 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:06:49.406992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:06:49.411488 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:06:49.579814 kubelet[2138]: E0421 10:06:49.579679 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:06:49.582207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:06:49.582337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:06:50.001433 sshd[2127]: pam_unix(sshd:session): session closed for user core
Apr 21 10:06:50.005099 systemd[1]: sshd@3-10.0.0.5:22-20.229.252.112:36838.service: Deactivated successfully.
Apr 21 10:06:50.006789 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:06:50.007611 systemd-logind[1682]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:06:50.008421 systemd-logind[1682]: Removed session 6.
Apr 21 10:06:50.152060 systemd[1]: Started sshd@4-10.0.0.5:22-20.229.252.112:36852.service - OpenSSH per-connection server daemon (20.229.252.112:36852).
Apr 21 10:06:50.161251 chronyd[1661]: Selected source PHC0
Apr 21 10:06:51.019229 sshd[2149]: Accepted publickey for core from 20.229.252.112 port 36852 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:51.020120 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:51.023803 systemd-logind[1682]: New session 7 of user core.
Apr 21 10:06:51.031343 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:06:51.511708 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:06:51.511976 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:06:51.525935 sudo[2152]: pam_unix(sudo:session): session closed for user root
Apr 21 10:06:51.678644 sshd[2149]: pam_unix(sshd:session): session closed for user core
Apr 21 10:06:51.682526 systemd[1]: sshd@4-10.0.0.5:22-20.229.252.112:36852.service: Deactivated successfully.
Apr 21 10:06:51.684292 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:06:51.686755 systemd-logind[1682]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:06:51.687845 systemd-logind[1682]: Removed session 7.
Apr 21 10:06:51.838863 systemd[1]: Started sshd@5-10.0.0.5:22-20.229.252.112:36866.service - OpenSSH per-connection server daemon (20.229.252.112:36866).
Apr 21 10:06:52.752224 sshd[2157]: Accepted publickey for core from 20.229.252.112 port 36866 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:52.753101 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:52.756809 systemd-logind[1682]: New session 8 of user core.
Apr 21 10:06:52.764345 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:06:53.239682 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:06:53.240295 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:06:53.243336 sudo[2161]: pam_unix(sudo:session): session closed for user root
Apr 21 10:06:53.247963 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:06:53.248295 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:06:53.258420 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:06:53.261612 auditctl[2164]: No rules
Apr 21 10:06:53.262069 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:06:53.262245 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:06:53.264514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:06:53.294653 augenrules[2182]: No rules
Apr 21 10:06:53.296140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:06:53.297658 sudo[2160]: pam_unix(sudo:session): session closed for user root
Apr 21 10:06:53.446339 sshd[2157]: pam_unix(sshd:session): session closed for user core
Apr 21 10:06:53.449748 systemd-logind[1682]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:06:53.450287 systemd[1]: sshd@5-10.0.0.5:22-20.229.252.112:36866.service: Deactivated successfully.
Apr 21 10:06:53.451896 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:06:53.452840 systemd-logind[1682]: Removed session 8.
Apr 21 10:06:53.611913 systemd[1]: Started sshd@6-10.0.0.5:22-20.229.252.112:36878.service - OpenSSH per-connection server daemon (20.229.252.112:36878).
Apr 21 10:06:54.537166 sshd[2190]: Accepted publickey for core from 20.229.252.112 port 36878 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:06:54.538500 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:06:54.542911 systemd-logind[1682]: New session 9 of user core.
Apr 21 10:06:54.549331 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:06:55.026722 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:06:55.026989 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:06:58.121586 (dockerd)[2208]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:06:58.121626 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:06:59.801224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 10:06:59.804241 dockerd[2208]: time="2026-04-21T10:06:59.803977142Z" level=info msg="Starting up"
Apr 21 10:06:59.808340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:04.230806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:04.234897 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:07:04.277469 kubelet[2235]: E0421 10:07:04.277414 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:07:04.280435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:07:04.280702 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:07:05.518728 dockerd[2208]: time="2026-04-21T10:07:05.518678480Z" level=info msg="Loading containers: start."
Apr 21 10:07:06.508315 kernel: Initializing XFRM netlink socket
Apr 21 10:07:06.882977 systemd-networkd[1602]: docker0: Link UP
Apr 21 10:07:07.291433 dockerd[2208]: time="2026-04-21T10:07:07.291327358Z" level=info msg="Loading containers: done."
Apr 21 10:07:07.301870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4150420325-merged.mount: Deactivated successfully.
Apr 21 10:07:08.258461 dockerd[2208]: time="2026-04-21T10:07:08.258094244Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:07:08.258461 dockerd[2208]: time="2026-04-21T10:07:08.258216884Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:07:08.258461 dockerd[2208]: time="2026-04-21T10:07:08.258330484Z" level=info msg="Daemon has completed initialization"
Apr 21 10:07:09.313466 dockerd[2208]: time="2026-04-21T10:07:09.313390515Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:07:09.314722 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:07:09.706093 containerd[1723]: time="2026-04-21T10:07:09.705556216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 21 10:07:10.688102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778088552.mount: Deactivated successfully.
Apr 21 10:07:11.553225 update_engine[1688]: I20260421 10:07:11.553072 1688 update_attempter.cc:509] Updating boot flags...
Apr 21 10:07:11.604753 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2424)
Apr 21 10:07:12.096348 containerd[1723]: time="2026-04-21T10:07:12.096302619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:12.101091 containerd[1723]: time="2026-04-21T10:07:12.100885734Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193768"
Apr 21 10:07:12.105769 containerd[1723]: time="2026-04-21T10:07:12.104397370Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:12.108814 containerd[1723]: time="2026-04-21T10:07:12.108785205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:12.109893 containerd[1723]: time="2026-04-21T10:07:12.109864404Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 2.404259868s"
Apr 21 10:07:12.110000 containerd[1723]: time="2026-04-21T10:07:12.109985124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\""
Apr 21 10:07:12.110527 containerd[1723]: time="2026-04-21T10:07:12.110499004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 21 10:07:13.729930 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 21 10:07:14.301244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 21 10:07:14.306364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:14.700518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:14.704070 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:07:14.734544 kubelet[2463]: E0421 10:07:14.734495 2463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:07:14.737148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:07:14.737430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:07:19.077455 containerd[1723]: time="2026-04-21T10:07:19.077385902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:19.080316 containerd[1723]: time="2026-04-21T10:07:19.080091819Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901444"
Apr 21 10:07:19.083127 containerd[1723]: time="2026-04-21T10:07:19.082697575Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:19.087828 containerd[1723]: time="2026-04-21T10:07:19.087797328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:19.089417 containerd[1723]: time="2026-04-21T10:07:19.089383486Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 6.978756803s"
Apr 21 10:07:19.089450 containerd[1723]: time="2026-04-21T10:07:19.089421966Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\""
Apr 21 10:07:19.089874 containerd[1723]: time="2026-04-21T10:07:19.089852365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 21 10:07:24.801390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 21 10:07:24.808537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:24.910952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:24.915003 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:07:24.947088 kubelet[2481]: E0421 10:07:24.947031 2481 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:07:24.949752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:07:24.949894 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:07:31.553819 containerd[1723]: time="2026-04-21T10:07:31.553765965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:31.555965 containerd[1723]: time="2026-04-21T10:07:31.555740082Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047945"
Apr 21 10:07:31.559220 containerd[1723]: time="2026-04-21T10:07:31.558820478Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:31.564796 containerd[1723]: time="2026-04-21T10:07:31.563633832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:31.564796 containerd[1723]: time="2026-04-21T10:07:31.564685231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 12.474804266s"
Apr 21 10:07:31.564796 containerd[1723]: time="2026-04-21T10:07:31.564714911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\""
Apr 21 10:07:31.565367 containerd[1723]: time="2026-04-21T10:07:31.565327070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 21 10:07:32.611280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380601069.mount: Deactivated successfully.
Apr 21 10:07:32.856236 containerd[1723]: time="2026-04-21T10:07:32.855680059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:32.858480 containerd[1723]: time="2026-04-21T10:07:32.858321976Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606286"
Apr 21 10:07:32.863284 containerd[1723]: time="2026-04-21T10:07:32.861660492Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:32.866303 containerd[1723]: time="2026-04-21T10:07:32.866265766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:32.867059 containerd[1723]: time="2026-04-21T10:07:32.867029445Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.301669655s"
Apr 21 10:07:32.867152 containerd[1723]: time="2026-04-21T10:07:32.867137405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\""
Apr 21 10:07:32.867830 containerd[1723]: time="2026-04-21T10:07:32.867800764Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 21 10:07:33.921026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264802128.mount: Deactivated successfully.
Apr 21 10:07:35.051376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 21 10:07:35.058512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:35.167022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:35.171348 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:07:35.203680 kubelet[2512]: E0421 10:07:35.203631 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:07:35.206324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:07:35.206585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:07:45.301427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 21 10:07:45.307946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:50.119232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:50.130577 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:07:50.165502 kubelet[2530]: E0421 10:07:50.165436 2530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:07:50.167644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:07:50.167775 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:07:51.247962 containerd[1723]: time="2026-04-21T10:07:51.247907350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.250412 containerd[1723]: time="2026-04-21T10:07:51.250139108Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Apr 21 10:07:51.253219 containerd[1723]: time="2026-04-21T10:07:51.252619185Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.259731 containerd[1723]: time="2026-04-21T10:07:51.259626656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.260682 containerd[1723]: time="2026-04-21T10:07:51.260539175Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 18.392702131s"
Apr 21 10:07:51.260682 containerd[1723]: time="2026-04-21T10:07:51.260574735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Apr 21 10:07:51.261286 containerd[1723]: time="2026-04-21T10:07:51.261097574Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:07:51.895824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227020825.mount: Deactivated successfully.
Apr 21 10:07:51.912214 containerd[1723]: time="2026-04-21T10:07:51.912109749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.915948 containerd[1723]: time="2026-04-21T10:07:51.915920104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Apr 21 10:07:51.919367 containerd[1723]: time="2026-04-21T10:07:51.919343540Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.923913 containerd[1723]: time="2026-04-21T10:07:51.923883855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:51.924721 containerd[1723]: time="2026-04-21T10:07:51.924694254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 663.56852ms"
Apr 21 10:07:51.924763 containerd[1723]: time="2026-04-21T10:07:51.924725774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 21 10:07:51.925130 containerd[1723]: time="2026-04-21T10:07:51.925103893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 21 10:07:52.564761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767270127.mount: Deactivated successfully.
Apr 21 10:07:53.558297 containerd[1723]: time="2026-04-21T10:07:53.558225842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:53.561371 containerd[1723]: time="2026-04-21T10:07:53.561344079Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139658"
Apr 21 10:07:53.565328 containerd[1723]: time="2026-04-21T10:07:53.565277914Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:53.571255 containerd[1723]: time="2026-04-21T10:07:53.571196107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:07:53.573387 containerd[1723]: time="2026-04-21T10:07:53.573345464Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.648213851s"
Apr 21 10:07:53.573387 containerd[1723]: time="2026-04-21T10:07:53.573382424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Apr 21 10:07:58.793639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:58.801406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:58.832925 systemd[1]: Reloading requested from client PID 2674 ('systemctl') (unit session-9.scope)...
Apr 21 10:07:58.832940 systemd[1]: Reloading...
Apr 21 10:07:58.933241 zram_generator::config[2717]: No configuration found.
Apr 21 10:07:59.037642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:07:59.115386 systemd[1]: Reloading finished in 282 ms.
Apr 21 10:07:59.154008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:07:59.154088 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:07:59.154374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:59.157581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:07:59.351316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:07:59.356160 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:07:59.389350 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:07:59.389350 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:07:59.389350 kubelet[2781]: I0421 10:07:59.389043 2781 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:08:00.045115 kubelet[2781]: I0421 10:08:00.045077 2781 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 21 10:08:00.045115 kubelet[2781]: I0421 10:08:00.045106 2781 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:08:00.045347 kubelet[2781]: I0421 10:08:00.045131 2781 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:08:00.045347 kubelet[2781]: I0421 10:08:00.045137 2781 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:08:00.045392 kubelet[2781]: I0421 10:08:00.045359 2781 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:08:00.054121 kubelet[2781]: E0421 10:08:00.054069 2781 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:08:00.055222 kubelet[2781]: I0421 10:08:00.055071 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:08:00.058508 kubelet[2781]: E0421 10:08:00.058469 2781 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:08:00.059217 kubelet[2781]: I0421 10:08:00.058637 2781 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:08:00.061454 kubelet[2781]: I0421 10:08:00.061438 2781 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:08:00.061750 kubelet[2781]: I0421 10:08:00.061730 2781 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:08:00.061963 kubelet[2781]: I0421 10:08:00.061818 2781 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-75af1c63bf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:08:00.062075 kubelet[2781]: I0421 10:08:00.062065 2781 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:08:00.062122 kubelet[2781]: I0421 10:08:00.062115 2781 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:08:00.062290 kubelet[2781]: I0421 10:08:00.062279 2781 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:08:00.066780 kubelet[2781]: I0421 10:08:00.066761 2781 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:08:00.068042 kubelet[2781]: I0421 10:08:00.068025 2781 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:08:00.068128 kubelet[2781]: I0421 10:08:00.068118 2781 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:08:00.068303 kubelet[2781]: I0421 10:08:00.068212 2781 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:08:00.068303 kubelet[2781]: I0421 10:08:00.068238 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:08:00.070229 kubelet[2781]: E0421 10:08:00.069093 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-75af1c63bf&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:08:00.070893 kubelet[2781]: E0421 10:08:00.070865 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:08:00.070984 kubelet[2781]: I0421 10:08:00.070968 2781 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:08:00.071572 kubelet[2781]: I0421 10:08:00.071550 2781 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:08:00.071625 kubelet[2781]: I0421 10:08:00.071581 2781 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:08:00.071650 kubelet[2781]: W0421 10:08:00.071626 2781 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:08:00.074547 kubelet[2781]: I0421 10:08:00.074527 2781 server.go:1262] "Started kubelet"
Apr 21 10:08:00.079887 kubelet[2781]: I0421 10:08:00.079680 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:08:00.082385 kubelet[2781]: E0421 10:08:00.080514 2781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-75af1c63bf.18a857560bcdbdc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-75af1c63bf,UID:ci-4081.3.7-a-75af1c63bf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-75af1c63bf,},FirstTimestamp:2026-04-21 10:08:00.074497478 +0000 UTC m=+0.715624105,LastTimestamp:2026-04-21 10:08:00.074497478 +0000 UTC m=+0.715624105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-75af1c63bf,}"
Apr 21 10:08:00.083041 kubelet[2781]: I0421 10:08:00.082971 2781 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:08:00.084446 kubelet[2781]: I0421 10:08:00.083869 2781 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:08:00.086985 kubelet[2781]: I0421 10:08:00.086942 2781 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:08:00.087102 kubelet[2781]: I0421 10:08:00.087089 2781 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:08:00.087344 kubelet[2781]: I0421 10:08:00.087331 2781 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:08:00.087558 kubelet[2781]: I0421 10:08:00.087535 2781 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:08:00.087723 kubelet[2781]: E0421 10:08:00.087697 2781 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-75af1c63bf\" not found"
Apr 21 10:08:00.087792 kubelet[2781]: I0421 10:08:00.087703 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:08:00.089273 kubelet[2781]: E0421 10:08:00.089233 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-75af1c63bf?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms"
Apr 21 10:08:00.090458 kubelet[2781]: I0421 10:08:00.089968 2781 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:08:00.090458 kubelet[2781]: I0421 10:08:00.090058 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:08:00.090458 kubelet[2781]: I0421 10:08:00.090366 2781 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:08:00.090458 kubelet[2781]: I0421 10:08:00.090425 2781 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:08:00.092381
kubelet[2781]: I0421 10:08:00.092361 2781 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:08:00.097254 kubelet[2781]: I0421 10:08:00.096776 2781 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:08:00.098324 kubelet[2781]: I0421 10:08:00.098304 2781 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:08:00.098324 kubelet[2781]: I0421 10:08:00.098324 2781 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 10:08:00.098413 kubelet[2781]: I0421 10:08:00.098345 2781 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 10:08:00.098413 kubelet[2781]: E0421 10:08:00.098384 2781 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:08:00.104568 kubelet[2781]: E0421 10:08:00.104537 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:08:00.104895 kubelet[2781]: E0421 10:08:00.104611 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:08:00.110844 kubelet[2781]: E0421 10:08:00.110820 2781 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:08:00.148883 kubelet[2781]: I0421 10:08:00.148857 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:08:00.148883 kubelet[2781]: I0421 10:08:00.148876 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:08:00.149030 kubelet[2781]: I0421 10:08:00.148901 2781 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:08:00.155116 kubelet[2781]: I0421 10:08:00.155087 2781 policy_none.go:49] "None policy: Start" Apr 21 10:08:00.155116 kubelet[2781]: I0421 10:08:00.155112 2781 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:08:00.155245 kubelet[2781]: I0421 10:08:00.155125 2781 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:08:00.159651 kubelet[2781]: I0421 10:08:00.159632 2781 policy_none.go:47] "Start" Apr 21 10:08:00.163157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:08:00.174332 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:08:00.176845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:08:00.186584 kubelet[2781]: E0421 10:08:00.186103 2781 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:08:00.186584 kubelet[2781]: I0421 10:08:00.186322 2781 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:08:00.186584 kubelet[2781]: I0421 10:08:00.186333 2781 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:08:00.186584 kubelet[2781]: I0421 10:08:00.186577 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:08:00.189030 kubelet[2781]: E0421 10:08:00.189011 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:08:00.189253 kubelet[2781]: E0421 10:08:00.189241 2781 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.7-a-75af1c63bf\" not found" Apr 21 10:08:00.211259 systemd[1]: Created slice kubepods-burstable-pod64ea0c770619c8ecf59ea6672d35ede1.slice - libcontainer container kubepods-burstable-pod64ea0c770619c8ecf59ea6672d35ede1.slice. Apr 21 10:08:00.219128 kubelet[2781]: E0421 10:08:00.219090 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.223448 systemd[1]: Created slice kubepods-burstable-podfc740212c34fe0ac08dc411a76742f17.slice - libcontainer container kubepods-burstable-podfc740212c34fe0ac08dc411a76742f17.slice. 
Apr 21 10:08:00.226043 kubelet[2781]: E0421 10:08:00.226017 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.229350 systemd[1]: Created slice kubepods-burstable-podbecd79dd8c1e8d63b71b4c4099391cc7.slice - libcontainer container kubepods-burstable-podbecd79dd8c1e8d63b71b4c4099391cc7.slice. Apr 21 10:08:00.230865 kubelet[2781]: E0421 10:08:00.230710 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.288412 kubelet[2781]: I0421 10:08:00.288054 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.288640 kubelet[2781]: E0421 10:08:00.288621 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.289985 kubelet[2781]: E0421 10:08:00.289963 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-75af1c63bf?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Apr 21 10:08:00.392048 kubelet[2781]: I0421 10:08:00.391781 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392048 kubelet[2781]: I0421 10:08:00.391816 2781 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392048 kubelet[2781]: I0421 10:08:00.391838 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392048 kubelet[2781]: I0421 10:08:00.391851 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392048 kubelet[2781]: I0421 10:08:00.391873 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392497 kubelet[2781]: I0421 10:08:00.391889 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/becd79dd8c1e8d63b71b4c4099391cc7-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-75af1c63bf\" (UID: 
\"becd79dd8c1e8d63b71b4c4099391cc7\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392497 kubelet[2781]: I0421 10:08:00.391904 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392497 kubelet[2781]: I0421 10:08:00.391918 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.392497 kubelet[2781]: I0421 10:08:00.391932 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.494228 kubelet[2781]: I0421 10:08:00.492017 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.494228 kubelet[2781]: E0421 10:08:00.492632 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.526979 containerd[1723]: time="2026-04-21T10:08:00.526941297Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-75af1c63bf,Uid:64ea0c770619c8ecf59ea6672d35ede1,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:00.532406 containerd[1723]: time="2026-04-21T10:08:00.532372450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-75af1c63bf,Uid:fc740212c34fe0ac08dc411a76742f17,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:00.538595 containerd[1723]: time="2026-04-21T10:08:00.538562643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-75af1c63bf,Uid:becd79dd8c1e8d63b71b4c4099391cc7,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:00.690942 kubelet[2781]: E0421 10:08:00.690840 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-75af1c63bf?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Apr 21 10:08:00.894119 kubelet[2781]: I0421 10:08:00.894092 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:00.962276 kubelet[2781]: E0421 10:08:00.894423 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:01.177012 kubelet[2781]: E0421 10:08:01.176966 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:08:01.268160 kubelet[2781]: E0421 10:08:01.268046 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:08:01.383889 kubelet[2781]: E0421 10:08:01.383845 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-75af1c63bf&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:08:01.474045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673358424.mount: Deactivated successfully. Apr 21 10:08:01.492103 kubelet[2781]: E0421 10:08:01.492050 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-75af1c63bf?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Apr 21 10:08:01.504223 containerd[1723]: time="2026-04-21T10:08:01.502541329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:08:01.508021 containerd[1723]: time="2026-04-21T10:08:01.507981203Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:08:01.510944 containerd[1723]: time="2026-04-21T10:08:01.510917839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:08:01.513845 containerd[1723]: time="2026-04-21T10:08:01.513824036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=269173" Apr 21 10:08:01.516896 containerd[1723]: time="2026-04-21T10:08:01.516859672Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:08:01.520476 containerd[1723]: time="2026-04-21T10:08:01.519731349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:08:01.520535 kubelet[2781]: E0421 10:08:01.520357 2781 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:08:01.523640 containerd[1723]: time="2026-04-21T10:08:01.523613544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:08:01.527489 containerd[1723]: time="2026-04-21T10:08:01.527449940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:08:01.528436 containerd[1723]: time="2026-04-21T10:08:01.528409218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 989.636055ms" Apr 21 10:08:01.529447 containerd[1723]: time="2026-04-21T10:08:01.529406657Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 996.567967ms" Apr 21 10:08:01.535232 containerd[1723]: time="2026-04-21T10:08:01.535083170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.007697434s" Apr 21 10:08:01.696799 kubelet[2781]: I0421 10:08:01.696769 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:01.697088 kubelet[2781]: E0421 10:08:01.697057 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:01.770343 containerd[1723]: time="2026-04-21T10:08:01.770240009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:01.770343 containerd[1723]: time="2026-04-21T10:08:01.770306209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:01.770343 containerd[1723]: time="2026-04-21T10:08:01.770323449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.770681 containerd[1723]: time="2026-04-21T10:08:01.770399729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.776067 containerd[1723]: time="2026-04-21T10:08:01.775906642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:01.776067 containerd[1723]: time="2026-04-21T10:08:01.775964402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:01.776067 containerd[1723]: time="2026-04-21T10:08:01.775976842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.776326 containerd[1723]: time="2026-04-21T10:08:01.776045242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.778689 containerd[1723]: time="2026-04-21T10:08:01.778406759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:01.778689 containerd[1723]: time="2026-04-21T10:08:01.778453479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:01.778689 containerd[1723]: time="2026-04-21T10:08:01.778479359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.778689 containerd[1723]: time="2026-04-21T10:08:01.778548599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:01.796781 systemd[1]: Started cri-containerd-8725000c10c1cbe22eb4e33d732663ab7f5b69680f4dc1e9d3e0a6ab5c037a41.scope - libcontainer container 8725000c10c1cbe22eb4e33d732663ab7f5b69680f4dc1e9d3e0a6ab5c037a41. 
Apr 21 10:08:01.801098 systemd[1]: Started cri-containerd-8cd53cc743fa11420f8b9385a1553119c354cd6d23846258c15d493352d26ee7.scope - libcontainer container 8cd53cc743fa11420f8b9385a1553119c354cd6d23846258c15d493352d26ee7. Apr 21 10:08:01.809328 systemd[1]: Started cri-containerd-75edcc873193c9cc19406477c6eca2354868a497e0bd0f0e537508c892c8066b.scope - libcontainer container 75edcc873193c9cc19406477c6eca2354868a497e0bd0f0e537508c892c8066b. Apr 21 10:08:01.856538 containerd[1723]: time="2026-04-21T10:08:01.856294546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-75af1c63bf,Uid:64ea0c770619c8ecf59ea6672d35ede1,Namespace:kube-system,Attempt:0,} returns sandbox id \"75edcc873193c9cc19406477c6eca2354868a497e0bd0f0e537508c892c8066b\"" Apr 21 10:08:01.858207 containerd[1723]: time="2026-04-21T10:08:01.857864464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-75af1c63bf,Uid:becd79dd8c1e8d63b71b4c4099391cc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8725000c10c1cbe22eb4e33d732663ab7f5b69680f4dc1e9d3e0a6ab5c037a41\"" Apr 21 10:08:01.859931 containerd[1723]: time="2026-04-21T10:08:01.859650462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-75af1c63bf,Uid:fc740212c34fe0ac08dc411a76742f17,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cd53cc743fa11420f8b9385a1553119c354cd6d23846258c15d493352d26ee7\"" Apr 21 10:08:01.867073 containerd[1723]: time="2026-04-21T10:08:01.867016573Z" level=info msg="CreateContainer within sandbox \"8725000c10c1cbe22eb4e33d732663ab7f5b69680f4dc1e9d3e0a6ab5c037a41\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:08:01.871194 containerd[1723]: time="2026-04-21T10:08:01.871146608Z" level=info msg="CreateContainer within sandbox \"8cd53cc743fa11420f8b9385a1553119c354cd6d23846258c15d493352d26ee7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 
21 10:08:01.876905 containerd[1723]: time="2026-04-21T10:08:01.876864041Z" level=info msg="CreateContainer within sandbox \"75edcc873193c9cc19406477c6eca2354868a497e0bd0f0e537508c892c8066b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:08:01.920591 containerd[1723]: time="2026-04-21T10:08:01.920438029Z" level=info msg="CreateContainer within sandbox \"8725000c10c1cbe22eb4e33d732663ab7f5b69680f4dc1e9d3e0a6ab5c037a41\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5be8a1fbfb6d0884c89536e2f3085eb870a02441f63ec626a0ee4900265f0d17\"" Apr 21 10:08:01.921381 containerd[1723]: time="2026-04-21T10:08:01.921210668Z" level=info msg="StartContainer for \"5be8a1fbfb6d0884c89536e2f3085eb870a02441f63ec626a0ee4900265f0d17\"" Apr 21 10:08:01.934183 containerd[1723]: time="2026-04-21T10:08:01.934138093Z" level=info msg="CreateContainer within sandbox \"75edcc873193c9cc19406477c6eca2354868a497e0bd0f0e537508c892c8066b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ab14bda72448809012852fa2994db3c26a05255db6ca383b16ba44e135fa55e\"" Apr 21 10:08:01.935129 containerd[1723]: time="2026-04-21T10:08:01.935101092Z" level=info msg="StartContainer for \"0ab14bda72448809012852fa2994db3c26a05255db6ca383b16ba44e135fa55e\"" Apr 21 10:08:01.936161 containerd[1723]: time="2026-04-21T10:08:01.936130411Z" level=info msg="CreateContainer within sandbox \"8cd53cc743fa11420f8b9385a1553119c354cd6d23846258c15d493352d26ee7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1bf7c9960e12320ed74b0554d6ebdbd4bf8664fa66f857180ccb35d5561968ad\"" Apr 21 10:08:01.937015 containerd[1723]: time="2026-04-21T10:08:01.936779810Z" level=info msg="StartContainer for \"1bf7c9960e12320ed74b0554d6ebdbd4bf8664fa66f857180ccb35d5561968ad\"" Apr 21 10:08:01.947700 systemd[1]: Started cri-containerd-5be8a1fbfb6d0884c89536e2f3085eb870a02441f63ec626a0ee4900265f0d17.scope - libcontainer container 
5be8a1fbfb6d0884c89536e2f3085eb870a02441f63ec626a0ee4900265f0d17. Apr 21 10:08:01.968434 systemd[1]: Started cri-containerd-1bf7c9960e12320ed74b0554d6ebdbd4bf8664fa66f857180ccb35d5561968ad.scope - libcontainer container 1bf7c9960e12320ed74b0554d6ebdbd4bf8664fa66f857180ccb35d5561968ad. Apr 21 10:08:01.981363 systemd[1]: Started cri-containerd-0ab14bda72448809012852fa2994db3c26a05255db6ca383b16ba44e135fa55e.scope - libcontainer container 0ab14bda72448809012852fa2994db3c26a05255db6ca383b16ba44e135fa55e. Apr 21 10:08:02.012013 containerd[1723]: time="2026-04-21T10:08:02.011471440Z" level=info msg="StartContainer for \"5be8a1fbfb6d0884c89536e2f3085eb870a02441f63ec626a0ee4900265f0d17\" returns successfully" Apr 21 10:08:02.028321 containerd[1723]: time="2026-04-21T10:08:02.027434221Z" level=info msg="StartContainer for \"1bf7c9960e12320ed74b0554d6ebdbd4bf8664fa66f857180ccb35d5561968ad\" returns successfully" Apr 21 10:08:02.034814 containerd[1723]: time="2026-04-21T10:08:02.034651573Z" level=info msg="StartContainer for \"0ab14bda72448809012852fa2994db3c26a05255db6ca383b16ba44e135fa55e\" returns successfully" Apr 21 10:08:02.120854 kubelet[2781]: E0421 10:08:02.120664 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:02.124321 kubelet[2781]: E0421 10:08:02.124156 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:02.126011 kubelet[2781]: E0421 10:08:02.125995 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:03.130123 kubelet[2781]: E0421 10:08:03.129644 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:03.130123 kubelet[2781]: E0421 10:08:03.129965 2781 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:03.299463 kubelet[2781]: I0421 10:08:03.299437 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.037483 kubelet[2781]: E0421 10:08:04.037358 2781 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.7-a-75af1c63bf\" not found" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.070778 kubelet[2781]: I0421 10:08:04.070595 2781 apiserver.go:52] "Watching apiserver" Apr 21 10:08:04.091292 kubelet[2781]: I0421 10:08:04.091260 2781 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:08:04.207697 kubelet[2781]: I0421 10:08:04.207473 2781 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.288336 kubelet[2781]: I0421 10:08:04.287950 2781 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.295451 kubelet[2781]: E0421 10:08:04.295303 2781 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.295451 kubelet[2781]: I0421 10:08:04.295333 2781 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.298252 kubelet[2781]: E0421 10:08:04.298224 2781 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.298252 kubelet[2781]: I0421 10:08:04.298252 2781 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.299670 kubelet[2781]: E0421 10:08:04.299645 2781 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-75af1c63bf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.661263 kubelet[2781]: I0421 10:08:04.661015 2781 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:04.663224 kubelet[2781]: E0421 10:08:04.663119 2781 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:05.901752 systemd[1]: Reloading requested from client PID 3065 ('systemctl') (unit session-9.scope)... Apr 21 10:08:05.901766 systemd[1]: Reloading... Apr 21 10:08:05.981877 zram_generator::config[3103]: No configuration found. Apr 21 10:08:06.094276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:08:06.186165 systemd[1]: Reloading finished in 284 ms. Apr 21 10:08:06.217407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:08:06.237452 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:08:06.237663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:08:06.237716 systemd[1]: kubelet.service: Consumed 1.044s CPU time, 124.5M memory peak, 0B memory swap peak. Apr 21 10:08:06.251449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:08:06.352823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:08:06.365356 (kubelet)[3169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:08:06.401175 kubelet[3169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:08:06.401175 kubelet[3169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:08:06.401506 kubelet[3169]: I0421 10:08:06.401226 3169 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:08:06.406535 kubelet[3169]: I0421 10:08:06.406501 3169 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 10:08:06.406535 kubelet[3169]: I0421 10:08:06.406527 3169 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:08:06.406535 kubelet[3169]: I0421 10:08:06.406545 3169 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:08:06.406694 kubelet[3169]: I0421 10:08:06.406552 3169 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:08:06.406783 kubelet[3169]: I0421 10:08:06.406764 3169 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:08:06.410234 kubelet[3169]: I0421 10:08:06.408297 3169 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:08:06.411628 kubelet[3169]: I0421 10:08:06.411603 3169 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:08:06.414545 kubelet[3169]: E0421 10:08:06.414519 3169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:08:06.414677 kubelet[3169]: I0421 10:08:06.414666 3169 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:08:06.417523 kubelet[3169]: I0421 10:08:06.417505 3169 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:08:06.417839 kubelet[3169]: I0421 10:08:06.417813 3169 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:08:06.418050 kubelet[3169]: I0421 10:08:06.417902 3169 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-75af1c63bf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:08:06.418171 kubelet[3169]: I0421 10:08:06.418159 3169 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:08:06.418251 kubelet[3169]: I0421 10:08:06.418242 3169 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 10:08:06.418338 kubelet[3169]: I0421 10:08:06.418327 3169 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:08:06.418591 kubelet[3169]: I0421 10:08:06.418575 3169 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:08:06.418803 kubelet[3169]: I0421 10:08:06.418791 3169 kubelet.go:475] "Attempting to sync node with API server" Apr 21 10:08:06.418873 kubelet[3169]: I0421 10:08:06.418864 3169 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:08:06.418948 kubelet[3169]: I0421 10:08:06.418939 3169 kubelet.go:387] "Adding apiserver pod source" Apr 21 10:08:06.418996 kubelet[3169]: I0421 10:08:06.418988 3169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:08:06.421518 kubelet[3169]: I0421 10:08:06.421496 3169 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:08:06.422073 kubelet[3169]: I0421 10:08:06.422045 3169 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:08:06.422073 kubelet[3169]: I0421 10:08:06.422074 3169 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:08:06.425864 kubelet[3169]: I0421 10:08:06.425847 3169 server.go:1262] "Started kubelet" Apr 21 10:08:06.431055 kubelet[3169]: I0421 10:08:06.431028 3169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:08:06.447650 kubelet[3169]: I0421 10:08:06.447490 3169 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:08:06.455245 kubelet[3169]: I0421 10:08:06.454985 3169 server.go:310] "Adding debug handlers to 
kubelet server" Apr 21 10:08:06.460469 kubelet[3169]: I0421 10:08:06.450854 3169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:08:06.460588 kubelet[3169]: I0421 10:08:06.448463 3169 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:08:06.461018 kubelet[3169]: I0421 10:08:06.460628 3169 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:08:06.461878 kubelet[3169]: E0421 10:08:06.451993 3169 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-75af1c63bf\" not found" Apr 21 10:08:06.463397 kubelet[3169]: I0421 10:08:06.451875 3169 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 10:08:06.463824 kubelet[3169]: I0421 10:08:06.451884 3169 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:08:06.464057 kubelet[3169]: I0421 10:08:06.464033 3169 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:08:06.466240 kubelet[3169]: I0421 10:08:06.466210 3169 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:08:06.470274 kubelet[3169]: I0421 10:08:06.470251 3169 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:08:06.471544 kubelet[3169]: I0421 10:08:06.470804 3169 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:08:06.472473 kubelet[3169]: I0421 10:08:06.470941 3169 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 10:08:06.482247 kubelet[3169]: I0421 10:08:06.481667 3169 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:08:06.486742 kubelet[3169]: I0421 10:08:06.486032 3169 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:08:06.486910 kubelet[3169]: I0421 10:08:06.486891 3169 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 10:08:06.486964 kubelet[3169]: I0421 10:08:06.486917 3169 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 10:08:06.488255 kubelet[3169]: E0421 10:08:06.488194 3169 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527222 3169 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527513 3169 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527538 3169 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527680 3169 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527690 3169 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527704 3169 policy_none.go:49] "None policy: Start" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527713 3169 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527722 3169 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527819 3169 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:08:06.528259 kubelet[3169]: I0421 10:08:06.527827 
3169 policy_none.go:47] "Start" Apr 21 10:08:06.532391 kubelet[3169]: E0421 10:08:06.532368 3169 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:08:06.532967 kubelet[3169]: I0421 10:08:06.532948 3169 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:08:06.533085 kubelet[3169]: I0421 10:08:06.533054 3169 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:08:06.533394 kubelet[3169]: I0421 10:08:06.533373 3169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:08:06.534656 kubelet[3169]: E0421 10:08:06.534638 3169 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:08:06.589116 kubelet[3169]: I0421 10:08:06.589083 3169 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.589494 kubelet[3169]: I0421 10:08:06.589473 3169 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.589797 kubelet[3169]: I0421 10:08:06.589723 3169 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.596796 kubelet[3169]: I0421 10:08:06.596649 3169 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 10:08:06.600816 kubelet[3169]: I0421 10:08:06.600798 3169 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 10:08:06.601130 kubelet[3169]: I0421 10:08:06.601065 3169 warnings.go:110] "Warning: metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 10:08:06.635218 kubelet[3169]: I0421 10:08:06.635184 3169 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.649372 kubelet[3169]: I0421 10:08:06.649069 3169 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.649372 kubelet[3169]: I0421 10:08:06.649149 3169 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671413 kubelet[3169]: I0421 10:08:06.671166 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671413 kubelet[3169]: I0421 10:08:06.671219 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671413 kubelet[3169]: I0421 10:08:06.671241 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/becd79dd8c1e8d63b71b4c4099391cc7-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-75af1c63bf\" (UID: \"becd79dd8c1e8d63b71b4c4099391cc7\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671413 kubelet[3169]: I0421 10:08:06.671257 3169 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671413 kubelet[3169]: I0421 10:08:06.671274 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671628 kubelet[3169]: I0421 10:08:06.671291 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671628 kubelet[3169]: I0421 10:08:06.671308 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671628 kubelet[3169]: I0421 10:08:06.671323 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc740212c34fe0ac08dc411a76742f17-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-75af1c63bf\" (UID: \"fc740212c34fe0ac08dc411a76742f17\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:06.671628 kubelet[3169]: I0421 10:08:06.671337 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ea0c770619c8ecf59ea6672d35ede1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" (UID: \"64ea0c770619c8ecf59ea6672d35ede1\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:07.424581 kubelet[3169]: I0421 10:08:07.424192 3169 apiserver.go:52] "Watching apiserver" Apr 21 10:08:07.464425 kubelet[3169]: I0421 10:08:07.464372 3169 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:08:07.513573 kubelet[3169]: I0421 10:08:07.513233 3169 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:07.516219 kubelet[3169]: I0421 10:08:07.515348 3169 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:07.527600 kubelet[3169]: I0421 10:08:07.527373 3169 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 10:08:07.527600 kubelet[3169]: E0421 10:08:07.527424 3169 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-75af1c63bf\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:07.528323 kubelet[3169]: I0421 10:08:07.528140 3169 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 10:08:07.528323 kubelet[3169]: E0421 10:08:07.528182 3169 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081.3.7-a-75af1c63bf\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:07.535208 kubelet[3169]: I0421 10:08:07.535153 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.7-a-75af1c63bf" podStartSLOduration=1.5351413790000001 podStartE2EDuration="1.535141379s" podCreationTimestamp="2026-04-21 10:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:08:07.5341747 +0000 UTC m=+1.165687171" watchObservedRunningTime="2026-04-21 10:08:07.535141379 +0000 UTC m=+1.166653850" Apr 21 10:08:07.565183 kubelet[3169]: I0421 10:08:07.565079 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.7-a-75af1c63bf" podStartSLOduration=1.565063988 podStartE2EDuration="1.565063988s" podCreationTimestamp="2026-04-21 10:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:08:07.550232883 +0000 UTC m=+1.181745354" watchObservedRunningTime="2026-04-21 10:08:07.565063988 +0000 UTC m=+1.196576459" Apr 21 10:08:07.565906 kubelet[3169]: I0421 10:08:07.565580 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-75af1c63bf" podStartSLOduration=1.5655675869999999 podStartE2EDuration="1.565567587s" podCreationTimestamp="2026-04-21 10:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:08:07.565371307 +0000 UTC m=+1.196883778" watchObservedRunningTime="2026-04-21 10:08:07.565567587 +0000 UTC m=+1.197080058" Apr 21 10:08:12.314589 kubelet[3169]: I0421 10:08:12.314496 3169 kuberuntime_manager.go:1828] "Updating runtime config through cri with 
podcidr" CIDR="192.168.0.0/24" Apr 21 10:08:12.315577 containerd[1723]: time="2026-04-21T10:08:12.315392642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:08:12.316217 kubelet[3169]: I0421 10:08:12.315570 3169 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:08:12.979833 systemd[1]: Created slice kubepods-besteffort-pod1717b7ca_da96_4b2b_b80e_7f5463a0907e.slice - libcontainer container kubepods-besteffort-pod1717b7ca_da96_4b2b_b80e_7f5463a0907e.slice. Apr 21 10:08:13.010413 kubelet[3169]: I0421 10:08:13.010373 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1717b7ca-da96-4b2b-b80e-7f5463a0907e-lib-modules\") pod \"kube-proxy-tkrp7\" (UID: \"1717b7ca-da96-4b2b-b80e-7f5463a0907e\") " pod="kube-system/kube-proxy-tkrp7" Apr 21 10:08:13.010413 kubelet[3169]: I0421 10:08:13.010414 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjmfd\" (UniqueName: \"kubernetes.io/projected/1717b7ca-da96-4b2b-b80e-7f5463a0907e-kube-api-access-pjmfd\") pod \"kube-proxy-tkrp7\" (UID: \"1717b7ca-da96-4b2b-b80e-7f5463a0907e\") " pod="kube-system/kube-proxy-tkrp7" Apr 21 10:08:13.010570 kubelet[3169]: I0421 10:08:13.010434 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1717b7ca-da96-4b2b-b80e-7f5463a0907e-kube-proxy\") pod \"kube-proxy-tkrp7\" (UID: \"1717b7ca-da96-4b2b-b80e-7f5463a0907e\") " pod="kube-system/kube-proxy-tkrp7" Apr 21 10:08:13.010570 kubelet[3169]: I0421 10:08:13.010450 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1717b7ca-da96-4b2b-b80e-7f5463a0907e-xtables-lock\") 
pod \"kube-proxy-tkrp7\" (UID: \"1717b7ca-da96-4b2b-b80e-7f5463a0907e\") " pod="kube-system/kube-proxy-tkrp7" Apr 21 10:08:13.120405 kubelet[3169]: E0421 10:08:13.120362 3169 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 21 10:08:13.120692 kubelet[3169]: E0421 10:08:13.120391 3169 projected.go:196] Error preparing data for projected volume kube-api-access-pjmfd for pod kube-system/kube-proxy-tkrp7: configmap "kube-root-ca.crt" not found Apr 21 10:08:13.120692 kubelet[3169]: E0421 10:08:13.120618 3169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1717b7ca-da96-4b2b-b80e-7f5463a0907e-kube-api-access-pjmfd podName:1717b7ca-da96-4b2b-b80e-7f5463a0907e nodeName:}" failed. No retries permitted until 2026-04-21 10:08:13.620584086 +0000 UTC m=+7.252096557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pjmfd" (UniqueName: "kubernetes.io/projected/1717b7ca-da96-4b2b-b80e-7f5463a0907e-kube-api-access-pjmfd") pod "kube-proxy-tkrp7" (UID: "1717b7ca-da96-4b2b-b80e-7f5463a0907e") : configmap "kube-root-ca.crt" not found Apr 21 10:08:13.551279 systemd[1]: Created slice kubepods-besteffort-pod43c6a726_3827_423b_9712_7e02b7bcabfa.slice - libcontainer container kubepods-besteffort-pod43c6a726_3827_423b_9712_7e02b7bcabfa.slice. 
Apr 21 10:08:13.615180 kubelet[3169]: I0421 10:08:13.615075 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z244m\" (UniqueName: \"kubernetes.io/projected/43c6a726-3827-423b-9712-7e02b7bcabfa-kube-api-access-z244m\") pod \"tigera-operator-5588576f44-psf7p\" (UID: \"43c6a726-3827-423b-9712-7e02b7bcabfa\") " pod="tigera-operator/tigera-operator-5588576f44-psf7p" Apr 21 10:08:13.615180 kubelet[3169]: I0421 10:08:13.615143 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43c6a726-3827-423b-9712-7e02b7bcabfa-var-lib-calico\") pod \"tigera-operator-5588576f44-psf7p\" (UID: \"43c6a726-3827-423b-9712-7e02b7bcabfa\") " pod="tigera-operator/tigera-operator-5588576f44-psf7p" Apr 21 10:08:13.863581 containerd[1723]: time="2026-04-21T10:08:13.863136128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-psf7p,Uid:43c6a726-3827-423b-9712-7e02b7bcabfa,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:08:13.893429 containerd[1723]: time="2026-04-21T10:08:13.893015011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkrp7,Uid:1717b7ca-da96-4b2b-b80e-7f5463a0907e,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:13.906680 containerd[1723]: time="2026-04-21T10:08:13.906598834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:13.906680 containerd[1723]: time="2026-04-21T10:08:13.906652474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:13.908131 containerd[1723]: time="2026-04-21T10:08:13.906798114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:13.908131 containerd[1723]: time="2026-04-21T10:08:13.906885674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:13.929392 systemd[1]: Started cri-containerd-81b75d666f4bf5b36dce3c75b00a9c5829db351dd4a8748ac92a0ec1e87ef0d5.scope - libcontainer container 81b75d666f4bf5b36dce3c75b00a9c5829db351dd4a8748ac92a0ec1e87ef0d5. Apr 21 10:08:13.936795 containerd[1723]: time="2026-04-21T10:08:13.936406317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:13.936795 containerd[1723]: time="2026-04-21T10:08:13.936465677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:13.936795 containerd[1723]: time="2026-04-21T10:08:13.936492157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:13.936795 containerd[1723]: time="2026-04-21T10:08:13.936594717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:13.956361 systemd[1]: Started cri-containerd-c0930a031a15ed09cb394504f42e491d4534449f6222117f1dd1862fd27382d8.scope - libcontainer container c0930a031a15ed09cb394504f42e491d4534449f6222117f1dd1862fd27382d8. 
Apr 21 10:08:13.969611 containerd[1723]: time="2026-04-21T10:08:13.969389836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-psf7p,Uid:43c6a726-3827-423b-9712-7e02b7bcabfa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81b75d666f4bf5b36dce3c75b00a9c5829db351dd4a8748ac92a0ec1e87ef0d5\"" Apr 21 10:08:13.975395 containerd[1723]: time="2026-04-21T10:08:13.975184949Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:08:13.986611 containerd[1723]: time="2026-04-21T10:08:13.986517455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkrp7,Uid:1717b7ca-da96-4b2b-b80e-7f5463a0907e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0930a031a15ed09cb394504f42e491d4534449f6222117f1dd1862fd27382d8\"" Apr 21 10:08:13.995736 containerd[1723]: time="2026-04-21T10:08:13.995611684Z" level=info msg="CreateContainer within sandbox \"c0930a031a15ed09cb394504f42e491d4534449f6222117f1dd1862fd27382d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:08:14.033896 containerd[1723]: time="2026-04-21T10:08:14.033846597Z" level=info msg="CreateContainer within sandbox \"c0930a031a15ed09cb394504f42e491d4534449f6222117f1dd1862fd27382d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5157ed89d64000fbfebfddc90fe22d6cf8717836dfa8543610a274a67a0fe10d\"" Apr 21 10:08:14.036212 containerd[1723]: time="2026-04-21T10:08:14.036168194Z" level=info msg="StartContainer for \"5157ed89d64000fbfebfddc90fe22d6cf8717836dfa8543610a274a67a0fe10d\"" Apr 21 10:08:14.060416 systemd[1]: Started cri-containerd-5157ed89d64000fbfebfddc90fe22d6cf8717836dfa8543610a274a67a0fe10d.scope - libcontainer container 5157ed89d64000fbfebfddc90fe22d6cf8717836dfa8543610a274a67a0fe10d. 
Apr 21 10:08:14.089832 containerd[1723]: time="2026-04-21T10:08:14.089694367Z" level=info msg="StartContainer for \"5157ed89d64000fbfebfddc90fe22d6cf8717836dfa8543610a274a67a0fe10d\" returns successfully" Apr 21 10:08:14.549639 kubelet[3169]: I0421 10:08:14.549567 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tkrp7" podStartSLOduration=2.549550559 podStartE2EDuration="2.549550559s" podCreationTimestamp="2026-04-21 10:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:08:14.538100013 +0000 UTC m=+8.169612524" watchObservedRunningTime="2026-04-21 10:08:14.549550559 +0000 UTC m=+8.181063030" Apr 21 10:08:15.415274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981176978.mount: Deactivated successfully. Apr 21 10:08:16.090609 containerd[1723]: time="2026-04-21T10:08:16.090557612Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:16.092784 containerd[1723]: time="2026-04-21T10:08:16.092748850Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 21 10:08:16.095524 containerd[1723]: time="2026-04-21T10:08:16.095342206Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:16.099679 containerd[1723]: time="2026-04-21T10:08:16.099630881Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:16.100582 containerd[1723]: time="2026-04-21T10:08:16.100283080Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.125049411s" Apr 21 10:08:16.100582 containerd[1723]: time="2026-04-21T10:08:16.100313920Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 21 10:08:16.106889 containerd[1723]: time="2026-04-21T10:08:16.106853352Z" level=info msg="CreateContainer within sandbox \"81b75d666f4bf5b36dce3c75b00a9c5829db351dd4a8748ac92a0ec1e87ef0d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:08:16.149844 containerd[1723]: time="2026-04-21T10:08:16.149798219Z" level=info msg="CreateContainer within sandbox \"81b75d666f4bf5b36dce3c75b00a9c5829db351dd4a8748ac92a0ec1e87ef0d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be\"" Apr 21 10:08:16.150518 containerd[1723]: time="2026-04-21T10:08:16.150408178Z" level=info msg="StartContainer for \"44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be\"" Apr 21 10:08:16.186453 systemd[1]: Started cri-containerd-44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be.scope - libcontainer container 44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be. Apr 21 10:08:16.211766 containerd[1723]: time="2026-04-21T10:08:16.211561903Z" level=info msg="StartContainer for \"44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be\" returns successfully" Apr 21 10:08:17.130627 systemd[1]: run-containerd-runc-k8s.io-44d614ea76b708959fce298f2de7132b6fe760b26c1f1cf0a1168355b5fd01be-runc.DerId4.mount: Deactivated successfully. 
Apr 21 10:08:21.929026 sudo[2193]: pam_unix(sudo:session): session closed for user root Apr 21 10:08:21.995240 kubelet[3169]: I0421 10:08:21.995176 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-psf7p" podStartSLOduration=6.866792038 podStartE2EDuration="8.995158285s" podCreationTimestamp="2026-04-21 10:08:13 +0000 UTC" firstStartedPulling="2026-04-21 10:08:13.972820952 +0000 UTC m=+7.604333423" lastFinishedPulling="2026-04-21 10:08:16.101187239 +0000 UTC m=+9.732699670" observedRunningTime="2026-04-21 10:08:16.55329804 +0000 UTC m=+10.184810511" watchObservedRunningTime="2026-04-21 10:08:21.995158285 +0000 UTC m=+15.626670796" Apr 21 10:08:22.080459 sshd[2190]: pam_unix(sshd:session): session closed for user core Apr 21 10:08:22.084477 systemd-logind[1682]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:08:22.085482 systemd[1]: sshd@6-10.0.0.5:22-20.229.252.112:36878.service: Deactivated successfully. Apr 21 10:08:22.088136 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:08:22.088369 systemd[1]: session-9.scope: Consumed 6.896s CPU time, 152.9M memory peak, 0B memory swap peak. Apr 21 10:08:22.090828 systemd-logind[1682]: Removed session 9. Apr 21 10:08:32.006929 systemd[1]: Created slice kubepods-besteffort-poda30f8a87_8880_4477_91df_326c962bc4ad.slice - libcontainer container kubepods-besteffort-poda30f8a87_8880_4477_91df_326c962bc4ad.slice. Apr 21 10:08:32.108256 systemd[1]: Created slice kubepods-besteffort-podcf1688b7_1cb2_4c37_bdc8_811a6bb5254a.slice - libcontainer container kubepods-besteffort-podcf1688b7_1cb2_4c37_bdc8_811a6bb5254a.slice. 
Apr 21 10:08:32.127769 kubelet[3169]: I0421 10:08:32.127705 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4g2\" (UniqueName: \"kubernetes.io/projected/a30f8a87-8880-4477-91df-326c962bc4ad-kube-api-access-nw4g2\") pod \"calico-typha-7b9f468549-8ctvz\" (UID: \"a30f8a87-8880-4477-91df-326c962bc4ad\") " pod="calico-system/calico-typha-7b9f468549-8ctvz" Apr 21 10:08:32.127769 kubelet[3169]: I0421 10:08:32.127773 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-tigera-ca-bundle\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128158 kubelet[3169]: I0421 10:08:32.127793 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-policysync\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128158 kubelet[3169]: I0421 10:08:32.127808 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30f8a87-8880-4477-91df-326c962bc4ad-tigera-ca-bundle\") pod \"calico-typha-7b9f468549-8ctvz\" (UID: \"a30f8a87-8880-4477-91df-326c962bc4ad\") " pod="calico-system/calico-typha-7b9f468549-8ctvz" Apr 21 10:08:32.128158 kubelet[3169]: I0421 10:08:32.127825 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-lib-modules\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 
10:08:32.128158 kubelet[3169]: I0421 10:08:32.127845 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-nodeproc\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128158 kubelet[3169]: I0421 10:08:32.127859 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-var-run-calico\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128288 kubelet[3169]: I0421 10:08:32.127872 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-bpffs\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128288 kubelet[3169]: I0421 10:08:32.127884 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-node-certs\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128288 kubelet[3169]: I0421 10:08:32.127897 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-var-lib-calico\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128288 kubelet[3169]: I0421 10:08:32.127912 3169 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-cni-bin-dir\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128288 kubelet[3169]: I0421 10:08:32.127925 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-cni-log-dir\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128392 kubelet[3169]: I0421 10:08:32.127941 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-cni-net-dir\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128392 kubelet[3169]: I0421 10:08:32.127954 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-xtables-lock\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128392 kubelet[3169]: I0421 10:08:32.127971 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lkjl\" (UniqueName: \"kubernetes.io/projected/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-kube-api-access-8lkjl\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128392 kubelet[3169]: I0421 10:08:32.127985 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"typha-certs\" (UniqueName: \"kubernetes.io/secret/a30f8a87-8880-4477-91df-326c962bc4ad-typha-certs\") pod \"calico-typha-7b9f468549-8ctvz\" (UID: \"a30f8a87-8880-4477-91df-326c962bc4ad\") " pod="calico-system/calico-typha-7b9f468549-8ctvz" Apr 21 10:08:32.128392 kubelet[3169]: I0421 10:08:32.127999 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-flexvol-driver-host\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.128499 kubelet[3169]: I0421 10:08:32.128014 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/cf1688b7-1cb2-4c37-bdc8-811a6bb5254a-sys-fs\") pod \"calico-node-zm2d5\" (UID: \"cf1688b7-1cb2-4c37-bdc8-811a6bb5254a\") " pod="calico-system/calico-node-zm2d5" Apr 21 10:08:32.209134 kubelet[3169]: E0421 10:08:32.208974 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:32.229306 kubelet[3169]: I0421 10:08:32.228843 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca3b0e51-e445-4caa-9b05-d450087178fc-kubelet-dir\") pod \"csi-node-driver-qvhct\" (UID: \"ca3b0e51-e445-4caa-9b05-d450087178fc\") " pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:32.229306 kubelet[3169]: I0421 10:08:32.228885 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/ca3b0e51-e445-4caa-9b05-d450087178fc-registration-dir\") pod \"csi-node-driver-qvhct\" (UID: \"ca3b0e51-e445-4caa-9b05-d450087178fc\") " pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:32.229306 kubelet[3169]: I0421 10:08:32.228945 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfwd\" (UniqueName: \"kubernetes.io/projected/ca3b0e51-e445-4caa-9b05-d450087178fc-kube-api-access-tgfwd\") pod \"csi-node-driver-qvhct\" (UID: \"ca3b0e51-e445-4caa-9b05-d450087178fc\") " pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:32.229306 kubelet[3169]: I0421 10:08:32.228990 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca3b0e51-e445-4caa-9b05-d450087178fc-varrun\") pod \"csi-node-driver-qvhct\" (UID: \"ca3b0e51-e445-4caa-9b05-d450087178fc\") " pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:32.229306 kubelet[3169]: I0421 10:08:32.229086 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca3b0e51-e445-4caa-9b05-d450087178fc-socket-dir\") pod \"csi-node-driver-qvhct\" (UID: \"ca3b0e51-e445-4caa-9b05-d450087178fc\") " pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:32.258283 kubelet[3169]: E0421 10:08:32.256184 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.258283 kubelet[3169]: W0421 10:08:32.256340 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.258283 kubelet[3169]: E0421 10:08:32.256373 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.263839 kubelet[3169]: E0421 10:08:32.263799 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.263839 kubelet[3169]: W0421 10:08:32.263830 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.263969 kubelet[3169]: E0421 10:08:32.263849 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.264158 kubelet[3169]: E0421 10:08:32.264078 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.264158 kubelet[3169]: W0421 10:08:32.264092 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.264158 kubelet[3169]: E0421 10:08:32.264101 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.264907 kubelet[3169]: E0421 10:08:32.264886 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.264907 kubelet[3169]: W0421 10:08:32.264902 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.264996 kubelet[3169]: E0421 10:08:32.264916 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.265432 kubelet[3169]: E0421 10:08:32.265413 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.265432 kubelet[3169]: W0421 10:08:32.265429 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.265645 kubelet[3169]: E0421 10:08:32.265442 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.266021 kubelet[3169]: E0421 10:08:32.266002 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.266021 kubelet[3169]: W0421 10:08:32.266019 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.266090 kubelet[3169]: E0421 10:08:32.266031 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.266311 kubelet[3169]: E0421 10:08:32.266296 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.266311 kubelet[3169]: W0421 10:08:32.266309 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.266393 kubelet[3169]: E0421 10:08:32.266320 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.302572 kubelet[3169]: E0421 10:08:32.297141 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.302572 kubelet[3169]: W0421 10:08:32.297163 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.302572 kubelet[3169]: E0421 10:08:32.297181 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.315288 kubelet[3169]: E0421 10:08:32.311298 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.317191 kubelet[3169]: W0421 10:08:32.315391 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.317191 kubelet[3169]: E0421 10:08:32.315425 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.319955 containerd[1723]: time="2026-04-21T10:08:32.319924985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f468549-8ctvz,Uid:a30f8a87-8880-4477-91df-326c962bc4ad,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:32.329701 kubelet[3169]: E0421 10:08:32.329543 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.329701 kubelet[3169]: W0421 10:08:32.329567 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.329701 kubelet[3169]: E0421 10:08:32.329587 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.329997 kubelet[3169]: E0421 10:08:32.329789 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.329997 kubelet[3169]: W0421 10:08:32.329797 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.329997 kubelet[3169]: E0421 10:08:32.329806 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.330073 kubelet[3169]: E0421 10:08:32.330034 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.330073 kubelet[3169]: W0421 10:08:32.330046 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.330073 kubelet[3169]: E0421 10:08:32.330065 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.331531 kubelet[3169]: E0421 10:08:32.331499 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.331531 kubelet[3169]: W0421 10:08:32.331525 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.331653 kubelet[3169]: E0421 10:08:32.331545 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.331737 kubelet[3169]: E0421 10:08:32.331721 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.331737 kubelet[3169]: W0421 10:08:32.331733 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.331797 kubelet[3169]: E0421 10:08:32.331742 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.331957 kubelet[3169]: E0421 10:08:32.331945 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.331957 kubelet[3169]: W0421 10:08:32.331955 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.331965 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332488 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.333403 kubelet[3169]: W0421 10:08:32.332499 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332512 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332685 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.333403 kubelet[3169]: W0421 10:08:32.332693 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332703 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332862 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.333403 kubelet[3169]: W0421 10:08:32.332870 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.333403 kubelet[3169]: E0421 10:08:32.332878 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.333690 kubelet[3169]: E0421 10:08:32.333055 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.333690 kubelet[3169]: W0421 10:08:32.333063 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.333690 kubelet[3169]: E0421 10:08:32.333071 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.334346 kubelet[3169]: E0421 10:08:32.334307 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.334346 kubelet[3169]: W0421 10:08:32.334330 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.334346 kubelet[3169]: E0421 10:08:32.334343 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.334652 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336381 kubelet[3169]: W0421 10:08:32.334663 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.334673 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.334884 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336381 kubelet[3169]: W0421 10:08:32.334895 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.334904 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.335132 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336381 kubelet[3169]: W0421 10:08:32.335143 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.335153 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.336381 kubelet[3169]: E0421 10:08:32.335373 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336597 kubelet[3169]: W0421 10:08:32.335384 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.335393 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.335556 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336597 kubelet[3169]: W0421 10:08:32.335564 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.335578 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.335762 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336597 kubelet[3169]: W0421 10:08:32.335770 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.335780 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.336597 kubelet[3169]: E0421 10:08:32.336129 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.336597 kubelet[3169]: W0421 10:08:32.336143 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.336807 kubelet[3169]: E0421 10:08:32.336164 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.337627 kubelet[3169]: E0421 10:08:32.337456 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.337627 kubelet[3169]: W0421 10:08:32.337475 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.337627 kubelet[3169]: E0421 10:08:32.337496 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.338642 kubelet[3169]: E0421 10:08:32.338624 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.339331 kubelet[3169]: W0421 10:08:32.339145 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.339331 kubelet[3169]: E0421 10:08:32.339173 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.339970 kubelet[3169]: E0421 10:08:32.339839 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.339970 kubelet[3169]: W0421 10:08:32.339855 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.339970 kubelet[3169]: E0421 10:08:32.339869 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.340306 kubelet[3169]: E0421 10:08:32.340292 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.340583 kubelet[3169]: W0421 10:08:32.340562 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.340692 kubelet[3169]: E0421 10:08:32.340678 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.341281 kubelet[3169]: E0421 10:08:32.341265 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.341369 kubelet[3169]: W0421 10:08:32.341358 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.341428 kubelet[3169]: E0421 10:08:32.341417 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.342372 kubelet[3169]: E0421 10:08:32.342347 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.342372 kubelet[3169]: W0421 10:08:32.342366 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.342475 kubelet[3169]: E0421 10:08:32.342380 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:32.342669 kubelet[3169]: E0421 10:08:32.342599 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.342669 kubelet[3169]: W0421 10:08:32.342616 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.342669 kubelet[3169]: E0421 10:08:32.342626 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.361466 kubelet[3169]: E0421 10:08:32.361434 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:32.361466 kubelet[3169]: W0421 10:08:32.361457 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:32.362371 kubelet[3169]: E0421 10:08:32.361949 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:32.374383 containerd[1723]: time="2026-04-21T10:08:32.374307157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:32.374542 containerd[1723]: time="2026-04-21T10:08:32.374404157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:32.374542 containerd[1723]: time="2026-04-21T10:08:32.374443877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:32.374666 containerd[1723]: time="2026-04-21T10:08:32.374552237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:32.401816 systemd[1]: Started cri-containerd-c84981f06abbf7b6f4ffcf0bdb95d3237f699808b945e1413cd05df47d3938a6.scope - libcontainer container c84981f06abbf7b6f4ffcf0bdb95d3237f699808b945e1413cd05df47d3938a6. Apr 21 10:08:32.418898 containerd[1723]: time="2026-04-21T10:08:32.417792702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm2d5,Uid:cf1688b7-1cb2-4c37-bdc8-811a6bb5254a,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:32.434530 containerd[1723]: time="2026-04-21T10:08:32.434486922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f468549-8ctvz,Uid:a30f8a87-8880-4477-91df-326c962bc4ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"c84981f06abbf7b6f4ffcf0bdb95d3237f699808b945e1413cd05df47d3938a6\"" Apr 21 10:08:32.436052 containerd[1723]: time="2026-04-21T10:08:32.436023080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:08:32.459462 containerd[1723]: time="2026-04-21T10:08:32.459328010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:32.459462 containerd[1723]: time="2026-04-21T10:08:32.459378850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:32.459462 containerd[1723]: time="2026-04-21T10:08:32.459389250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:32.459759 containerd[1723]: time="2026-04-21T10:08:32.459474970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:32.473356 systemd[1]: Started cri-containerd-aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a.scope - libcontainer container aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a. Apr 21 10:08:32.501178 containerd[1723]: time="2026-04-21T10:08:32.501072878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm2d5,Uid:cf1688b7-1cb2-4c37-bdc8-811a6bb5254a,Namespace:calico-system,Attempt:0,} returns sandbox id \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\"" Apr 21 10:08:33.704619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099322777.mount: Deactivated successfully. Apr 21 10:08:34.335344 containerd[1723]: time="2026-04-21T10:08:34.335291936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:34.338860 containerd[1723]: time="2026-04-21T10:08:34.338694972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 21 10:08:34.341964 containerd[1723]: time="2026-04-21T10:08:34.341934288Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:34.347278 containerd[1723]: time="2026-04-21T10:08:34.346644522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:34.347425 containerd[1723]: time="2026-04-21T10:08:34.347401441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.911128042s" Apr 21 10:08:34.347522 containerd[1723]: time="2026-04-21T10:08:34.347506961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 21 10:08:34.349061 containerd[1723]: time="2026-04-21T10:08:34.349037599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:08:34.368235 containerd[1723]: time="2026-04-21T10:08:34.368182976Z" level=info msg="CreateContainer within sandbox \"c84981f06abbf7b6f4ffcf0bdb95d3237f699808b945e1413cd05df47d3938a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:08:34.390994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941342306.mount: Deactivated successfully. Apr 21 10:08:34.402189 containerd[1723]: time="2026-04-21T10:08:34.402073934Z" level=info msg="CreateContainer within sandbox \"c84981f06abbf7b6f4ffcf0bdb95d3237f699808b945e1413cd05df47d3938a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5111b10fae4147b5a5835c1258d26b9a43477b755ca5875edb0f7a1f883c67b\"" Apr 21 10:08:34.403152 containerd[1723]: time="2026-04-21T10:08:34.403109813Z" level=info msg="StartContainer for \"e5111b10fae4147b5a5835c1258d26b9a43477b755ca5875edb0f7a1f883c67b\"" Apr 21 10:08:34.429352 systemd[1]: Started cri-containerd-e5111b10fae4147b5a5835c1258d26b9a43477b755ca5875edb0f7a1f883c67b.scope - libcontainer container e5111b10fae4147b5a5835c1258d26b9a43477b755ca5875edb0f7a1f883c67b. 
Apr 21 10:08:34.464659 containerd[1723]: time="2026-04-21T10:08:34.464331778Z" level=info msg="StartContainer for \"e5111b10fae4147b5a5835c1258d26b9a43477b755ca5875edb0f7a1f883c67b\" returns successfully" Apr 21 10:08:34.489632 kubelet[3169]: E0421 10:08:34.488469 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:34.642730 kubelet[3169]: E0421 10:08:34.642507 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.642730 kubelet[3169]: W0421 10:08:34.642530 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.642730 kubelet[3169]: E0421 10:08:34.642549 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.645638 kubelet[3169]: E0421 10:08:34.645299 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.645638 kubelet[3169]: W0421 10:08:34.645325 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.645638 kubelet[3169]: E0421 10:08:34.645374 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.646130 kubelet[3169]: E0421 10:08:34.645855 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.646130 kubelet[3169]: W0421 10:08:34.645874 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.646130 kubelet[3169]: E0421 10:08:34.645886 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.646624 kubelet[3169]: E0421 10:08:34.646460 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.646624 kubelet[3169]: W0421 10:08:34.646474 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.646624 kubelet[3169]: E0421 10:08:34.646485 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.646871 kubelet[3169]: E0421 10:08:34.646779 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.646871 kubelet[3169]: W0421 10:08:34.646791 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.646871 kubelet[3169]: E0421 10:08:34.646801 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.647190 kubelet[3169]: E0421 10:08:34.647062 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.647190 kubelet[3169]: W0421 10:08:34.647072 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.647190 kubelet[3169]: E0421 10:08:34.647081 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.647463 kubelet[3169]: E0421 10:08:34.647367 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.647463 kubelet[3169]: W0421 10:08:34.647381 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.647463 kubelet[3169]: E0421 10:08:34.647391 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.647632 kubelet[3169]: E0421 10:08:34.647621 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.647740 kubelet[3169]: W0421 10:08:34.647680 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.647740 kubelet[3169]: E0421 10:08:34.647694 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.648074 kubelet[3169]: E0421 10:08:34.647981 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.648074 kubelet[3169]: W0421 10:08:34.647993 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.648074 kubelet[3169]: E0421 10:08:34.648003 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.648257 kubelet[3169]: E0421 10:08:34.648246 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.648360 kubelet[3169]: W0421 10:08:34.648310 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.648360 kubelet[3169]: E0421 10:08:34.648325 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.648665 kubelet[3169]: E0421 10:08:34.648605 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.648665 kubelet[3169]: W0421 10:08:34.648616 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.648665 kubelet[3169]: E0421 10:08:34.648626 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.648988 kubelet[3169]: E0421 10:08:34.648922 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.648988 kubelet[3169]: W0421 10:08:34.648932 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.648988 kubelet[3169]: E0421 10:08:34.648942 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.650264 kubelet[3169]: E0421 10:08:34.650135 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.650264 kubelet[3169]: W0421 10:08:34.650150 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.650264 kubelet[3169]: E0421 10:08:34.650182 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.650621 kubelet[3169]: E0421 10:08:34.650534 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.650621 kubelet[3169]: W0421 10:08:34.650546 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.650621 kubelet[3169]: E0421 10:08:34.650556 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.651034 kubelet[3169]: E0421 10:08:34.650836 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.651034 kubelet[3169]: W0421 10:08:34.650847 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.651034 kubelet[3169]: E0421 10:08:34.650857 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.651285 kubelet[3169]: E0421 10:08:34.651273 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.651428 kubelet[3169]: W0421 10:08:34.651345 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.651428 kubelet[3169]: E0421 10:08:34.651361 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.651760 kubelet[3169]: E0421 10:08:34.651696 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.651760 kubelet[3169]: W0421 10:08:34.651707 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.651760 kubelet[3169]: E0421 10:08:34.651718 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.651991 kubelet[3169]: E0421 10:08:34.651974 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.651991 kubelet[3169]: W0421 10:08:34.651989 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.652156 kubelet[3169]: E0421 10:08:34.652002 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.652721 kubelet[3169]: E0421 10:08:34.652696 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.652721 kubelet[3169]: W0421 10:08:34.652715 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.652934 kubelet[3169]: E0421 10:08:34.652731 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.653329 kubelet[3169]: E0421 10:08:34.653307 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.653329 kubelet[3169]: W0421 10:08:34.653322 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.653482 kubelet[3169]: E0421 10:08:34.653334 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.653959 kubelet[3169]: E0421 10:08:34.653939 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.654036 kubelet[3169]: W0421 10:08:34.653978 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.654036 kubelet[3169]: E0421 10:08:34.653992 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.655298 kubelet[3169]: E0421 10:08:34.655274 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.655298 kubelet[3169]: W0421 10:08:34.655289 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.655605 kubelet[3169]: E0421 10:08:34.655304 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.655726 kubelet[3169]: E0421 10:08:34.655710 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.655726 kubelet[3169]: W0421 10:08:34.655724 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.656077 kubelet[3169]: E0421 10:08:34.655736 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.656713 kubelet[3169]: E0421 10:08:34.656690 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.656713 kubelet[3169]: W0421 10:08:34.656708 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.656932 kubelet[3169]: E0421 10:08:34.656722 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.657374 kubelet[3169]: E0421 10:08:34.657352 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.657374 kubelet[3169]: W0421 10:08:34.657369 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.657579 kubelet[3169]: E0421 10:08:34.657382 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.658299 kubelet[3169]: E0421 10:08:34.658281 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.658299 kubelet[3169]: W0421 10:08:34.658296 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.658448 kubelet[3169]: E0421 10:08:34.658309 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.658596 kubelet[3169]: E0421 10:08:34.658582 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.658643 kubelet[3169]: W0421 10:08:34.658595 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.658643 kubelet[3169]: E0421 10:08:34.658607 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.658850 kubelet[3169]: E0421 10:08:34.658834 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.658850 kubelet[3169]: W0421 10:08:34.658846 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.658936 kubelet[3169]: E0421 10:08:34.658857 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.659622 kubelet[3169]: E0421 10:08:34.659601 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.659622 kubelet[3169]: W0421 10:08:34.659617 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.659717 kubelet[3169]: E0421 10:08:34.659631 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.661347 kubelet[3169]: E0421 10:08:34.661321 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.661347 kubelet[3169]: W0421 10:08:34.661341 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.661507 kubelet[3169]: E0421 10:08:34.661356 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.661667 kubelet[3169]: E0421 10:08:34.661652 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.661667 kubelet[3169]: W0421 10:08:34.661665 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.661738 kubelet[3169]: E0421 10:08:34.661676 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:34.661884 kubelet[3169]: E0421 10:08:34.661870 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.661919 kubelet[3169]: W0421 10:08:34.661885 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.661919 kubelet[3169]: E0421 10:08:34.661895 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:34.663818 kubelet[3169]: E0421 10:08:34.663295 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:34.663818 kubelet[3169]: W0421 10:08:34.663343 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:34.663818 kubelet[3169]: E0421 10:08:34.663358 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.552412 containerd[1723]: time="2026-04-21T10:08:35.552364368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:35.559932 containerd[1723]: time="2026-04-21T10:08:35.559882798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 21 10:08:35.564332 containerd[1723]: time="2026-04-21T10:08:35.563336234Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:35.567739 containerd[1723]: time="2026-04-21T10:08:35.567711989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:35.568607 containerd[1723]: time="2026-04-21T10:08:35.568274708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.218931029s" Apr 21 10:08:35.568701 containerd[1723]: time="2026-04-21T10:08:35.568686188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 21 10:08:35.575800 containerd[1723]: time="2026-04-21T10:08:35.575768539Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:08:35.576185 kubelet[3169]: I0421 10:08:35.576159 3169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:08:35.608894 containerd[1723]: time="2026-04-21T10:08:35.608848939Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d\"" Apr 21 10:08:35.610095 containerd[1723]: time="2026-04-21T10:08:35.610062257Z" level=info msg="StartContainer for \"12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d\"" Apr 21 10:08:35.638344 systemd[1]: Started cri-containerd-12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d.scope - libcontainer container 12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d. 
Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657000 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658236 kubelet[3169]: W0421 10:08:35.657025 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657057 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657271 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658236 kubelet[3169]: W0421 10:08:35.657279 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657288 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657514 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658236 kubelet[3169]: W0421 10:08:35.657522 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657542 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.658236 kubelet[3169]: E0421 10:08:35.657715 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658555 kubelet[3169]: W0421 10:08:35.657725 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.657733 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.657933 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658555 kubelet[3169]: W0421 10:08:35.657942 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.657950 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.658113 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658555 kubelet[3169]: W0421 10:08:35.658121 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.658130 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.658555 kubelet[3169]: E0421 10:08:35.658312 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658555 kubelet[3169]: W0421 10:08:35.658320 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658757 kubelet[3169]: E0421 10:08:35.658328 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.658757 kubelet[3169]: E0421 10:08:35.658492 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658757 kubelet[3169]: W0421 10:08:35.658499 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658757 kubelet[3169]: E0421 10:08:35.658507 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.658757 kubelet[3169]: E0421 10:08:35.658705 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658757 kubelet[3169]: W0421 10:08:35.658713 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658757 kubelet[3169]: E0421 10:08:35.658721 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.658892 kubelet[3169]: E0421 10:08:35.658880 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.658892 kubelet[3169]: W0421 10:08:35.658887 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.658934 kubelet[3169]: E0421 10:08:35.658895 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.659093 kubelet[3169]: E0421 10:08:35.659080 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.659093 kubelet[3169]: W0421 10:08:35.659092 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.659194 kubelet[3169]: E0421 10:08:35.659101 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.659312 kubelet[3169]: E0421 10:08:35.659286 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.659312 kubelet[3169]: W0421 10:08:35.659310 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.659366 kubelet[3169]: E0421 10:08:35.659321 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.659563 kubelet[3169]: E0421 10:08:35.659545 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.659563 kubelet[3169]: W0421 10:08:35.659557 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.659631 kubelet[3169]: E0421 10:08:35.659566 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.659841 kubelet[3169]: E0421 10:08:35.659724 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.659841 kubelet[3169]: W0421 10:08:35.659736 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.659841 kubelet[3169]: E0421 10:08:35.659743 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.660054 kubelet[3169]: E0421 10:08:35.660045 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.660096 kubelet[3169]: W0421 10:08:35.660055 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.660096 kubelet[3169]: E0421 10:08:35.660065 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.661404 kubelet[3169]: E0421 10:08:35.661256 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.661404 kubelet[3169]: W0421 10:08:35.661273 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.661404 kubelet[3169]: E0421 10:08:35.661286 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.662003 kubelet[3169]: E0421 10:08:35.661844 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.662003 kubelet[3169]: W0421 10:08:35.661857 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.662003 kubelet[3169]: E0421 10:08:35.661869 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.662362 kubelet[3169]: E0421 10:08:35.662226 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.662362 kubelet[3169]: W0421 10:08:35.662238 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.662362 kubelet[3169]: E0421 10:08:35.662249 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.662882 kubelet[3169]: E0421 10:08:35.662785 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.662882 kubelet[3169]: W0421 10:08:35.662796 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.662882 kubelet[3169]: E0421 10:08:35.662807 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.663593 kubelet[3169]: E0421 10:08:35.663580 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.663763 kubelet[3169]: W0421 10:08:35.663663 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.663763 kubelet[3169]: E0421 10:08:35.663682 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.664062 kubelet[3169]: E0421 10:08:35.663932 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.664062 kubelet[3169]: W0421 10:08:35.663944 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.664062 kubelet[3169]: E0421 10:08:35.663955 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.665050 kubelet[3169]: E0421 10:08:35.664938 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.665050 kubelet[3169]: W0421 10:08:35.664951 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.665050 kubelet[3169]: E0421 10:08:35.664963 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.665256 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.666830 kubelet[3169]: W0421 10:08:35.665266 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.665276 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.665625 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.666830 kubelet[3169]: W0421 10:08:35.665635 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.665660 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.666193 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.666830 kubelet[3169]: W0421 10:08:35.666322 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.666830 kubelet[3169]: E0421 10:08:35.666345 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.667568 kubelet[3169]: E0421 10:08:35.667257 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.667568 kubelet[3169]: W0421 10:08:35.667268 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.667568 kubelet[3169]: E0421 10:08:35.667496 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.667919 containerd[1723]: time="2026-04-21T10:08:35.667696267Z" level=info msg="StartContainer for \"12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d\" returns successfully" Apr 21 10:08:35.668282 kubelet[3169]: E0421 10:08:35.668196 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.668282 kubelet[3169]: W0421 10:08:35.668253 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.668282 kubelet[3169]: E0421 10:08:35.668264 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.668735 kubelet[3169]: E0421 10:08:35.668721 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.668904 kubelet[3169]: W0421 10:08:35.668774 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.668904 kubelet[3169]: E0421 10:08:35.668787 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.671132 kubelet[3169]: E0421 10:08:35.671119 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.671351 kubelet[3169]: W0421 10:08:35.671234 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.671351 kubelet[3169]: E0421 10:08:35.671251 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.672187 kubelet[3169]: E0421 10:08:35.672175 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.672918 kubelet[3169]: W0421 10:08:35.672769 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.672918 kubelet[3169]: E0421 10:08:35.672791 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.673013 kubelet[3169]: E0421 10:08:35.672995 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.673013 kubelet[3169]: W0421 10:08:35.673010 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.673066 kubelet[3169]: E0421 10:08:35.673019 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:08:35.673247 kubelet[3169]: E0421 10:08:35.673227 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.673247 kubelet[3169]: W0421 10:08:35.673240 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.673321 kubelet[3169]: E0421 10:08:35.673251 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.673691 kubelet[3169]: E0421 10:08:35.673671 3169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:08:35.673691 kubelet[3169]: W0421 10:08:35.673685 3169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:08:35.673767 kubelet[3169]: E0421 10:08:35.673696 3169 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:08:35.675395 systemd[1]: cri-containerd-12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d.scope: Deactivated successfully. Apr 21 10:08:36.354704 systemd[1]: run-containerd-runc-k8s.io-12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d-runc.WAc8z3.mount: Deactivated successfully. Apr 21 10:08:36.355035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d-rootfs.mount: Deactivated successfully. 
Apr 21 10:08:36.487724 kubelet[3169]: E0421 10:08:36.487475 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:36.602294 kubelet[3169]: I0421 10:08:36.602151 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b9f468549-8ctvz" podStartSLOduration=3.6890097649999998 podStartE2EDuration="5.602135524s" podCreationTimestamp="2026-04-21 10:08:31 +0000 UTC" firstStartedPulling="2026-04-21 10:08:32.4357668 +0000 UTC m=+26.067279271" lastFinishedPulling="2026-04-21 10:08:34.348892559 +0000 UTC m=+27.980405030" observedRunningTime="2026-04-21 10:08:34.614770074 +0000 UTC m=+28.246282545" watchObservedRunningTime="2026-04-21 10:08:36.602135524 +0000 UTC m=+30.233647995" Apr 21 10:08:36.753338 containerd[1723]: time="2026-04-21T10:08:36.753211739Z" level=info msg="shim disconnected" id=12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d namespace=k8s.io Apr 21 10:08:36.753338 containerd[1723]: time="2026-04-21T10:08:36.753268819Z" level=warning msg="cleaning up after shim disconnected" id=12ff57c3cb030cdae79878244153a4c23f2b795f40df5ad9c9aaa38c3dfe702d namespace=k8s.io Apr 21 10:08:36.753338 containerd[1723]: time="2026-04-21T10:08:36.753277939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:08:37.585287 containerd[1723]: time="2026-04-21T10:08:37.585043922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:08:38.488261 kubelet[3169]: E0421 10:08:38.487840 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:40.487880 kubelet[3169]: E0421 10:08:40.487564 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:41.803406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122639222.mount: Deactivated successfully. Apr 21 10:08:41.911221 containerd[1723]: time="2026-04-21T10:08:41.911166791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:41.914158 containerd[1723]: time="2026-04-21T10:08:41.914005988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 21 10:08:41.918365 containerd[1723]: time="2026-04-21T10:08:41.917042824Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:41.921538 containerd[1723]: time="2026-04-21T10:08:41.921475179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:41.922513 containerd[1723]: time="2026-04-21T10:08:41.922049058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.336964656s" 
Apr 21 10:08:41.922513 containerd[1723]: time="2026-04-21T10:08:41.922084578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 21 10:08:41.929803 containerd[1723]: time="2026-04-21T10:08:41.929747729Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:08:41.961702 containerd[1723]: time="2026-04-21T10:08:41.961609610Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94\"" Apr 21 10:08:41.963054 containerd[1723]: time="2026-04-21T10:08:41.962247889Z" level=info msg="StartContainer for \"1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94\"" Apr 21 10:08:41.996457 systemd[1]: Started cri-containerd-1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94.scope - libcontainer container 1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94. Apr 21 10:08:42.028696 containerd[1723]: time="2026-04-21T10:08:42.028654328Z" level=info msg="StartContainer for \"1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94\" returns successfully" Apr 21 10:08:42.063568 systemd[1]: cri-containerd-1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94.scope: Deactivated successfully. 
Apr 21 10:08:42.852632 kubelet[3169]: E0421 10:08:42.488212 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:42.802281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94-rootfs.mount: Deactivated successfully. Apr 21 10:08:43.668933 containerd[1723]: time="2026-04-21T10:08:43.668759533Z" level=info msg="shim disconnected" id=1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94 namespace=k8s.io Apr 21 10:08:43.668933 containerd[1723]: time="2026-04-21T10:08:43.668811973Z" level=warning msg="cleaning up after shim disconnected" id=1cf3c33f995ecebc463e89c5b37f6c40e2fa1db011f25169d112335ce0c31e94 namespace=k8s.io Apr 21 10:08:43.668933 containerd[1723]: time="2026-04-21T10:08:43.668819653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:08:44.306585 kubelet[3169]: I0421 10:08:44.306384 3169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:08:44.491291 kubelet[3169]: E0421 10:08:44.490112 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:44.599132 containerd[1723]: time="2026-04-21T10:08:44.598810309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:08:46.489926 kubelet[3169]: E0421 10:08:46.489874 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:46.945419 containerd[1723]: time="2026-04-21T10:08:46.944586202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:46.948037 containerd[1723]: time="2026-04-21T10:08:46.947996558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 21 10:08:46.950945 containerd[1723]: time="2026-04-21T10:08:46.950921914Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:46.954913 containerd[1723]: time="2026-04-21T10:08:46.954879509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:46.955757 containerd[1723]: time="2026-04-21T10:08:46.955729188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.356879759s" Apr 21 10:08:46.955832 containerd[1723]: time="2026-04-21T10:08:46.955758988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 21 10:08:46.963908 containerd[1723]: time="2026-04-21T10:08:46.963874378Z" level=info msg="CreateContainer within sandbox 
\"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:08:46.997554 containerd[1723]: time="2026-04-21T10:08:46.997487255Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157\"" Apr 21 10:08:46.999436 containerd[1723]: time="2026-04-21T10:08:46.998010694Z" level=info msg="StartContainer for \"9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157\"" Apr 21 10:08:47.036397 systemd[1]: Started cri-containerd-9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157.scope - libcontainer container 9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157. Apr 21 10:08:47.064900 containerd[1723]: time="2026-04-21T10:08:47.064679329Z" level=info msg="StartContainer for \"9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157\" returns successfully" Apr 21 10:08:48.252171 systemd[1]: cri-containerd-9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157.scope: Deactivated successfully. Apr 21 10:08:48.271980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157-rootfs.mount: Deactivated successfully. Apr 21 10:08:48.340420 kubelet[3169]: I0421 10:08:48.339793 3169 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 21 10:08:49.126279 systemd[1]: Created slice kubepods-besteffort-podca3b0e51_e445_4caa_9b05_d450087178fc.slice - libcontainer container kubepods-besteffort-podca3b0e51_e445_4caa_9b05_d450087178fc.slice. 
Apr 21 10:08:49.126888 containerd[1723]: time="2026-04-21T10:08:49.126770304Z" level=info msg="shim disconnected" id=9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157 namespace=k8s.io Apr 21 10:08:49.126888 containerd[1723]: time="2026-04-21T10:08:49.126829184Z" level=warning msg="cleaning up after shim disconnected" id=9a73732a923d62bc1430d48accb84b04e1ea234ee31bb4389d86b52788cb5157 namespace=k8s.io Apr 21 10:08:49.126888 containerd[1723]: time="2026-04-21T10:08:49.126838264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:08:49.141753 kubelet[3169]: I0421 10:08:49.141288 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99-config-volume\") pod \"coredns-66bc5c9577-8mkbc\" (UID: \"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99\") " pod="kube-system/coredns-66bc5c9577-8mkbc" Apr 21 10:08:49.141753 kubelet[3169]: I0421 10:08:49.141375 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-nginx-config\") pod \"whisker-766d5c7cc4-7z5lm\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.141753 kubelet[3169]: I0421 10:08:49.141393 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvr4n\" (UniqueName: \"kubernetes.io/projected/0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99-kube-api-access-hvr4n\") pod \"coredns-66bc5c9577-8mkbc\" (UID: \"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99\") " pod="kube-system/coredns-66bc5c9577-8mkbc" Apr 21 10:08:49.141753 kubelet[3169]: I0421 10:08:49.141414 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfb8b\" (UniqueName: 
\"kubernetes.io/projected/c19ce9da-a445-4b47-b1e9-d94a16ff8986-kube-api-access-jfb8b\") pod \"whisker-766d5c7cc4-7z5lm\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.144549 kubelet[3169]: I0421 10:08:49.142715 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-backend-key-pair\") pod \"whisker-766d5c7cc4-7z5lm\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.144549 kubelet[3169]: I0421 10:08:49.142749 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-ca-bundle\") pod \"whisker-766d5c7cc4-7z5lm\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.142944 systemd[1]: Created slice kubepods-burstable-pod0c4c4f7a_7f93_41ec_8ebb_ecaa56e9cc99.slice - libcontainer container kubepods-burstable-pod0c4c4f7a_7f93_41ec_8ebb_ecaa56e9cc99.slice. Apr 21 10:08:49.149914 containerd[1723]: time="2026-04-21T10:08:49.149875035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvhct,Uid:ca3b0e51-e445-4caa-9b05-d450087178fc,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.159302 systemd[1]: Created slice kubepods-besteffort-podc19ce9da_a445_4b47_b1e9_d94a16ff8986.slice - libcontainer container kubepods-besteffort-podc19ce9da_a445_4b47_b1e9_d94a16ff8986.slice. Apr 21 10:08:49.168338 systemd[1]: Created slice kubepods-besteffort-pod05b48f72_8534_4b03_b630_b58ade237fcc.slice - libcontainer container kubepods-besteffort-pod05b48f72_8534_4b03_b630_b58ade237fcc.slice. 
Apr 21 10:08:49.178030 systemd[1]: Created slice kubepods-besteffort-podfd0b3f4f_bf33_432f_baf4_3f22428628b4.slice - libcontainer container kubepods-besteffort-podfd0b3f4f_bf33_432f_baf4_3f22428628b4.slice. Apr 21 10:08:49.189850 systemd[1]: Created slice kubepods-besteffort-pod8cfed430_eeda_4f9a_8290_98a3835b5d7c.slice - libcontainer container kubepods-besteffort-pod8cfed430_eeda_4f9a_8290_98a3835b5d7c.slice. Apr 21 10:08:49.216558 systemd[1]: Created slice kubepods-burstable-pode7c5d3e9_033b_4479_b2b2_6500dd6b1041.slice - libcontainer container kubepods-burstable-pode7c5d3e9_033b_4479_b2b2_6500dd6b1041.slice. Apr 21 10:08:49.226697 systemd[1]: Created slice kubepods-besteffort-pod098e608f_9fb3_48c3_ba27_9bbae9770798.slice - libcontainer container kubepods-besteffort-pod098e608f_9fb3_48c3_ba27_9bbae9770798.slice. Apr 21 10:08:49.243952 kubelet[3169]: I0421 10:08:49.243909 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098e608f-9fb3-48c3-ba27-9bbae9770798-config\") pod \"goldmane-cccfbd5cf-5mhdm\" (UID: \"098e608f-9fb3-48c3-ba27-9bbae9770798\") " pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.243952 kubelet[3169]: I0421 10:08:49.243948 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/098e608f-9fb3-48c3-ba27-9bbae9770798-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-5mhdm\" (UID: \"098e608f-9fb3-48c3-ba27-9bbae9770798\") " pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.244112 kubelet[3169]: I0421 10:08:49.243967 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0b3f4f-bf33-432f-baf4-3f22428628b4-tigera-ca-bundle\") pod \"calico-kube-controllers-8d46b4c69-bmlm6\" (UID: \"fd0b3f4f-bf33-432f-baf4-3f22428628b4\") " 
pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" Apr 21 10:08:49.244112 kubelet[3169]: I0421 10:08:49.244020 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7c5d3e9-033b-4479-b2b2-6500dd6b1041-config-volume\") pod \"coredns-66bc5c9577-wht27\" (UID: \"e7c5d3e9-033b-4479-b2b2-6500dd6b1041\") " pod="kube-system/coredns-66bc5c9577-wht27" Apr 21 10:08:49.244112 kubelet[3169]: I0421 10:08:49.244035 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcrt\" (UniqueName: \"kubernetes.io/projected/098e608f-9fb3-48c3-ba27-9bbae9770798-kube-api-access-nxcrt\") pod \"goldmane-cccfbd5cf-5mhdm\" (UID: \"098e608f-9fb3-48c3-ba27-9bbae9770798\") " pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.244112 kubelet[3169]: I0421 10:08:49.244061 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbxk\" (UniqueName: \"kubernetes.io/projected/e7c5d3e9-033b-4479-b2b2-6500dd6b1041-kube-api-access-xdbxk\") pod \"coredns-66bc5c9577-wht27\" (UID: \"e7c5d3e9-033b-4479-b2b2-6500dd6b1041\") " pod="kube-system/coredns-66bc5c9577-wht27" Apr 21 10:08:49.244112 kubelet[3169]: I0421 10:08:49.244087 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmlmj\" (UniqueName: \"kubernetes.io/projected/fd0b3f4f-bf33-432f-baf4-3f22428628b4-kube-api-access-jmlmj\") pod \"calico-kube-controllers-8d46b4c69-bmlm6\" (UID: \"fd0b3f4f-bf33-432f-baf4-3f22428628b4\") " pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" Apr 21 10:08:49.244306 kubelet[3169]: I0421 10:08:49.244103 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/05b48f72-8534-4b03-b630-b58ade237fcc-calico-apiserver-certs\") pod \"calico-apiserver-787c77bcf4-fql5l\" (UID: \"05b48f72-8534-4b03-b630-b58ade237fcc\") " pod="calico-system/calico-apiserver-787c77bcf4-fql5l" Apr 21 10:08:49.244306 kubelet[3169]: I0421 10:08:49.244120 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8cfed430-eeda-4f9a-8290-98a3835b5d7c-calico-apiserver-certs\") pod \"calico-apiserver-787c77bcf4-qjb7k\" (UID: \"8cfed430-eeda-4f9a-8290-98a3835b5d7c\") " pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" Apr 21 10:08:49.244306 kubelet[3169]: I0421 10:08:49.244159 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrn6\" (UniqueName: \"kubernetes.io/projected/05b48f72-8534-4b03-b630-b58ade237fcc-kube-api-access-8lrn6\") pod \"calico-apiserver-787c77bcf4-fql5l\" (UID: \"05b48f72-8534-4b03-b630-b58ade237fcc\") " pod="calico-system/calico-apiserver-787c77bcf4-fql5l" Apr 21 10:08:49.244306 kubelet[3169]: I0421 10:08:49.244176 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq8r\" (UniqueName: \"kubernetes.io/projected/8cfed430-eeda-4f9a-8290-98a3835b5d7c-kube-api-access-mjq8r\") pod \"calico-apiserver-787c77bcf4-qjb7k\" (UID: \"8cfed430-eeda-4f9a-8290-98a3835b5d7c\") " pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" Apr 21 10:08:49.244306 kubelet[3169]: I0421 10:08:49.244189 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/098e608f-9fb3-48c3-ba27-9bbae9770798-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-5mhdm\" (UID: \"098e608f-9fb3-48c3-ba27-9bbae9770798\") " pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.283364 containerd[1723]: 
time="2026-04-21T10:08:49.283305385Z" level=error msg="Failed to destroy network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.285882 containerd[1723]: time="2026-04-21T10:08:49.283692904Z" level=error msg="encountered an error cleaning up failed sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.285882 containerd[1723]: time="2026-04-21T10:08:49.283744904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvhct,Uid:ca3b0e51-e445-4caa-9b05-d450087178fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.286000 kubelet[3169]: E0421 10:08:49.283970 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.286000 kubelet[3169]: E0421 10:08:49.284042 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:49.286000 kubelet[3169]: E0421 10:08:49.284061 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvhct" Apr 21 10:08:49.286098 kubelet[3169]: E0421 10:08:49.284106 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qvhct_calico-system(ca3b0e51-e445-4caa-9b05-d450087178fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qvhct_calico-system(ca3b0e51-e445-4caa-9b05-d450087178fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:49.286865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85-shm.mount: Deactivated successfully. 
Apr 21 10:08:49.462041 containerd[1723]: time="2026-04-21T10:08:49.461936837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8mkbc,Uid:0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:49.476247 containerd[1723]: time="2026-04-21T10:08:49.475986299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766d5c7cc4-7z5lm,Uid:c19ce9da-a445-4b47-b1e9-d94a16ff8986,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.482980 containerd[1723]: time="2026-04-21T10:08:49.482775291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-fql5l,Uid:05b48f72-8534-4b03-b630-b58ade237fcc,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.493134 containerd[1723]: time="2026-04-21T10:08:49.492923598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d46b4c69-bmlm6,Uid:fd0b3f4f-bf33-432f-baf4-3f22428628b4,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.511768 containerd[1723]: time="2026-04-21T10:08:49.511525854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-qjb7k,Uid:8cfed430-eeda-4f9a-8290-98a3835b5d7c,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.533239 containerd[1723]: time="2026-04-21T10:08:49.532778067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wht27,Uid:e7c5d3e9-033b-4479-b2b2-6500dd6b1041,Namespace:kube-system,Attempt:0,}" Apr 21 10:08:49.538389 containerd[1723]: time="2026-04-21T10:08:49.538265900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5mhdm,Uid:098e608f-9fb3-48c3-ba27-9bbae9770798,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:49.582125 containerd[1723]: time="2026-04-21T10:08:49.581929084Z" level=error msg="Failed to destroy network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.582564 containerd[1723]: time="2026-04-21T10:08:49.582427484Z" level=error msg="encountered an error cleaning up failed sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.582564 containerd[1723]: time="2026-04-21T10:08:49.582482964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8mkbc,Uid:0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.582712 kubelet[3169]: E0421 10:08:49.582678 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.583015 kubelet[3169]: E0421 10:08:49.582729 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-8mkbc" Apr 21 10:08:49.583015 kubelet[3169]: E0421 10:08:49.582748 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8mkbc" Apr 21 10:08:49.583015 kubelet[3169]: E0421 10:08:49.582795 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8mkbc_kube-system(0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8mkbc_kube-system(0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8mkbc" podUID="0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99" Apr 21 10:08:49.615918 kubelet[3169]: I0421 10:08:49.615886 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:08:49.618454 containerd[1723]: time="2026-04-21T10:08:49.618185518Z" level=info msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" Apr 21 10:08:49.618738 containerd[1723]: time="2026-04-21T10:08:49.618451238Z" level=info msg="Ensure that sandbox 29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85 in task-service has been cleanup successfully" Apr 21 10:08:49.625159 kubelet[3169]: 
I0421 10:08:49.624627 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:08:49.626884 containerd[1723]: time="2026-04-21T10:08:49.626847187Z" level=info msg="StopPodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" Apr 21 10:08:49.627376 containerd[1723]: time="2026-04-21T10:08:49.627263787Z" level=info msg="Ensure that sandbox 47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037 in task-service has been cleanup successfully" Apr 21 10:08:49.668910 containerd[1723]: time="2026-04-21T10:08:49.668863214Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:08:49.708333 containerd[1723]: time="2026-04-21T10:08:49.708273844Z" level=error msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" failed" error="failed to destroy network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.710009 kubelet[3169]: E0421 10:08:49.709969 3169 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:08:49.710118 kubelet[3169]: E0421 10:08:49.710024 3169 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85"} Apr 21 10:08:49.710118 kubelet[3169]: E0421 10:08:49.710071 3169 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca3b0e51-e445-4caa-9b05-d450087178fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:08:49.710118 kubelet[3169]: E0421 10:08:49.710100 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca3b0e51-e445-4caa-9b05-d450087178fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qvhct" podUID="ca3b0e51-e445-4caa-9b05-d450087178fc" Apr 21 10:08:49.734075 containerd[1723]: time="2026-04-21T10:08:49.733785491Z" level=info msg="CreateContainer within sandbox \"aaac2b15c08ed51f50e42cda5f1528f5ed7571bba69b042a790f951858eb022a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8bc413d396c593326249e2ebc430e0187b0e1308b6cf157afb6efd655750c3bf\"" Apr 21 10:08:49.738372 containerd[1723]: time="2026-04-21T10:08:49.738271645Z" level=info msg="StartContainer for \"8bc413d396c593326249e2ebc430e0187b0e1308b6cf157afb6efd655750c3bf\"" Apr 21 10:08:49.763731 containerd[1723]: time="2026-04-21T10:08:49.763676093Z" level=error msg="StopPodSandbox for 
\"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" failed" error="failed to destroy network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.764114 kubelet[3169]: E0421 10:08:49.763900 3169 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:08:49.764114 kubelet[3169]: E0421 10:08:49.763948 3169 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037"} Apr 21 10:08:49.764114 kubelet[3169]: E0421 10:08:49.763980 3169 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:08:49.764114 kubelet[3169]: E0421 10:08:49.764004 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8mkbc" podUID="0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99" Apr 21 10:08:49.806104 systemd[1]: Started cri-containerd-8bc413d396c593326249e2ebc430e0187b0e1308b6cf157afb6efd655750c3bf.scope - libcontainer container 8bc413d396c593326249e2ebc430e0187b0e1308b6cf157afb6efd655750c3bf. Apr 21 10:08:49.874034 containerd[1723]: time="2026-04-21T10:08:49.873808553Z" level=error msg="Failed to destroy network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.874678 containerd[1723]: time="2026-04-21T10:08:49.874301392Z" level=info msg="StartContainer for \"8bc413d396c593326249e2ebc430e0187b0e1308b6cf157afb6efd655750c3bf\" returns successfully" Apr 21 10:08:49.874851 containerd[1723]: time="2026-04-21T10:08:49.874362112Z" level=error msg="Failed to destroy network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.875386 containerd[1723]: time="2026-04-21T10:08:49.875188271Z" level=error msg="encountered an error cleaning up failed sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 21 10:08:49.875386 containerd[1723]: time="2026-04-21T10:08:49.875261671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d46b4c69-bmlm6,Uid:fd0b3f4f-bf33-432f-baf4-3f22428628b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.875508 kubelet[3169]: E0421 10:08:49.875455 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.875551 kubelet[3169]: E0421 10:08:49.875536 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" Apr 21 10:08:49.875583 kubelet[3169]: E0421 10:08:49.875555 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" Apr 21 10:08:49.875823 kubelet[3169]: E0421 10:08:49.875601 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8d46b4c69-bmlm6_calico-system(fd0b3f4f-bf33-432f-baf4-3f22428628b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8d46b4c69-bmlm6_calico-system(fd0b3f4f-bf33-432f-baf4-3f22428628b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" podUID="fd0b3f4f-bf33-432f-baf4-3f22428628b4" Apr 21 10:08:49.880541 containerd[1723]: time="2026-04-21T10:08:49.880502184Z" level=error msg="Failed to destroy network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.880819 containerd[1723]: time="2026-04-21T10:08:49.880788504Z" level=error msg="encountered an error cleaning up failed sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.880879 containerd[1723]: time="2026-04-21T10:08:49.880834264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766d5c7cc4-7z5lm,Uid:c19ce9da-a445-4b47-b1e9-d94a16ff8986,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.881858 kubelet[3169]: E0421 10:08:49.880992 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.881858 kubelet[3169]: E0421 10:08:49.881033 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.881858 kubelet[3169]: E0421 10:08:49.881049 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-766d5c7cc4-7z5lm" Apr 21 10:08:49.881960 kubelet[3169]: E0421 10:08:49.881086 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-766d5c7cc4-7z5lm_calico-system(c19ce9da-a445-4b47-b1e9-d94a16ff8986)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-766d5c7cc4-7z5lm_calico-system(c19ce9da-a445-4b47-b1e9-d94a16ff8986)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-766d5c7cc4-7z5lm" podUID="c19ce9da-a445-4b47-b1e9-d94a16ff8986" Apr 21 10:08:49.885531 containerd[1723]: time="2026-04-21T10:08:49.883359421Z" level=error msg="encountered an error cleaning up failed sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.885531 containerd[1723]: time="2026-04-21T10:08:49.883451461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5mhdm,Uid:098e608f-9fb3-48c3-ba27-9bbae9770798,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.885682 kubelet[3169]: E0421 10:08:49.885354 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.885682 kubelet[3169]: E0421 
10:08:49.885406 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.885682 kubelet[3169]: E0421 10:08:49.885436 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-5mhdm" Apr 21 10:08:49.885771 kubelet[3169]: E0421 10:08:49.885483 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-5mhdm_calico-system(098e608f-9fb3-48c3-ba27-9bbae9770798)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-5mhdm_calico-system(098e608f-9fb3-48c3-ba27-9bbae9770798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-5mhdm" podUID="098e608f-9fb3-48c3-ba27-9bbae9770798" Apr 21 10:08:49.904181 containerd[1723]: time="2026-04-21T10:08:49.904117634Z" level=error msg="Failed to destroy network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.905727 containerd[1723]: time="2026-04-21T10:08:49.905695432Z" level=error msg="encountered an error cleaning up failed sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.905869 containerd[1723]: time="2026-04-21T10:08:49.905848352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wht27,Uid:e7c5d3e9-033b-4479-b2b2-6500dd6b1041,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.906295 kubelet[3169]: E0421 10:08:49.906221 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.906434 kubelet[3169]: E0421 10:08:49.906413 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wht27" Apr 21 10:08:49.906526 kubelet[3169]: E0421 10:08:49.906510 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wht27" Apr 21 10:08:49.906635 kubelet[3169]: E0421 10:08:49.906614 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wht27_kube-system(e7c5d3e9-033b-4479-b2b2-6500dd6b1041)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wht27_kube-system(e7c5d3e9-033b-4479-b2b2-6500dd6b1041)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wht27" podUID="e7c5d3e9-033b-4479-b2b2-6500dd6b1041" Apr 21 10:08:49.908460 containerd[1723]: time="2026-04-21T10:08:49.908425949Z" level=error msg="Failed to destroy network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.908747 containerd[1723]: time="2026-04-21T10:08:49.908716628Z" level=error msg="encountered an error cleaning up failed sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.908804 containerd[1723]: time="2026-04-21T10:08:49.908769628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-fql5l,Uid:05b48f72-8534-4b03-b630-b58ade237fcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.908993 kubelet[3169]: E0421 10:08:49.908968 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.909509 kubelet[3169]: E0421 10:08:49.909473 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-787c77bcf4-fql5l" Apr 21 10:08:49.909595 kubelet[3169]: E0421 10:08:49.909510 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-787c77bcf4-fql5l" Apr 21 10:08:49.909595 kubelet[3169]: E0421 10:08:49.909556 3169 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-787c77bcf4-fql5l_calico-system(05b48f72-8534-4b03-b630-b58ade237fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-787c77bcf4-fql5l_calico-system(05b48f72-8534-4b03-b630-b58ade237fcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-787c77bcf4-fql5l" podUID="05b48f72-8534-4b03-b630-b58ade237fcc" Apr 21 10:08:49.921896 containerd[1723]: time="2026-04-21T10:08:49.921846212Z" level=error msg="Failed to destroy network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.922190 containerd[1723]: time="2026-04-21T10:08:49.922154211Z" level=error msg="encountered an error cleaning up failed sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.922270 containerd[1723]: time="2026-04-21T10:08:49.922209331Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-qjb7k,Uid:8cfed430-eeda-4f9a-8290-98a3835b5d7c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.922461 kubelet[3169]: E0421 10:08:49.922432 3169 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:08:49.922891 kubelet[3169]: E0421 10:08:49.922530 3169 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" Apr 21 10:08:49.922891 kubelet[3169]: E0421 10:08:49.922554 3169 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" Apr 21 10:08:49.922891 kubelet[3169]: E0421 10:08:49.922607 3169 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-787c77bcf4-qjb7k_calico-system(8cfed430-eeda-4f9a-8290-98a3835b5d7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-787c77bcf4-qjb7k_calico-system(8cfed430-eeda-4f9a-8290-98a3835b5d7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" podUID="8cfed430-eeda-4f9a-8290-98a3835b5d7c" Apr 21 10:08:50.627934 kubelet[3169]: I0421 10:08:50.627897 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:08:50.629573 containerd[1723]: time="2026-04-21T10:08:50.629322328Z" level=info msg="StopPodSandbox for \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\"" Apr 21 10:08:50.629573 containerd[1723]: time="2026-04-21T10:08:50.629488888Z" level=info msg="Ensure that sandbox 298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736 in task-service has been cleanup successfully" Apr 21 10:08:50.630914 kubelet[3169]: I0421 10:08:50.630883 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:08:50.631691 containerd[1723]: time="2026-04-21T10:08:50.631487165Z" level=info msg="StopPodSandbox for \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\"" Apr 21 10:08:50.631691 containerd[1723]: time="2026-04-21T10:08:50.631669605Z" level=info msg="Ensure that sandbox fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985 in task-service has been cleanup successfully" Apr 21 10:08:50.633801 
kubelet[3169]: I0421 10:08:50.633784 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:08:50.635364 containerd[1723]: time="2026-04-21T10:08:50.635193161Z" level=info msg="StopPodSandbox for \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\"" Apr 21 10:08:50.635601 containerd[1723]: time="2026-04-21T10:08:50.635370601Z" level=info msg="Ensure that sandbox dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd in task-service has been cleanup successfully" Apr 21 10:08:50.638264 kubelet[3169]: I0421 10:08:50.638246 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:08:50.641506 containerd[1723]: time="2026-04-21T10:08:50.641086234Z" level=info msg="StopPodSandbox for \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\"" Apr 21 10:08:50.641506 containerd[1723]: time="2026-04-21T10:08:50.641311273Z" level=info msg="Ensure that sandbox 64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c in task-service has been cleanup successfully" Apr 21 10:08:50.655781 kubelet[3169]: I0421 10:08:50.655750 3169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:08:50.661459 containerd[1723]: time="2026-04-21T10:08:50.661423729Z" level=info msg="StopPodSandbox for \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\"" Apr 21 10:08:50.662853 containerd[1723]: time="2026-04-21T10:08:50.662501967Z" level=info msg="Ensure that sandbox f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427 in task-service has been cleanup successfully" Apr 21 10:08:50.669644 kubelet[3169]: I0421 10:08:50.669450 3169 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:08:50.669851 containerd[1723]: time="2026-04-21T10:08:50.669805319Z" level=info msg="StopPodSandbox for \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\"" Apr 21 10:08:50.669989 containerd[1723]: time="2026-04-21T10:08:50.669968998Z" level=info msg="Ensure that sandbox 4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97 in task-service has been cleanup successfully" Apr 21 10:08:50.676839 kubelet[3169]: I0421 10:08:50.676731 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zm2d5" podStartSLOduration=4.2222216790000004 podStartE2EDuration="18.67671379s" podCreationTimestamp="2026-04-21 10:08:32 +0000 UTC" firstStartedPulling="2026-04-21 10:08:32.502366356 +0000 UTC m=+26.133878827" lastFinishedPulling="2026-04-21 10:08:46.956858467 +0000 UTC m=+40.588370938" observedRunningTime="2026-04-21 10:08:50.674974952 +0000 UTC m=+44.306487423" watchObservedRunningTime="2026-04-21 10:08:50.67671379 +0000 UTC m=+44.308226261" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.817 [INFO][4357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.818 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" iface="eth0" netns="/var/run/netns/cni-d625d18b-ddbd-7a3e-3aef-1f63dec0a048" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.820 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" iface="eth0" netns="/var/run/netns/cni-d625d18b-ddbd-7a3e-3aef-1f63dec0a048" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.823 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" iface="eth0" netns="/var/run/netns/cni-d625d18b-ddbd-7a3e-3aef-1f63dec0a048" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.823 [INFO][4357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.823 [INFO][4357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.902 [INFO][4450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.903 [INFO][4450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.904 [INFO][4450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.932 [WARNING][4450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.932 [INFO][4450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.936 [INFO][4450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:50.951807 containerd[1723]: 2026-04-21 10:08:50.947 [INFO][4357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:08:50.954584 containerd[1723]: time="2026-04-21T10:08:50.952304614Z" level=info msg="TearDown network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" successfully" Apr 21 10:08:50.954584 containerd[1723]: time="2026-04-21T10:08:50.952333974Z" level=info msg="StopPodSandbox for \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" returns successfully" Apr 21 10:08:50.957345 systemd[1]: run-netns-cni\x2dd625d18b\x2dddbd\x2d7a3e\x2d3aef\x2d1f63dec0a048.mount: Deactivated successfully. 
Apr 21 10:08:50.962810 containerd[1723]: time="2026-04-21T10:08:50.962777041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5mhdm,Uid:098e608f-9fb3-48c3-ba27-9bbae9770798,Namespace:calico-system,Attempt:1,}" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.813 [INFO][4415] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.813 [INFO][4415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" iface="eth0" netns="/var/run/netns/cni-909d80d6-b1ac-23d5-6888-ed1d326b55fc" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.813 [INFO][4415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" iface="eth0" netns="/var/run/netns/cni-909d80d6-b1ac-23d5-6888-ed1d326b55fc" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.816 [INFO][4415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" iface="eth0" netns="/var/run/netns/cni-909d80d6-b1ac-23d5-6888-ed1d326b55fc" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.817 [INFO][4415] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.817 [INFO][4415] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.904 [INFO][4446] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.905 [INFO][4446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.936 [INFO][4446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.960 [WARNING][4446] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.960 [INFO][4446] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.962 [INFO][4446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:50.967845 containerd[1723]: 2026-04-21 10:08:50.965 [INFO][4415] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:08:50.971166 systemd[1]: run-netns-cni\x2d909d80d6\x2db1ac\x2d23d5\x2d6888\x2ded1d326b55fc.mount: Deactivated successfully. Apr 21 10:08:50.971667 containerd[1723]: time="2026-04-21T10:08:50.971547351Z" level=info msg="TearDown network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" successfully" Apr 21 10:08:50.971667 containerd[1723]: time="2026-04-21T10:08:50.971576030Z" level=info msg="StopPodSandbox for \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" returns successfully" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.817 [INFO][4365] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.819 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" iface="eth0" netns="/var/run/netns/cni-8ddcfb70-b54b-d6cf-0db8-10cfd809b8f4" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.819 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" iface="eth0" netns="/var/run/netns/cni-8ddcfb70-b54b-d6cf-0db8-10cfd809b8f4" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.821 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" iface="eth0" netns="/var/run/netns/cni-8ddcfb70-b54b-d6cf-0db8-10cfd809b8f4" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.821 [INFO][4365] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.821 [INFO][4365] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.921 [INFO][4448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.921 [INFO][4448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.963 [INFO][4448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.977 [WARNING][4448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.978 [INFO][4448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.980 [INFO][4448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:50.986619 containerd[1723]: 2026-04-21 10:08:50.983 [INFO][4365] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:08:50.987495 containerd[1723]: time="2026-04-21T10:08:50.987465131Z" level=info msg="TearDown network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" successfully" Apr 21 10:08:50.987586 containerd[1723]: time="2026-04-21T10:08:50.987552211Z" level=info msg="StopPodSandbox for \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" returns successfully" Apr 21 10:08:50.992106 systemd[1]: run-netns-cni\x2d8ddcfb70\x2db54b\x2dd6cf\x2d0db8\x2d10cfd809b8f4.mount: Deactivated successfully. 
Apr 21 10:08:51.008726 containerd[1723]: time="2026-04-21T10:08:51.008682145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-qjb7k,Uid:8cfed430-eeda-4f9a-8290-98a3835b5d7c,Namespace:calico-system,Attempt:1,}" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.853 [INFO][4381] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.854 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" iface="eth0" netns="/var/run/netns/cni-d6f6f08a-f091-181b-949a-8f536d9aa0bb" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.854 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" iface="eth0" netns="/var/run/netns/cni-d6f6f08a-f091-181b-949a-8f536d9aa0bb" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.858 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" iface="eth0" netns="/var/run/netns/cni-d6f6f08a-f091-181b-949a-8f536d9aa0bb" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.858 [INFO][4381] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.858 [INFO][4381] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.946 [INFO][4467] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.947 [INFO][4467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:50.980 [INFO][4467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:51.002 [WARNING][4467] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:51.002 [INFO][4467] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:51.004 [INFO][4467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.029193 containerd[1723]: 2026-04-21 10:08:51.011 [INFO][4381] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:08:51.030355 containerd[1723]: time="2026-04-21T10:08:51.030309559Z" level=info msg="TearDown network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" successfully" Apr 21 10:08:51.030606 containerd[1723]: time="2026-04-21T10:08:51.030439759Z" level=info msg="StopPodSandbox for \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" returns successfully" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.843 [INFO][4377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.843 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" iface="eth0" netns="/var/run/netns/cni-b947da26-39b3-18b3-c8c5-77b40fd699c9" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.848 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" iface="eth0" netns="/var/run/netns/cni-b947da26-39b3-18b3-c8c5-77b40fd699c9" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.848 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" iface="eth0" netns="/var/run/netns/cni-b947da26-39b3-18b3-c8c5-77b40fd699c9" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.848 [INFO][4377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.848 [INFO][4377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.951 [INFO][4462] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:50.951 [INFO][4462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:51.004 [INFO][4462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:51.017 [WARNING][4462] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:51.017 [INFO][4462] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:51.019 [INFO][4462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.030606 containerd[1723]: 2026-04-21 10:08:51.025 [INFO][4377] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:08:51.031470 containerd[1723]: time="2026-04-21T10:08:51.031078718Z" level=info msg="TearDown network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" successfully" Apr 21 10:08:51.031470 containerd[1723]: time="2026-04-21T10:08:51.031097318Z" level=info msg="StopPodSandbox for \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" returns successfully" Apr 21 10:08:51.039971 containerd[1723]: time="2026-04-21T10:08:51.039739547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-fql5l,Uid:05b48f72-8534-4b03-b630-b58ade237fcc,Namespace:calico-system,Attempt:1,}" Apr 21 10:08:51.043870 containerd[1723]: time="2026-04-21T10:08:51.043836182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d46b4c69-bmlm6,Uid:fd0b3f4f-bf33-432f-baf4-3f22428628b4,Namespace:calico-system,Attempt:1,}" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.866 [INFO][4406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.870 [INFO][4406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" iface="eth0" netns="/var/run/netns/cni-385b9f66-6715-d7f6-66bc-097ad70e1b0e" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.871 [INFO][4406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" iface="eth0" netns="/var/run/netns/cni-385b9f66-6715-d7f6-66bc-097ad70e1b0e" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.873 [INFO][4406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" iface="eth0" netns="/var/run/netns/cni-385b9f66-6715-d7f6-66bc-097ad70e1b0e" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.873 [INFO][4406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.873 [INFO][4406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.959 [INFO][4472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:50.959 [INFO][4472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:51.020 [INFO][4472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:51.036 [WARNING][4472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:51.037 [INFO][4472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:51.038 [INFO][4472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.046071 containerd[1723]: 2026-04-21 10:08:51.044 [INFO][4406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:08:51.046686 containerd[1723]: time="2026-04-21T10:08:51.046569859Z" level=info msg="TearDown network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" successfully" Apr 21 10:08:51.046686 containerd[1723]: time="2026-04-21T10:08:51.046598419Z" level=info msg="StopPodSandbox for \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" returns successfully" Apr 21 10:08:51.052239 containerd[1723]: time="2026-04-21T10:08:51.052213452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wht27,Uid:e7c5d3e9-033b-4479-b2b2-6500dd6b1041,Namespace:kube-system,Attempt:1,}" Apr 21 10:08:51.058832 kubelet[3169]: I0421 10:08:51.058655 3169 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-backend-key-pair\") pod \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") 
" Apr 21 10:08:51.058832 kubelet[3169]: I0421 10:08:51.058707 3169 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfb8b\" (UniqueName: \"kubernetes.io/projected/c19ce9da-a445-4b47-b1e9-d94a16ff8986-kube-api-access-jfb8b\") pod \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " Apr 21 10:08:51.058832 kubelet[3169]: I0421 10:08:51.058743 3169 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-nginx-config\") pod \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " Apr 21 10:08:51.061165 kubelet[3169]: I0421 10:08:51.060836 3169 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c19ce9da-a445-4b47-b1e9-d94a16ff8986" (UID: "c19ce9da-a445-4b47-b1e9-d94a16ff8986"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:08:51.066735 kubelet[3169]: I0421 10:08:51.066708 3169 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19ce9da-a445-4b47-b1e9-d94a16ff8986-kube-api-access-jfb8b" (OuterVolumeSpecName: "kube-api-access-jfb8b") pod "c19ce9da-a445-4b47-b1e9-d94a16ff8986" (UID: "c19ce9da-a445-4b47-b1e9-d94a16ff8986"). InnerVolumeSpecName "kube-api-access-jfb8b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:08:51.066956 kubelet[3169]: I0421 10:08:51.066937 3169 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-ca-bundle\") pod \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\" (UID: \"c19ce9da-a445-4b47-b1e9-d94a16ff8986\") " Apr 21 10:08:51.067128 kubelet[3169]: I0421 10:08:51.067115 3169 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-nginx-config\") on node \"ci-4081.3.7-a-75af1c63bf\" DevicePath \"\"" Apr 21 10:08:51.067214 kubelet[3169]: I0421 10:08:51.067187 3169 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfb8b\" (UniqueName: \"kubernetes.io/projected/c19ce9da-a445-4b47-b1e9-d94a16ff8986-kube-api-access-jfb8b\") on node \"ci-4081.3.7-a-75af1c63bf\" DevicePath \"\"" Apr 21 10:08:51.067533 kubelet[3169]: I0421 10:08:51.067499 3169 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c19ce9da-a445-4b47-b1e9-d94a16ff8986" (UID: "c19ce9da-a445-4b47-b1e9-d94a16ff8986"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:08:51.067626 kubelet[3169]: I0421 10:08:51.067608 3169 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c19ce9da-a445-4b47-b1e9-d94a16ff8986" (UID: "c19ce9da-a445-4b47-b1e9-d94a16ff8986"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:08:51.168178 kubelet[3169]: I0421 10:08:51.168119 3169 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-ca-bundle\") on node \"ci-4081.3.7-a-75af1c63bf\" DevicePath \"\"" Apr 21 10:08:51.168178 kubelet[3169]: I0421 10:08:51.168155 3169 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c19ce9da-a445-4b47-b1e9-d94a16ff8986-whisker-backend-key-pair\") on node \"ci-4081.3.7-a-75af1c63bf\" DevicePath \"\"" Apr 21 10:08:51.194177 systemd-networkd[1602]: calid8d429d8a37: Link UP Apr 21 10:08:51.199552 systemd-networkd[1602]: calid8d429d8a37: Gained carrier Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.065 [ERROR][4491] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.081 [INFO][4491] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0 goldmane-cccfbd5cf- calico-system 098e608f-9fb3-48c3-ba27-9bbae9770798 893 0 2026-04-21 10:08:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf goldmane-cccfbd5cf-5mhdm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid8d429d8a37 [] [] }} ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-" 
Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.081 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.108 [INFO][4505] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" HandleID="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.117 [INFO][4505] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" HandleID="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"goldmane-cccfbd5cf-5mhdm", "timestamp":"2026-04-21 10:08:51.108208744 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a7080)} Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.117 [INFO][4505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.117 [INFO][4505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.117 [INFO][4505] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.119 [INFO][4505] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.125 [INFO][4505] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.133 [INFO][4505] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.138 [INFO][4505] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.142 [INFO][4505] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.142 [INFO][4505] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.144 [INFO][4505] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88 Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.151 [INFO][4505] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.162 [INFO][4505] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.193/26] block=192.168.69.192/26 handle="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.163 [INFO][4505] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.193/26] handle="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.163 [INFO][4505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.245184 containerd[1723]: 2026-04-21 10:08:51.163 [INFO][4505] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.193/26] IPv6=[] ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" HandleID="k8s-pod-network.755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.169 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"098e608f-9fb3-48c3-ba27-9bbae9770798", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"goldmane-cccfbd5cf-5mhdm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8d429d8a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.170 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.193/32] ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.170 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8d429d8a37 ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.199 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.209 [INFO][4491] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"098e608f-9fb3-48c3-ba27-9bbae9770798", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88", Pod:"goldmane-cccfbd5cf-5mhdm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8d429d8a37", MAC:"12:06:5b:39:62:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.245724 containerd[1723]: 2026-04-21 10:08:51.239 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88" Namespace="calico-system" Pod="goldmane-cccfbd5cf-5mhdm" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:08:51.299494 systemd[1]: run-netns-cni\x2dd6f6f08a\x2df091\x2d181b\x2d949a\x2d8f536d9aa0bb.mount: Deactivated successfully. Apr 21 10:08:51.300467 systemd[1]: run-netns-cni\x2d385b9f66\x2d6715\x2dd7f6\x2d66bc\x2d097ad70e1b0e.mount: Deactivated successfully. Apr 21 10:08:51.300524 systemd[1]: run-netns-cni\x2db947da26\x2d39b3\x2d18b3\x2dc8c5\x2d77b40fd699c9.mount: Deactivated successfully. Apr 21 10:08:51.300572 systemd[1]: var-lib-kubelet-pods-c19ce9da\x2da445\x2d4b47\x2db1e9\x2dd94a16ff8986-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfb8b.mount: Deactivated successfully. Apr 21 10:08:51.300623 systemd[1]: var-lib-kubelet-pods-c19ce9da\x2da445\x2d4b47\x2db1e9\x2dd94a16ff8986-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:08:51.403133 containerd[1723]: time="2026-04-21T10:08:51.400383027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:51.403133 containerd[1723]: time="2026-04-21T10:08:51.400437907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:51.403133 containerd[1723]: time="2026-04-21T10:08:51.400456387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.403133 containerd[1723]: time="2026-04-21T10:08:51.400543987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.471377 systemd-networkd[1602]: cali4cdd4e056a8: Link UP Apr 21 10:08:51.472027 systemd-networkd[1602]: cali4cdd4e056a8: Gained carrier Apr 21 10:08:51.508404 systemd[1]: Started cri-containerd-755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88.scope - libcontainer container 755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88. Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.212 [ERROR][4520] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.242 [INFO][4520] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0 calico-apiserver-787c77bcf4- calico-system 05b48f72-8534-4b03-b630-b58ade237fcc 896 0 2026-04-21 10:08:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:787c77bcf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf calico-apiserver-787c77bcf4-fql5l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4cdd4e056a8 [] [] }} ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.242 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" 
Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.337 [INFO][4584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" HandleID="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.377 [INFO][4584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" HandleID="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"calico-apiserver-787c77bcf4-fql5l", "timestamp":"2026-04-21 10:08:51.337491384 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400050cb00)} Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.377 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.377 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.377 [INFO][4584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.387 [INFO][4584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.398 [INFO][4584] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.411 [INFO][4584] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.415 [INFO][4584] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.427 [INFO][4584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.427 [INFO][4584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.429 [INFO][4584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8 Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.447 [INFO][4584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.454 [INFO][4584] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.194/26] block=192.168.69.192/26 handle="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.454 [INFO][4584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.194/26] handle="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.454 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.516800 containerd[1723]: 2026-04-21 10:08:51.454 [INFO][4584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.194/26] IPv6=[] ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" HandleID="k8s-pod-network.24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.465 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"05b48f72-8534-4b03-b630-b58ade237fcc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"calico-apiserver-787c77bcf4-fql5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4cdd4e056a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.465 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.194/32] ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.465 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cdd4e056a8 ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.493 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" 
WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.496 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"05b48f72-8534-4b03-b630-b58ade237fcc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8", Pod:"calico-apiserver-787c77bcf4-fql5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4cdd4e056a8", MAC:"e2:46:81:ac:86:1c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.517481 containerd[1723]: 2026-04-21 10:08:51.514 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-fql5l" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:08:51.555137 containerd[1723]: time="2026-04-21T10:08:51.554952599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:51.555137 containerd[1723]: time="2026-04-21T10:08:51.555064639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:51.555137 containerd[1723]: time="2026-04-21T10:08:51.555081679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.557383 containerd[1723]: time="2026-04-21T10:08:51.555196799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.583316 systemd-networkd[1602]: cali6e2b6fa416d: Link UP Apr 21 10:08:51.584750 systemd-networkd[1602]: cali6e2b6fa416d: Gained carrier Apr 21 10:08:51.633127 containerd[1723]: time="2026-04-21T10:08:51.633002904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-5mhdm,Uid:098e608f-9fb3-48c3-ba27-9bbae9770798,Namespace:calico-system,Attempt:1,} returns sandbox id \"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88\"" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.185 [ERROR][4511] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.237 [INFO][4511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0 calico-apiserver-787c77bcf4- calico-system 8cfed430-eeda-4f9a-8290-98a3835b5d7c 894 0 2026-04-21 10:08:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:787c77bcf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf calico-apiserver-787c77bcf4-qjb7k eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6e2b6fa416d [] [] }} ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.237 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.398 [INFO][4593] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" HandleID="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.433 [INFO][4593] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" HandleID="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fa8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"calico-apiserver-787c77bcf4-qjb7k", "timestamp":"2026-04-21 10:08:51.39826151 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003ac420)} Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.433 [INFO][4593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.455 [INFO][4593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.455 [INFO][4593] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.494 [INFO][4593] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.505 [INFO][4593] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.518 [INFO][4593] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.522 [INFO][4593] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.530 [INFO][4593] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.530 [INFO][4593] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.533 [INFO][4593] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8 Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.547 [INFO][4593] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.565 [INFO][4593] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.195/26] block=192.168.69.192/26 handle="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.565 [INFO][4593] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.195/26] handle="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.566 [INFO][4593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.638905 containerd[1723]: 2026-04-21 10:08:51.566 [INFO][4593] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.195/26] IPv6=[] ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" HandleID="k8s-pod-network.46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.638538 systemd[1]: Started cri-containerd-24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8.scope - libcontainer container 24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8. 
Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.579 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"8cfed430-eeda-4f9a-8290-98a3835b5d7c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"calico-apiserver-787c77bcf4-qjb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e2b6fa416d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.579 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: 
[192.168.69.195/32] ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.579 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e2b6fa416d ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.585 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.588 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"8cfed430-eeda-4f9a-8290-98a3835b5d7c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8", Pod:"calico-apiserver-787c77bcf4-qjb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e2b6fa416d", MAC:"1a:5d:d5:df:98:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.640749 containerd[1723]: 2026-04-21 10:08:51.622 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8" Namespace="calico-system" Pod="calico-apiserver-787c77bcf4-qjb7k" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:08:51.643123 containerd[1723]: time="2026-04-21T10:08:51.641846933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:08:51.684269 containerd[1723]: time="2026-04-21T10:08:51.683220362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:51.685108 containerd[1723]: time="2026-04-21T10:08:51.684085721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:51.685108 containerd[1723]: time="2026-04-21T10:08:51.684113361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.690412 containerd[1723]: time="2026-04-21T10:08:51.688142076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.695814 systemd[1]: Removed slice kubepods-besteffort-podc19ce9da_a445_4b47_b1e9_d94a16ff8986.slice - libcontainer container kubepods-besteffort-podc19ce9da_a445_4b47_b1e9_d94a16ff8986.slice. Apr 21 10:08:51.710132 systemd-networkd[1602]: cali77830c93bd6: Link UP Apr 21 10:08:51.713041 systemd-networkd[1602]: cali77830c93bd6: Gained carrier Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.270 [ERROR][4537] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.314 [INFO][4537] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0 coredns-66bc5c9577- kube-system e7c5d3e9-033b-4479-b2b2-6500dd6b1041 897 0 2026-04-21 10:08:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf coredns-66bc5c9577-wht27 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77830c93bd6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" 
Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.314 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.434 [INFO][4624] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" HandleID="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.455 [INFO][4624] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" HandleID="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"coredns-66bc5c9577-wht27", "timestamp":"2026-04-21 10:08:51.434730066 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000261080)} Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.455 [INFO][4624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.565 [INFO][4624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.565 [INFO][4624] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.590 [INFO][4624] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.610 [INFO][4624] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.631 [INFO][4624] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.640 [INFO][4624] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.655 [INFO][4624] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.656 [INFO][4624] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.659 [INFO][4624] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381 Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.668 [INFO][4624] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" 
host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.681 [INFO][4624] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.69.196/26] block=192.168.69.192/26 handle="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.682 [INFO][4624] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.196/26] handle="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.683 [INFO][4624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.754660 containerd[1723]: 2026-04-21 10:08:51.683 [INFO][4624] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.196/26] IPv6=[] ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" HandleID="k8s-pod-network.91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.757295 containerd[1723]: 2026-04-21 10:08:51.702 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e7c5d3e9-033b-4479-b2b2-6500dd6b1041", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"coredns-66bc5c9577-wht27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77830c93bd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.757295 containerd[1723]: 2026-04-21 10:08:51.702 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.196/32] ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.757295 containerd[1723]: 
2026-04-21 10:08:51.702 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77830c93bd6 ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.757295 containerd[1723]: 2026-04-21 10:08:51.715 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.757295 containerd[1723]: 2026-04-21 10:08:51.718 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e7c5d3e9-033b-4479-b2b2-6500dd6b1041", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381", Pod:"coredns-66bc5c9577-wht27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77830c93bd6", MAC:"9e:4c:38:41:b3:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.757502 containerd[1723]: 2026-04-21 10:08:51.739 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381" Namespace="kube-system" Pod="coredns-66bc5c9577-wht27" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:08:51.797182 containerd[1723]: time="2026-04-21T10:08:51.794737746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:51.797182 containerd[1723]: time="2026-04-21T10:08:51.794792306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:51.797182 containerd[1723]: time="2026-04-21T10:08:51.794819746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.797182 containerd[1723]: time="2026-04-21T10:08:51.794907186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.810587 systemd[1]: Started cri-containerd-46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8.scope - libcontainer container 46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8. Apr 21 10:08:51.830666 systemd[1]: Started cri-containerd-91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381.scope - libcontainer container 91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381. Apr 21 10:08:51.842454 systemd[1]: Created slice kubepods-besteffort-pod37d9c6df_587f_4a48_845b_4ed4d350a748.slice - libcontainer container kubepods-besteffort-pod37d9c6df_587f_4a48_845b_4ed4d350a748.slice. 
Apr 21 10:08:51.852532 containerd[1723]: time="2026-04-21T10:08:51.852346316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-fql5l,Uid:05b48f72-8534-4b03-b630-b58ade237fcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8\"" Apr 21 10:08:51.864301 systemd-networkd[1602]: cali780558273ef: Link UP Apr 21 10:08:51.867065 systemd-networkd[1602]: cali780558273ef: Gained carrier Apr 21 10:08:51.875328 kubelet[3169]: I0421 10:08:51.875298 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d9c6df-587f-4a48-845b-4ed4d350a748-whisker-ca-bundle\") pod \"whisker-595fc866cf-qh6x4\" (UID: \"37d9c6df-587f-4a48-845b-4ed4d350a748\") " pod="calico-system/whisker-595fc866cf-qh6x4" Apr 21 10:08:51.876428 kubelet[3169]: I0421 10:08:51.876276 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/37d9c6df-587f-4a48-845b-4ed4d350a748-nginx-config\") pod \"whisker-595fc866cf-qh6x4\" (UID: \"37d9c6df-587f-4a48-845b-4ed4d350a748\") " pod="calico-system/whisker-595fc866cf-qh6x4" Apr 21 10:08:51.876428 kubelet[3169]: I0421 10:08:51.876316 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37d9c6df-587f-4a48-845b-4ed4d350a748-whisker-backend-key-pair\") pod \"whisker-595fc866cf-qh6x4\" (UID: \"37d9c6df-587f-4a48-845b-4ed4d350a748\") " pod="calico-system/whisker-595fc866cf-qh6x4" Apr 21 10:08:51.876428 kubelet[3169]: I0421 10:08:51.876333 3169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jlrx\" (UniqueName: \"kubernetes.io/projected/37d9c6df-587f-4a48-845b-4ed4d350a748-kube-api-access-8jlrx\") 
pod \"whisker-595fc866cf-qh6x4\" (UID: \"37d9c6df-587f-4a48-845b-4ed4d350a748\") " pod="calico-system/whisker-595fc866cf-qh6x4" Apr 21 10:08:51.902762 containerd[1723]: time="2026-04-21T10:08:51.902714815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wht27,Uid:e7c5d3e9-033b-4479-b2b2-6500dd6b1041,Namespace:kube-system,Attempt:1,} returns sandbox id \"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381\"" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.221 [ERROR][4529] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.264 [INFO][4529] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0 calico-kube-controllers-8d46b4c69- calico-system fd0b3f4f-bf33-432f-baf4-3f22428628b4 895 0 2026-04-21 10:08:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8d46b4c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf calico-kube-controllers-8d46b4c69-bmlm6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali780558273ef [] [] }} ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.265 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.458 [INFO][4604] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" HandleID="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.502 [INFO][4604] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" HandleID="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"calico-kube-controllers-8d46b4c69-bmlm6", "timestamp":"2026-04-21 10:08:51.458838236 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186dc0)} Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.502 [INFO][4604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.683 [INFO][4604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.683 [INFO][4604] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.694 [INFO][4604] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.724 [INFO][4604] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.750 [INFO][4604] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.759 [INFO][4604] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.764 [INFO][4604] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.764 [INFO][4604] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.776 [INFO][4604] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.792 [INFO][4604] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.836 [INFO][4604] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.197/26] block=192.168.69.192/26 handle="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.837 [INFO][4604] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.197/26] handle="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.837 [INFO][4604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:51.907404 containerd[1723]: 2026-04-21 10:08:51.838 [INFO][4604] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.197/26] IPv6=[] ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" HandleID="k8s-pod-network.bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.848 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0", GenerateName:"calico-kube-controllers-8d46b4c69-", Namespace:"calico-system", SelfLink:"", UID:"fd0b3f4f-bf33-432f-baf4-3f22428628b4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d46b4c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"calico-kube-controllers-8d46b4c69-bmlm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali780558273ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.848 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.197/32] ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.848 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali780558273ef ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.869 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.871 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0", GenerateName:"calico-kube-controllers-8d46b4c69-", Namespace:"calico-system", SelfLink:"", UID:"fd0b3f4f-bf33-432f-baf4-3f22428628b4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d46b4c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a", Pod:"calico-kube-controllers-8d46b4c69-bmlm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali780558273ef", MAC:"76:2b:cf:00:c1:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:51.908358 containerd[1723]: 2026-04-21 10:08:51.902 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a" Namespace="calico-system" Pod="calico-kube-controllers-8d46b4c69-bmlm6" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:08:51.913615 containerd[1723]: time="2026-04-21T10:08:51.913564602Z" level=info msg="CreateContainer within sandbox \"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:08:51.951882 containerd[1723]: time="2026-04-21T10:08:51.951765635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:51.952023 containerd[1723]: time="2026-04-21T10:08:51.951921915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:51.952023 containerd[1723]: time="2026-04-21T10:08:51.951959275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.952240 containerd[1723]: time="2026-04-21T10:08:51.952151794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:51.953285 containerd[1723]: time="2026-04-21T10:08:51.953131793Z" level=info msg="CreateContainer within sandbox \"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e509a08bb48f9bb8ead3b5324ffa464087ac7a79177af771bb58f2de3d3ee54\"" Apr 21 10:08:51.956121 containerd[1723]: time="2026-04-21T10:08:51.956088310Z" level=info msg="StartContainer for \"1e509a08bb48f9bb8ead3b5324ffa464087ac7a79177af771bb58f2de3d3ee54\"" Apr 21 10:08:51.992405 systemd[1]: Started cri-containerd-bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a.scope - libcontainer container bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a. Apr 21 10:08:52.021890 systemd[1]: Started cri-containerd-1e509a08bb48f9bb8ead3b5324ffa464087ac7a79177af771bb58f2de3d3ee54.scope - libcontainer container 1e509a08bb48f9bb8ead3b5324ffa464087ac7a79177af771bb58f2de3d3ee54. 
Apr 21 10:08:52.025475 containerd[1723]: time="2026-04-21T10:08:52.025434225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-787c77bcf4-qjb7k,Uid:8cfed430-eeda-4f9a-8290-98a3835b5d7c,Namespace:calico-system,Attempt:1,} returns sandbox id \"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8\"" Apr 21 10:08:52.108359 containerd[1723]: time="2026-04-21T10:08:52.108310884Z" level=info msg="StartContainer for \"1e509a08bb48f9bb8ead3b5324ffa464087ac7a79177af771bb58f2de3d3ee54\" returns successfully" Apr 21 10:08:52.119424 containerd[1723]: time="2026-04-21T10:08:52.119390030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d46b4c69-bmlm6,Uid:fd0b3f4f-bf33-432f-baf4-3f22428628b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a\"" Apr 21 10:08:52.154068 containerd[1723]: time="2026-04-21T10:08:52.154020188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595fc866cf-qh6x4,Uid:37d9c6df-587f-4a48-845b-4ed4d350a748,Namespace:calico-system,Attempt:0,}" Apr 21 10:08:52.256472 kernel: calico-node[4594]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:08:52.387782 systemd-networkd[1602]: calie9a66268ecb: Link UP Apr 21 10:08:52.389122 systemd-networkd[1602]: calie9a66268ecb: Gained carrier Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.255 [INFO][4986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0 whisker-595fc866cf- calico-system 37d9c6df-587f-4a48-845b-4ed4d350a748 929 0 2026-04-21 10:08:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:595fc866cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf 
whisker-595fc866cf-qh6x4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie9a66268ecb [] [] }} ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.255 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.315 [INFO][5005] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" HandleID="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.327 [INFO][5005] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" HandleID="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"whisker-595fc866cf-qh6x4", "timestamp":"2026-04-21 10:08:52.315712671 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004b7ce0)} Apr 21 
10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.327 [INFO][5005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.328 [INFO][5005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.328 [INFO][5005] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.331 [INFO][5005] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.336 [INFO][5005] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.351 [INFO][5005] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.357 [INFO][5005] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.360 [INFO][5005] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.360 [INFO][5005] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.361 [INFO][5005] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9 Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.368 [INFO][5005] ipam/ipam.go 1272: Writing block in 
order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.378 [INFO][5005] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.69.198/26] block=192.168.69.192/26 handle="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.378 [INFO][5005] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.198/26] handle="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.378 [INFO][5005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:08:52.408682 containerd[1723]: 2026-04-21 10:08:52.378 [INFO][5005] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.198/26] IPv6=[] ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" HandleID="k8s-pod-network.96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.380 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0", GenerateName:"whisker-595fc866cf-", Namespace:"calico-system", SelfLink:"", UID:"37d9c6df-587f-4a48-845b-4ed4d350a748", ResourceVersion:"929", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595fc866cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"whisker-595fc866cf-qh6x4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9a66268ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.380 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.198/32] ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.380 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9a66268ecb ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.388 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.390 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0", GenerateName:"whisker-595fc866cf-", Namespace:"calico-system", SelfLink:"", UID:"37d9c6df-587f-4a48-845b-4ed4d350a748", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595fc866cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9", Pod:"whisker-595fc866cf-qh6x4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9a66268ecb", MAC:"a6:a5:a0:e2:62:50", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:08:52.409254 containerd[1723]: 2026-04-21 10:08:52.404 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9" Namespace="calico-system" Pod="whisker-595fc866cf-qh6x4" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--595fc866cf--qh6x4-eth0" Apr 21 10:08:52.443131 containerd[1723]: time="2026-04-21T10:08:52.442573756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:08:52.443131 containerd[1723]: time="2026-04-21T10:08:52.442660116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:08:52.443131 containerd[1723]: time="2026-04-21T10:08:52.442683716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:52.443131 containerd[1723]: time="2026-04-21T10:08:52.442793396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:08:52.479369 systemd[1]: Started cri-containerd-96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9.scope - libcontainer container 96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9. 
Apr 21 10:08:52.495179 kubelet[3169]: I0421 10:08:52.493077 3169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19ce9da-a445-4b47-b1e9-d94a16ff8986" path="/var/lib/kubelet/pods/c19ce9da-a445-4b47-b1e9-d94a16ff8986/volumes" Apr 21 10:08:52.519651 containerd[1723]: time="2026-04-21T10:08:52.519507822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595fc866cf-qh6x4,Uid:37d9c6df-587f-4a48-845b-4ed4d350a748,Namespace:calico-system,Attempt:0,} returns sandbox id \"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9\"" Apr 21 10:08:52.710458 kubelet[3169]: I0421 10:08:52.709456 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wht27" podStartSLOduration=39.709438631 podStartE2EDuration="39.709438631s" podCreationTimestamp="2026-04-21 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:08:52.709265351 +0000 UTC m=+46.340777822" watchObservedRunningTime="2026-04-21 10:08:52.709438631 +0000 UTC m=+46.340951102" Apr 21 10:08:52.746387 systemd-networkd[1602]: cali4cdd4e056a8: Gained IPv6LL Apr 21 10:08:52.844714 systemd-networkd[1602]: vxlan.calico: Link UP Apr 21 10:08:52.845076 systemd-networkd[1602]: vxlan.calico: Gained carrier Apr 21 10:08:52.938420 systemd-networkd[1602]: calid8d429d8a37: Gained IPv6LL Apr 21 10:08:53.450407 systemd-networkd[1602]: cali780558273ef: Gained IPv6LL Apr 21 10:08:53.514491 systemd-networkd[1602]: calie9a66268ecb: Gained IPv6LL Apr 21 10:08:53.578426 systemd-networkd[1602]: cali6e2b6fa416d: Gained IPv6LL Apr 21 10:08:53.578694 systemd-networkd[1602]: cali77830c93bd6: Gained IPv6LL Apr 21 10:08:54.074269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989512273.mount: Deactivated successfully. 
Apr 21 10:08:54.346505 systemd-networkd[1602]: vxlan.calico: Gained IPv6LL Apr 21 10:08:54.390182 containerd[1723]: time="2026-04-21T10:08:54.389366862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:54.393261 containerd[1723]: time="2026-04-21T10:08:54.393224457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 21 10:08:54.396644 containerd[1723]: time="2026-04-21T10:08:54.396601813Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:54.401803 containerd[1723]: time="2026-04-21T10:08:54.401736727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:54.402674 containerd[1723]: time="2026-04-21T10:08:54.402608166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.760718513s" Apr 21 10:08:54.402674 containerd[1723]: time="2026-04-21T10:08:54.402637086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 21 10:08:54.404766 containerd[1723]: time="2026-04-21T10:08:54.404675323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:08:54.410712 containerd[1723]: time="2026-04-21T10:08:54.410543756Z" level=info msg="CreateContainer 
within sandbox \"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:08:54.484515 containerd[1723]: time="2026-04-21T10:08:54.484469866Z" level=info msg="CreateContainer within sandbox \"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883\"" Apr 21 10:08:54.486243 containerd[1723]: time="2026-04-21T10:08:54.485428625Z" level=info msg="StartContainer for \"0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883\"" Apr 21 10:08:54.531363 systemd[1]: Started cri-containerd-0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883.scope - libcontainer container 0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883. Apr 21 10:08:54.570648 containerd[1723]: time="2026-04-21T10:08:54.570519841Z" level=info msg="StartContainer for \"0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883\" returns successfully" Apr 21 10:08:54.715623 kubelet[3169]: I0421 10:08:54.715317 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-5mhdm" podStartSLOduration=21.949508398 podStartE2EDuration="24.715292024s" podCreationTimestamp="2026-04-21 10:08:30 +0000 UTC" firstStartedPulling="2026-04-21 10:08:51.637915778 +0000 UTC m=+45.269428249" lastFinishedPulling="2026-04-21 10:08:54.403699404 +0000 UTC m=+48.035211875" observedRunningTime="2026-04-21 10:08:54.712694747 +0000 UTC m=+48.344207218" watchObservedRunningTime="2026-04-21 10:08:54.715292024 +0000 UTC m=+48.346804495" Apr 21 10:08:54.802994 systemd[1]: run-containerd-runc-k8s.io-0973fa65b43d7eb96030f2d2e11afdaa79dc1bf557609edcc7568d2ac77e2883-runc.396sZm.mount: Deactivated successfully. 
Apr 21 10:08:58.267559 containerd[1723]: time="2026-04-21T10:08:58.266744652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:58.271173 containerd[1723]: time="2026-04-21T10:08:58.271137767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 21 10:08:58.274866 containerd[1723]: time="2026-04-21T10:08:58.274818523Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:58.278935 containerd[1723]: time="2026-04-21T10:08:58.278893758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:58.280224 containerd[1723]: time="2026-04-21T10:08:58.279858876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.875151753s" Apr 21 10:08:58.280224 containerd[1723]: time="2026-04-21T10:08:58.279892636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 10:08:58.281329 containerd[1723]: time="2026-04-21T10:08:58.281302075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:08:58.288111 containerd[1723]: time="2026-04-21T10:08:58.288054746Z" level=info msg="CreateContainer within sandbox 
\"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:08:58.318161 containerd[1723]: time="2026-04-21T10:08:58.318123830Z" level=info msg="CreateContainer within sandbox \"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"436cc27c901d1e90271d0b0565526b186069773019fee531799022d13d6d8c3b\"" Apr 21 10:08:58.320289 containerd[1723]: time="2026-04-21T10:08:58.320257747Z" level=info msg="StartContainer for \"436cc27c901d1e90271d0b0565526b186069773019fee531799022d13d6d8c3b\"" Apr 21 10:08:58.354506 systemd[1]: Started cri-containerd-436cc27c901d1e90271d0b0565526b186069773019fee531799022d13d6d8c3b.scope - libcontainer container 436cc27c901d1e90271d0b0565526b186069773019fee531799022d13d6d8c3b. Apr 21 10:08:58.388629 containerd[1723]: time="2026-04-21T10:08:58.388572627Z" level=info msg="StartContainer for \"436cc27c901d1e90271d0b0565526b186069773019fee531799022d13d6d8c3b\" returns successfully" Apr 21 10:08:58.726650 containerd[1723]: time="2026-04-21T10:08:58.726610791Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:08:58.730390 containerd[1723]: time="2026-04-21T10:08:58.730363386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:08:58.733245 kubelet[3169]: I0421 10:08:58.733174 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-787c77bcf4-fql5l" podStartSLOduration=22.306228182 podStartE2EDuration="28.733154023s" podCreationTimestamp="2026-04-21 10:08:30 +0000 UTC" firstStartedPulling="2026-04-21 10:08:51.854129594 +0000 UTC m=+45.485642065" lastFinishedPulling="2026-04-21 10:08:58.281055475 +0000 UTC m=+51.912567906" observedRunningTime="2026-04-21 
10:08:58.732993983 +0000 UTC m=+52.364506454" watchObservedRunningTime="2026-04-21 10:08:58.733154023 +0000 UTC m=+52.364666494" Apr 21 10:08:58.736998 containerd[1723]: time="2026-04-21T10:08:58.736860819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 455.426505ms" Apr 21 10:08:58.736998 containerd[1723]: time="2026-04-21T10:08:58.736902259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 10:08:58.738836 containerd[1723]: time="2026-04-21T10:08:58.738797976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:08:58.744380 containerd[1723]: time="2026-04-21T10:08:58.744341450Z" level=info msg="CreateContainer within sandbox \"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:08:58.779928 containerd[1723]: time="2026-04-21T10:08:58.779888728Z" level=info msg="CreateContainer within sandbox \"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ae46d23a4555fc04efc355c72427834e54c125d5a372a0affb3c623f2f5ceff0\"" Apr 21 10:08:58.782565 containerd[1723]: time="2026-04-21T10:08:58.781348886Z" level=info msg="StartContainer for \"ae46d23a4555fc04efc355c72427834e54c125d5a372a0affb3c623f2f5ceff0\"" Apr 21 10:08:58.808368 systemd[1]: Started cri-containerd-ae46d23a4555fc04efc355c72427834e54c125d5a372a0affb3c623f2f5ceff0.scope - libcontainer container 
ae46d23a4555fc04efc355c72427834e54c125d5a372a0affb3c623f2f5ceff0. Apr 21 10:08:58.849933 containerd[1723]: time="2026-04-21T10:08:58.849887406Z" level=info msg="StartContainer for \"ae46d23a4555fc04efc355c72427834e54c125d5a372a0affb3c623f2f5ceff0\" returns successfully" Apr 21 10:08:59.715465 kubelet[3169]: I0421 10:08:59.715437 3169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:09:00.126976 kubelet[3169]: I0421 10:09:00.126881 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-787c77bcf4-qjb7k" podStartSLOduration=23.416786074 podStartE2EDuration="30.12686087s" podCreationTimestamp="2026-04-21 10:08:30 +0000 UTC" firstStartedPulling="2026-04-21 10:08:52.027687862 +0000 UTC m=+45.659200333" lastFinishedPulling="2026-04-21 10:08:58.737762698 +0000 UTC m=+52.369275129" observedRunningTime="2026-04-21 10:08:59.733066011 +0000 UTC m=+53.364578442" watchObservedRunningTime="2026-04-21 10:09:00.12686087 +0000 UTC m=+53.758373301" Apr 21 10:09:00.926781 containerd[1723]: time="2026-04-21T10:09:00.926726652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:00.929045 containerd[1723]: time="2026-04-21T10:09:00.928900890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 21 10:09:00.931712 containerd[1723]: time="2026-04-21T10:09:00.931653407Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:00.936556 containerd[1723]: time="2026-04-21T10:09:00.936468961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 21 10:09:00.937482 containerd[1723]: time="2026-04-21T10:09:00.937357680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.198518184s" Apr 21 10:09:00.937482 containerd[1723]: time="2026-04-21T10:09:00.937390880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 21 10:09:00.938798 containerd[1723]: time="2026-04-21T10:09:00.938729118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:09:00.958929 containerd[1723]: time="2026-04-21T10:09:00.958882935Z" level=info msg="CreateContainer within sandbox \"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:09:00.993692 containerd[1723]: time="2026-04-21T10:09:00.993565454Z" level=info msg="CreateContainer within sandbox \"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"425a5f3d67db740babb736bf70cc7718db8bc84c98f6c804b2decdf5654b9c63\"" Apr 21 10:09:00.995335 containerd[1723]: time="2026-04-21T10:09:00.994429293Z" level=info msg="StartContainer for \"425a5f3d67db740babb736bf70cc7718db8bc84c98f6c804b2decdf5654b9c63\"" Apr 21 10:09:01.046359 systemd[1]: Started cri-containerd-425a5f3d67db740babb736bf70cc7718db8bc84c98f6c804b2decdf5654b9c63.scope - libcontainer container 425a5f3d67db740babb736bf70cc7718db8bc84c98f6c804b2decdf5654b9c63. 
Apr 21 10:09:02.162470 containerd[1723]: time="2026-04-21T10:09:02.162366244Z" level=info msg="StartContainer for \"425a5f3d67db740babb736bf70cc7718db8bc84c98f6c804b2decdf5654b9c63\" returns successfully" Apr 21 10:09:02.188886 kubelet[3169]: I0421 10:09:02.188726 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8d46b4c69-bmlm6" podStartSLOduration=21.375892398 podStartE2EDuration="30.188708013s" podCreationTimestamp="2026-04-21 10:08:32 +0000 UTC" firstStartedPulling="2026-04-21 10:08:52.125782183 +0000 UTC m=+45.757294654" lastFinishedPulling="2026-04-21 10:09:00.938597798 +0000 UTC m=+54.570110269" observedRunningTime="2026-04-21 10:09:02.185745297 +0000 UTC m=+55.817257768" watchObservedRunningTime="2026-04-21 10:09:02.188708013 +0000 UTC m=+55.820220484" Apr 21 10:09:02.490493 containerd[1723]: time="2026-04-21T10:09:02.490102540Z" level=info msg="StopPodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.540 [INFO][5468] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.541 [INFO][5468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" iface="eth0" netns="/var/run/netns/cni-9b758e26-19ad-251e-5dbc-789e5b33e10e" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.541 [INFO][5468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" iface="eth0" netns="/var/run/netns/cni-9b758e26-19ad-251e-5dbc-789e5b33e10e" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.541 [INFO][5468] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" iface="eth0" netns="/var/run/netns/cni-9b758e26-19ad-251e-5dbc-789e5b33e10e" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.541 [INFO][5468] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.541 [INFO][5468] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.563 [INFO][5475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.563 [INFO][5475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.563 [INFO][5475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.571 [WARNING][5475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.571 [INFO][5475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.572 [INFO][5475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:02.576422 containerd[1723]: 2026-04-21 10:09:02.574 [INFO][5468] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:02.578096 containerd[1723]: time="2026-04-21T10:09:02.577960637Z" level=info msg="TearDown network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" successfully" Apr 21 10:09:02.578096 containerd[1723]: time="2026-04-21T10:09:02.577995357Z" level=info msg="StopPodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" returns successfully" Apr 21 10:09:02.580284 systemd[1]: run-netns-cni\x2d9b758e26\x2d19ad\x2d251e\x2d5dbc\x2d789e5b33e10e.mount: Deactivated successfully. 
Apr 21 10:09:02.586226 containerd[1723]: time="2026-04-21T10:09:02.585868708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8mkbc,Uid:0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99,Namespace:kube-system,Attempt:1,}" Apr 21 10:09:02.723106 systemd-networkd[1602]: calia1eef1cb5b4: Link UP Apr 21 10:09:02.724797 systemd-networkd[1602]: calia1eef1cb5b4: Gained carrier Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.655 [INFO][5482] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0 coredns-66bc5c9577- kube-system 0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99 1009 0 2026-04-21 10:08:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf coredns-66bc5c9577-8mkbc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1eef1cb5b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.655 [INFO][5482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.677 [INFO][5495] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" 
HandleID="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.687 [INFO][5495] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" HandleID="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273280), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"coredns-66bc5c9577-8mkbc", "timestamp":"2026-04-21 10:09:02.677789 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024d1e0)} Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.688 [INFO][5495] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.688 [INFO][5495] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.688 [INFO][5495] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.690 [INFO][5495] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.693 [INFO][5495] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.697 [INFO][5495] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.699 [INFO][5495] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.701 [INFO][5495] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.701 [INFO][5495] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.702 [INFO][5495] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368 Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.708 [INFO][5495] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.717 [INFO][5495] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.199/26] block=192.168.69.192/26 handle="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.718 [INFO][5495] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.199/26] handle="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.718 [INFO][5495] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:02.758283 containerd[1723]: 2026-04-21 10:09:02.718 [INFO][5495] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.199/26] IPv6=[] ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" HandleID="k8s-pod-network.8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.760002 containerd[1723]: 2026-04-21 10:09:02.720 [INFO][5482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"coredns-66bc5c9577-8mkbc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1eef1cb5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:02.760002 containerd[1723]: 2026-04-21 10:09:02.720 [INFO][5482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.199/32] ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.760002 containerd[1723]: 2026-04-21 10:09:02.720 [INFO][5482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1eef1cb5b4 
ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.760002 containerd[1723]: 2026-04-21 10:09:02.725 [INFO][5482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.760002 containerd[1723]: 2026-04-21 10:09:02.726 [INFO][5482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368", 
Pod:"coredns-66bc5c9577-8mkbc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1eef1cb5b4", MAC:"96:19:d0:97:84:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:02.760185 containerd[1723]: 2026-04-21 10:09:02.752 [INFO][5482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368" Namespace="kube-system" Pod="coredns-66bc5c9577-8mkbc" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:02.802835 containerd[1723]: time="2026-04-21T10:09:02.802581654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:09:02.802835 containerd[1723]: time="2026-04-21T10:09:02.802638174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:09:02.803511 containerd[1723]: time="2026-04-21T10:09:02.803107573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:02.805236 containerd[1723]: time="2026-04-21T10:09:02.803230173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:02.827393 systemd[1]: Started cri-containerd-8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368.scope - libcontainer container 8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368. Apr 21 10:09:02.861499 containerd[1723]: time="2026-04-21T10:09:02.861390105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8mkbc,Uid:0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368\"" Apr 21 10:09:02.874574 containerd[1723]: time="2026-04-21T10:09:02.874341610Z" level=info msg="CreateContainer within sandbox \"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:09:02.906892 containerd[1723]: time="2026-04-21T10:09:02.906754692Z" level=info msg="CreateContainer within sandbox \"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc6ab7d9e89e45b6437d925361f83f6dc54186408d1045febfb1ca54f3ab6623\"" Apr 21 10:09:02.908369 containerd[1723]: time="2026-04-21T10:09:02.908335690Z" level=info msg="StartContainer for \"cc6ab7d9e89e45b6437d925361f83f6dc54186408d1045febfb1ca54f3ab6623\"" Apr 21 10:09:02.936375 systemd[1]: Started cri-containerd-cc6ab7d9e89e45b6437d925361f83f6dc54186408d1045febfb1ca54f3ab6623.scope - libcontainer container 
cc6ab7d9e89e45b6437d925361f83f6dc54186408d1045febfb1ca54f3ab6623. Apr 21 10:09:02.969590 containerd[1723]: time="2026-04-21T10:09:02.969545978Z" level=info msg="StartContainer for \"cc6ab7d9e89e45b6437d925361f83f6dc54186408d1045febfb1ca54f3ab6623\" returns successfully" Apr 21 10:09:03.190400 kubelet[3169]: I0421 10:09:03.189618 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8mkbc" podStartSLOduration=50.18960008 podStartE2EDuration="50.18960008s" podCreationTimestamp="2026-04-21 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:09:03.189167561 +0000 UTC m=+56.820680032" watchObservedRunningTime="2026-04-21 10:09:03.18960008 +0000 UTC m=+56.821112551" Apr 21 10:09:04.331349 systemd-networkd[1602]: calia1eef1cb5b4: Gained IPv6LL Apr 21 10:09:04.429085 containerd[1723]: time="2026-04-21T10:09:04.429029948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:04.433562 containerd[1723]: time="2026-04-21T10:09:04.433511063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 21 10:09:04.437458 containerd[1723]: time="2026-04-21T10:09:04.437423978Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:04.444402 containerd[1723]: time="2026-04-21T10:09:04.444338930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:04.445645 containerd[1723]: time="2026-04-21T10:09:04.445101329Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 3.506154211s" Apr 21 10:09:04.445645 containerd[1723]: time="2026-04-21T10:09:04.445134129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 21 10:09:04.452650 containerd[1723]: time="2026-04-21T10:09:04.452609920Z" level=info msg="CreateContainer within sandbox \"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:09:04.499812 containerd[1723]: time="2026-04-21T10:09:04.499718265Z" level=info msg="CreateContainer within sandbox \"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4aeabc6b348e99373fa8adb46c9b6d70bc2a3cd75b4ebb7be7eb60387c1cf6e7\"" Apr 21 10:09:04.501147 containerd[1723]: time="2026-04-21T10:09:04.500761424Z" level=info msg="StartContainer for \"4aeabc6b348e99373fa8adb46c9b6d70bc2a3cd75b4ebb7be7eb60387c1cf6e7\"" Apr 21 10:09:04.535418 systemd[1]: Started cri-containerd-4aeabc6b348e99373fa8adb46c9b6d70bc2a3cd75b4ebb7be7eb60387c1cf6e7.scope - libcontainer container 4aeabc6b348e99373fa8adb46c9b6d70bc2a3cd75b4ebb7be7eb60387c1cf6e7. 
Apr 21 10:09:04.569465 containerd[1723]: time="2026-04-21T10:09:04.569177664Z" level=info msg="StartContainer for \"4aeabc6b348e99373fa8adb46c9b6d70bc2a3cd75b4ebb7be7eb60387c1cf6e7\" returns successfully" Apr 21 10:09:04.571561 containerd[1723]: time="2026-04-21T10:09:04.571330941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:09:05.488426 containerd[1723]: time="2026-04-21T10:09:05.488364786Z" level=info msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.538 [INFO][5683] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.538 [INFO][5683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" iface="eth0" netns="/var/run/netns/cni-cde47ae9-de6e-2bf4-ab9f-3ae42f3b9122" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.539 [INFO][5683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" iface="eth0" netns="/var/run/netns/cni-cde47ae9-de6e-2bf4-ab9f-3ae42f3b9122" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.539 [INFO][5683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" iface="eth0" netns="/var/run/netns/cni-cde47ae9-de6e-2bf4-ab9f-3ae42f3b9122" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.539 [INFO][5683] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.539 [INFO][5683] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.562 [INFO][5690] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.563 [INFO][5690] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.563 [INFO][5690] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.571 [WARNING][5690] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.571 [INFO][5690] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.573 [INFO][5690] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:05.576757 containerd[1723]: 2026-04-21 10:09:05.574 [INFO][5683] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:05.577582 containerd[1723]: time="2026-04-21T10:09:05.577543482Z" level=info msg="TearDown network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" successfully" Apr 21 10:09:05.577582 containerd[1723]: time="2026-04-21T10:09:05.577579482Z" level=info msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" returns successfully" Apr 21 10:09:05.580911 systemd[1]: run-netns-cni\x2dcde47ae9\x2dde6e\x2d2bf4\x2dab9f\x2d3ae42f3b9122.mount: Deactivated successfully. 
Apr 21 10:09:05.585527 containerd[1723]: time="2026-04-21T10:09:05.585494313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvhct,Uid:ca3b0e51-e445-4caa-9b05-d450087178fc,Namespace:calico-system,Attempt:1,}" Apr 21 10:09:05.742784 systemd-networkd[1602]: cali6d714980bfa: Link UP Apr 21 10:09:05.742912 systemd-networkd[1602]: cali6d714980bfa: Gained carrier Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.669 [INFO][5696] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0 csi-node-driver- calico-system ca3b0e51-e445-4caa-9b05-d450087178fc 1038 0 2026-04-21 10:08:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.7-a-75af1c63bf csi-node-driver-qvhct eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6d714980bfa [] [] }} ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.669 [INFO][5696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.696 [INFO][5709] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" 
HandleID="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.706 [INFO][5709] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" HandleID="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-75af1c63bf", "pod":"csi-node-driver-qvhct", "timestamp":"2026-04-21 10:09:05.696235223 +0000 UTC"}, Hostname:"ci-4081.3.7-a-75af1c63bf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c7080)} Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.706 [INFO][5709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.706 [INFO][5709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.706 [INFO][5709] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-75af1c63bf' Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.708 [INFO][5709] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.713 [INFO][5709] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.716 [INFO][5709] ipam/ipam.go 526: Trying affinity for 192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.718 [INFO][5709] ipam/ipam.go 160: Attempting to load block cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.720 [INFO][5709] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.720 [INFO][5709] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.722 [INFO][5709] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.729 [INFO][5709] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.737 [INFO][5709] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.69.200/26] block=192.168.69.192/26 handle="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.737 [INFO][5709] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.69.200/26] handle="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" host="ci-4081.3.7-a-75af1c63bf" Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.737 [INFO][5709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:05.763580 containerd[1723]: 2026-04-21 10:09:05.737 [INFO][5709] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.69.200/26] IPv6=[] ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" HandleID="k8s-pod-network.bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.765042 containerd[1723]: 2026-04-21 10:09:05.740 [INFO][5696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca3b0e51-e445-4caa-9b05-d450087178fc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"", Pod:"csi-node-driver-qvhct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d714980bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:05.765042 containerd[1723]: 2026-04-21 10:09:05.741 [INFO][5696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.200/32] ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.765042 containerd[1723]: 2026-04-21 10:09:05.741 [INFO][5696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d714980bfa ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.765042 containerd[1723]: 2026-04-21 10:09:05.746 [INFO][5696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.765042 
containerd[1723]: 2026-04-21 10:09:05.748 [INFO][5696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca3b0e51-e445-4caa-9b05-d450087178fc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f", Pod:"csi-node-driver-qvhct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d714980bfa", MAC:"5a:fe:3e:f1:86:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:05.765042 containerd[1723]: 
2026-04-21 10:09:05.759 [INFO][5696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f" Namespace="calico-system" Pod="csi-node-driver-qvhct" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:05.786129 containerd[1723]: time="2026-04-21T10:09:05.785431998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:09:05.786129 containerd[1723]: time="2026-04-21T10:09:05.785499398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:09:05.786129 containerd[1723]: time="2026-04-21T10:09:05.785517878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:05.786129 containerd[1723]: time="2026-04-21T10:09:05.785592278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:05.813365 systemd[1]: Started cri-containerd-bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f.scope - libcontainer container bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f. 
Apr 21 10:09:05.850422 containerd[1723]: time="2026-04-21T10:09:05.850315242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvhct,Uid:ca3b0e51-e445-4caa-9b05-d450087178fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f\"" Apr 21 10:09:06.177789 containerd[1723]: time="2026-04-21T10:09:06.176932580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:06.180241 containerd[1723]: time="2026-04-21T10:09:06.180192896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 21 10:09:06.183708 containerd[1723]: time="2026-04-21T10:09:06.183648852Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:06.189308 containerd[1723]: time="2026-04-21T10:09:06.189239005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:06.190225 containerd[1723]: time="2026-04-21T10:09:06.190174524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.618808263s" Apr 21 10:09:06.190285 containerd[1723]: time="2026-04-21T10:09:06.190223204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 21 10:09:06.192690 containerd[1723]: time="2026-04-21T10:09:06.192638641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:09:06.198377 containerd[1723]: time="2026-04-21T10:09:06.198348434Z" level=info msg="CreateContainer within sandbox \"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:09:06.228033 containerd[1723]: time="2026-04-21T10:09:06.227970760Z" level=info msg="CreateContainer within sandbox \"96472592619474d3206896271269e1c15073320e85629763e7659db2e2a0a0e9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c3e91a1418a91151a2ac04958cd3527478cd8903c4caafc6b67a461aa464832d\"" Apr 21 10:09:06.228721 containerd[1723]: time="2026-04-21T10:09:06.228494319Z" level=info msg="StartContainer for \"c3e91a1418a91151a2ac04958cd3527478cd8903c4caafc6b67a461aa464832d\"" Apr 21 10:09:06.260545 systemd[1]: Started cri-containerd-c3e91a1418a91151a2ac04958cd3527478cd8903c4caafc6b67a461aa464832d.scope - libcontainer container c3e91a1418a91151a2ac04958cd3527478cd8903c4caafc6b67a461aa464832d. Apr 21 10:09:06.296184 containerd[1723]: time="2026-04-21T10:09:06.296027720Z" level=info msg="StartContainer for \"c3e91a1418a91151a2ac04958cd3527478cd8903c4caafc6b67a461aa464832d\" returns successfully" Apr 21 10:09:06.492207 containerd[1723]: time="2026-04-21T10:09:06.492060805Z" level=info msg="StopPodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.533 [WARNING][5832] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368", Pod:"coredns-66bc5c9577-8mkbc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1eef1cb5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.533 [INFO][5832] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.533 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" iface="eth0" netns="" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.533 [INFO][5832] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.533 [INFO][5832] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.560 [INFO][5839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.560 [INFO][5839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.560 [INFO][5839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.570 [WARNING][5839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.570 [INFO][5839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.571 [INFO][5839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.575722 containerd[1723]: 2026-04-21 10:09:06.573 [INFO][5832] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.575722 containerd[1723]: time="2026-04-21T10:09:06.575414424Z" level=info msg="TearDown network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" successfully" Apr 21 10:09:06.575722 containerd[1723]: time="2026-04-21T10:09:06.575437624Z" level=info msg="StopPodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" returns successfully" Apr 21 10:09:06.576179 containerd[1723]: time="2026-04-21T10:09:06.575955984Z" level=info msg="RemovePodSandbox for \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" Apr 21 10:09:06.591839 containerd[1723]: time="2026-04-21T10:09:06.591743005Z" level=info msg="Forcibly stopping sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\"" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.627 [WARNING][5854] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0c4c4f7a-7f93-41ec-8ebb-ecaa56e9cc99", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"8c9597f9089fdf8a871a78fe138c9485a8bde003ca92b186ea763865639c0368", Pod:"coredns-66bc5c9577-8mkbc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1eef1cb5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.627 [INFO][5854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.627 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" iface="eth0" netns="" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.627 [INFO][5854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.627 [INFO][5854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.649 [INFO][5862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.649 [INFO][5862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.649 [INFO][5862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.663 [WARNING][5862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.663 [INFO][5862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" HandleID="k8s-pod-network.47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--8mkbc-eth0" Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.665 [INFO][5862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.671757 containerd[1723]: 2026-04-21 10:09:06.667 [INFO][5854] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037" Apr 21 10:09:06.671757 containerd[1723]: time="2026-04-21T10:09:06.671475429Z" level=info msg="TearDown network for sandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" successfully" Apr 21 10:09:06.692228 containerd[1723]: time="2026-04-21T10:09:06.691965284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:06.692228 containerd[1723]: time="2026-04-21T10:09:06.692065484Z" level=info msg="RemovePodSandbox \"47542b8f7f437aeaedc2ec20f3cbe0d5976a929294b3dc91bb94e3a86797f037\" returns successfully" Apr 21 10:09:06.692609 containerd[1723]: time="2026-04-21T10:09:06.692586043Z" level=info msg="StopPodSandbox for \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\"" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.729 [WARNING][5879] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0", GenerateName:"calico-kube-controllers-8d46b4c69-", Namespace:"calico-system", SelfLink:"", UID:"fd0b3f4f-bf33-432f-baf4-3f22428628b4", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d46b4c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a", Pod:"calico-kube-controllers-8d46b4c69-bmlm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali780558273ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.729 [INFO][5879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.729 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" iface="eth0" netns="" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.729 [INFO][5879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.729 [INFO][5879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.749 [INFO][5886] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.749 [INFO][5886] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.749 [INFO][5886] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.757 [WARNING][5886] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.757 [INFO][5886] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.759 [INFO][5886] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.762928 containerd[1723]: 2026-04-21 10:09:06.760 [INFO][5879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.762928 containerd[1723]: time="2026-04-21T10:09:06.762902118Z" level=info msg="TearDown network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" successfully" Apr 21 10:09:06.762928 containerd[1723]: time="2026-04-21T10:09:06.762926958Z" level=info msg="StopPodSandbox for \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" returns successfully" Apr 21 10:09:06.764363 containerd[1723]: time="2026-04-21T10:09:06.763783877Z" level=info msg="RemovePodSandbox for \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\"" Apr 21 10:09:06.764363 containerd[1723]: time="2026-04-21T10:09:06.763815837Z" level=info msg="Forcibly stopping sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\"" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.795 [WARNING][5900] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0", GenerateName:"calico-kube-controllers-8d46b4c69-", Namespace:"calico-system", SelfLink:"", UID:"fd0b3f4f-bf33-432f-baf4-3f22428628b4", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d46b4c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb0ccbd2bf1280ee827995ff30293244c00b51832e3171e27523cef691d1646a", Pod:"calico-kube-controllers-8d46b4c69-bmlm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali780558273ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.795 [INFO][5900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.795 [INFO][5900] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" iface="eth0" netns="" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.795 [INFO][5900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.795 [INFO][5900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.815 [INFO][5907] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.816 [INFO][5907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.816 [INFO][5907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.824 [WARNING][5907] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.824 [INFO][5907] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" HandleID="k8s-pod-network.dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--kube--controllers--8d46b4c69--bmlm6-eth0" Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.825 [INFO][5907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.829284 containerd[1723]: 2026-04-21 10:09:06.827 [INFO][5900] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd" Apr 21 10:09:06.829686 containerd[1723]: time="2026-04-21T10:09:06.829323238Z" level=info msg="TearDown network for sandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" successfully" Apr 21 10:09:06.836044 containerd[1723]: time="2026-04-21T10:09:06.835993310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:06.836123 containerd[1723]: time="2026-04-21T10:09:06.836111590Z" level=info msg="RemovePodSandbox \"dea28539263f509f779d3cf212a86ce3b87f458253646acabe627d8c209d0ecd\" returns successfully" Apr 21 10:09:06.836664 containerd[1723]: time="2026-04-21T10:09:06.836638950Z" level=info msg="StopPodSandbox for \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\"" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.868 [WARNING][5921] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"098e608f-9fb3-48c3-ba27-9bbae9770798", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88", Pod:"goldmane-cccfbd5cf-5mhdm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid8d429d8a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.868 [INFO][5921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.868 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" iface="eth0" netns="" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.868 [INFO][5921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.868 [INFO][5921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.888 [INFO][5928] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.889 [INFO][5928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.889 [INFO][5928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.897 [WARNING][5928] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.897 [INFO][5928] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.898 [INFO][5928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.902335 containerd[1723]: 2026-04-21 10:09:06.900 [INFO][5921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.902838 containerd[1723]: time="2026-04-21T10:09:06.902374590Z" level=info msg="TearDown network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" successfully" Apr 21 10:09:06.902838 containerd[1723]: time="2026-04-21T10:09:06.902398870Z" level=info msg="StopPodSandbox for \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" returns successfully" Apr 21 10:09:06.903654 containerd[1723]: time="2026-04-21T10:09:06.903343589Z" level=info msg="RemovePodSandbox for \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\"" Apr 21 10:09:06.903654 containerd[1723]: time="2026-04-21T10:09:06.903383789Z" level=info msg="Forcibly stopping sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\"" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.938 [WARNING][5942] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"098e608f-9fb3-48c3-ba27-9bbae9770798", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"755e5469af6a7d8ae05de7042801faecd57c4120364b9d52ef7f880d8429db88", Pod:"goldmane-cccfbd5cf-5mhdm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid8d429d8a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.939 [INFO][5942] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.939 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" iface="eth0" netns="" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.939 [INFO][5942] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.939 [INFO][5942] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.958 [INFO][5949] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.958 [INFO][5949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.958 [INFO][5949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.966 [WARNING][5949] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.966 [INFO][5949] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" HandleID="k8s-pod-network.298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Workload="ci--4081.3.7--a--75af1c63bf-k8s-goldmane--cccfbd5cf--5mhdm-eth0" Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.967 [INFO][5949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:06.971400 containerd[1723]: 2026-04-21 10:09:06.969 [INFO][5942] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736" Apr 21 10:09:06.971400 containerd[1723]: time="2026-04-21T10:09:06.971256787Z" level=info msg="TearDown network for sandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" successfully" Apr 21 10:09:06.977241 containerd[1723]: time="2026-04-21T10:09:06.977192380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:06.977325 containerd[1723]: time="2026-04-21T10:09:06.977282940Z" level=info msg="RemovePodSandbox \"298715f7c25932d684d57051fce2b7cbc5d4747f57770c0be423c82cc2791736\" returns successfully" Apr 21 10:09:06.977990 containerd[1723]: time="2026-04-21T10:09:06.977703779Z" level=info msg="StopPodSandbox for \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\"" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.028 [WARNING][5963] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"8cfed430-eeda-4f9a-8290-98a3835b5d7c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8", Pod:"calico-apiserver-787c77bcf4-qjb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e2b6fa416d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.028 [INFO][5963] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.028 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" iface="eth0" netns="" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.028 [INFO][5963] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.028 [INFO][5963] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.048 [INFO][5971] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.048 [INFO][5971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.048 [INFO][5971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.058 [WARNING][5971] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.058 [INFO][5971] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.059 [INFO][5971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.063222 containerd[1723]: 2026-04-21 10:09:07.061 [INFO][5963] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.063222 containerd[1723]: time="2026-04-21T10:09:07.063173156Z" level=info msg="TearDown network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" successfully" Apr 21 10:09:07.063222 containerd[1723]: time="2026-04-21T10:09:07.063208636Z" level=info msg="StopPodSandbox for \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" returns successfully" Apr 21 10:09:07.063701 containerd[1723]: time="2026-04-21T10:09:07.063642116Z" level=info msg="RemovePodSandbox for \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\"" Apr 21 10:09:07.063701 containerd[1723]: time="2026-04-21T10:09:07.063671196Z" level=info msg="Forcibly stopping sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\"" Apr 21 10:09:07.082455 systemd-networkd[1602]: cali6d714980bfa: Gained IPv6LL Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.100 [WARNING][5985] cni-plugin/k8s.go 616: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"8cfed430-eeda-4f9a-8290-98a3835b5d7c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"46f646f82e10eb186c5ea5352555b899db6eb7059ae4f4c7ecb561ec25ac4ae8", Pod:"calico-apiserver-787c77bcf4-qjb7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e2b6fa416d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.100 [INFO][5985] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.100 [INFO][5985] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" iface="eth0" netns="" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.101 [INFO][5985] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.101 [INFO][5985] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.122 [INFO][5996] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.122 [INFO][5996] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.122 [INFO][5996] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.130 [WARNING][5996] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.130 [INFO][5996] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" HandleID="k8s-pod-network.fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--qjb7k-eth0" Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.132 [INFO][5996] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.135378 containerd[1723]: 2026-04-21 10:09:07.133 [INFO][5985] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985" Apr 21 10:09:07.135779 containerd[1723]: time="2026-04-21T10:09:07.135429269Z" level=info msg="TearDown network for sandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" successfully" Apr 21 10:09:07.142311 containerd[1723]: time="2026-04-21T10:09:07.142255981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:07.142411 containerd[1723]: time="2026-04-21T10:09:07.142378301Z" level=info msg="RemovePodSandbox \"fea0782a257a66d5c1e4faa8b4a4042149bbe64c27ffdb808f15f706f0112985\" returns successfully" Apr 21 10:09:07.142893 containerd[1723]: time="2026-04-21T10:09:07.142868020Z" level=info msg="StopPodSandbox for \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\"" Apr 21 10:09:07.216353 kubelet[3169]: I0421 10:09:07.216285 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-595fc866cf-qh6x4" podStartSLOduration=2.5465843489999997 podStartE2EDuration="16.216267532s" podCreationTimestamp="2026-04-21 10:08:51 +0000 UTC" firstStartedPulling="2026-04-21 10:08:52.5216355 +0000 UTC m=+46.153147971" lastFinishedPulling="2026-04-21 10:09:06.191318723 +0000 UTC m=+59.822831154" observedRunningTime="2026-04-21 10:09:07.213393375 +0000 UTC m=+60.844905846" watchObservedRunningTime="2026-04-21 10:09:07.216267532 +0000 UTC m=+60.847780003" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.177 [WARNING][6011] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"05b48f72-8534-4b03-b630-b58ade237fcc", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8", Pod:"calico-apiserver-787c77bcf4-fql5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4cdd4e056a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.177 [INFO][6011] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.177 [INFO][6011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" iface="eth0" netns="" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.177 [INFO][6011] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.177 [INFO][6011] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.205 [INFO][6018] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.205 [INFO][6018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.205 [INFO][6018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.216 [WARNING][6018] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.216 [INFO][6018] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.223 [INFO][6018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.229054 containerd[1723]: 2026-04-21 10:09:07.226 [INFO][6011] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.229486 containerd[1723]: time="2026-04-21T10:09:07.229109116Z" level=info msg="TearDown network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" successfully" Apr 21 10:09:07.229486 containerd[1723]: time="2026-04-21T10:09:07.229144436Z" level=info msg="StopPodSandbox for \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" returns successfully" Apr 21 10:09:07.229977 containerd[1723]: time="2026-04-21T10:09:07.229897275Z" level=info msg="RemovePodSandbox for \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\"" Apr 21 10:09:07.229977 containerd[1723]: time="2026-04-21T10:09:07.229931515Z" level=info msg="Forcibly stopping sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\"" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.269 [WARNING][6033] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0", GenerateName:"calico-apiserver-787c77bcf4-", Namespace:"calico-system", SelfLink:"", UID:"05b48f72-8534-4b03-b630-b58ade237fcc", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"787c77bcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"24e8bab806abdd6a5cc1c0822e14388115ea598fd17cb84a2d0d7a27953c96a8", Pod:"calico-apiserver-787c77bcf4-fql5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4cdd4e056a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.270 [INFO][6033] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.270 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" iface="eth0" netns="" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.270 [INFO][6033] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.270 [INFO][6033] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.290 [INFO][6041] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.290 [INFO][6041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.290 [INFO][6041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.299 [WARNING][6041] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.299 [INFO][6041] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" HandleID="k8s-pod-network.64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Workload="ci--4081.3.7--a--75af1c63bf-k8s-calico--apiserver--787c77bcf4--fql5l-eth0" Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.300 [INFO][6041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.303915 containerd[1723]: 2026-04-21 10:09:07.302 [INFO][6033] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c" Apr 21 10:09:07.304936 containerd[1723]: time="2026-04-21T10:09:07.303949746Z" level=info msg="TearDown network for sandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" successfully" Apr 21 10:09:07.311073 containerd[1723]: time="2026-04-21T10:09:07.311034378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:07.311406 containerd[1723]: time="2026-04-21T10:09:07.311303217Z" level=info msg="RemovePodSandbox \"64f34e244037d7dadae86f2bcc4267d7b0615555adefd70e92718b3d3909fd8c\" returns successfully" Apr 21 10:09:07.311824 containerd[1723]: time="2026-04-21T10:09:07.311799577Z" level=info msg="StopPodSandbox for \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\"" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.347 [WARNING][6055] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e7c5d3e9-033b-4479-b2b2-6500dd6b1041", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381", Pod:"coredns-66bc5c9577-wht27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77830c93bd6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.347 [INFO][6055] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.347 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" iface="eth0" netns="" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.347 [INFO][6055] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.347 [INFO][6055] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.367 [INFO][6063] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.367 [INFO][6063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.367 [INFO][6063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.376 [WARNING][6063] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.376 [INFO][6063] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.378 [INFO][6063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.382739 containerd[1723]: 2026-04-21 10:09:07.380 [INFO][6055] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.382739 containerd[1723]: time="2026-04-21T10:09:07.382711851Z" level=info msg="TearDown network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" successfully" Apr 21 10:09:07.382739 containerd[1723]: time="2026-04-21T10:09:07.382738771Z" level=info msg="StopPodSandbox for \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" returns successfully" Apr 21 10:09:07.383796 containerd[1723]: time="2026-04-21T10:09:07.383165131Z" level=info msg="RemovePodSandbox for \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\"" Apr 21 10:09:07.383796 containerd[1723]: time="2026-04-21T10:09:07.383192651Z" level=info msg="Forcibly stopping sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\"" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.418 [WARNING][6077] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e7c5d3e9-033b-4479-b2b2-6500dd6b1041", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"91feab99e98f4b4bba22e80c8dbc0de2da6972791a48f825d341ce598e2b0381", Pod:"coredns-66bc5c9577-wht27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77830c93bd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.418 [INFO][6077] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.418 [INFO][6077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" iface="eth0" netns="" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.418 [INFO][6077] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.418 [INFO][6077] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.436 [INFO][6085] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.436 [INFO][6085] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.436 [INFO][6085] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.445 [WARNING][6085] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.445 [INFO][6085] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" HandleID="k8s-pod-network.f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Workload="ci--4081.3.7--a--75af1c63bf-k8s-coredns--66bc5c9577--wht27-eth0" Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.446 [INFO][6085] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.450188 containerd[1723]: 2026-04-21 10:09:07.448 [INFO][6077] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427" Apr 21 10:09:07.450728 containerd[1723]: time="2026-04-21T10:09:07.450247690Z" level=info msg="TearDown network for sandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" successfully" Apr 21 10:09:07.459835 containerd[1723]: time="2026-04-21T10:09:07.459782558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:07.460130 containerd[1723]: time="2026-04-21T10:09:07.459860958Z" level=info msg="RemovePodSandbox \"f09e1a1849185d541e0c12cd4014a406da7fe3ee1dd020cbc5790f3fbfbd7427\" returns successfully" Apr 21 10:09:07.460645 containerd[1723]: time="2026-04-21T10:09:07.460365997Z" level=info msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.493 [WARNING][6099] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca3b0e51-e445-4caa-9b05-d450087178fc", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f", Pod:"csi-node-driver-qvhct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d714980bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.494 [INFO][6099] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.494 [INFO][6099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" iface="eth0" netns="" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.494 [INFO][6099] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.494 [INFO][6099] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.515 [INFO][6106] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.515 [INFO][6106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.515 [INFO][6106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.524 [WARNING][6106] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.524 [INFO][6106] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.525 [INFO][6106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.529351 containerd[1723]: 2026-04-21 10:09:07.527 [INFO][6099] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.530760 containerd[1723]: time="2026-04-21T10:09:07.530040113Z" level=info msg="TearDown network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" successfully" Apr 21 10:09:07.530760 containerd[1723]: time="2026-04-21T10:09:07.530067793Z" level=info msg="StopPodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" returns successfully" Apr 21 10:09:07.530760 containerd[1723]: time="2026-04-21T10:09:07.530493233Z" level=info msg="RemovePodSandbox for \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" Apr 21 10:09:07.530760 containerd[1723]: time="2026-04-21T10:09:07.530520673Z" level=info msg="Forcibly stopping sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\"" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.573 [WARNING][6121] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca3b0e51-e445-4caa-9b05-d450087178fc", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-75af1c63bf", ContainerID:"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f", Pod:"csi-node-driver-qvhct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d714980bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.573 [INFO][6121] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.573 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" iface="eth0" netns="" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.573 [INFO][6121] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.573 [INFO][6121] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.596 [INFO][6128] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.596 [INFO][6128] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.596 [INFO][6128] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.606 [WARNING][6128] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.606 [INFO][6128] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" HandleID="k8s-pod-network.29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Workload="ci--4081.3.7--a--75af1c63bf-k8s-csi--node--driver--qvhct-eth0" Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.608 [INFO][6128] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.613598 containerd[1723]: 2026-04-21 10:09:07.610 [INFO][6121] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85" Apr 21 10:09:07.614004 containerd[1723]: time="2026-04-21T10:09:07.613629653Z" level=info msg="TearDown network for sandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" successfully" Apr 21 10:09:07.621182 containerd[1723]: time="2026-04-21T10:09:07.620678284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:07.621182 containerd[1723]: time="2026-04-21T10:09:07.620752804Z" level=info msg="RemovePodSandbox \"29adabfa4ab6e479b4edd2daa96a10182184892439cbb9f51307479558745c85\" returns successfully" Apr 21 10:09:07.621511 containerd[1723]: time="2026-04-21T10:09:07.621441643Z" level=info msg="StopPodSandbox for \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\"" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.683 [WARNING][6146] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.683 [INFO][6146] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.683 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" iface="eth0" netns="" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.683 [INFO][6146] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.683 [INFO][6146] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.705 [INFO][6153] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.705 [INFO][6153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.705 [INFO][6153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.718 [WARNING][6153] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.718 [INFO][6153] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.721 [INFO][6153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.725258 containerd[1723]: 2026-04-21 10:09:07.722 [INFO][6146] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.725258 containerd[1723]: time="2026-04-21T10:09:07.725047838Z" level=info msg="TearDown network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" successfully" Apr 21 10:09:07.725258 containerd[1723]: time="2026-04-21T10:09:07.725072838Z" level=info msg="StopPodSandbox for \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" returns successfully" Apr 21 10:09:07.758407 containerd[1723]: time="2026-04-21T10:09:07.726128437Z" level=info msg="RemovePodSandbox for \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\"" Apr 21 10:09:07.758407 containerd[1723]: time="2026-04-21T10:09:07.726159277Z" level=info msg="Forcibly stopping sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\"" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.771 [WARNING][6168] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" WorkloadEndpoint="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.771 [INFO][6168] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.771 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" iface="eth0" netns="" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.771 [INFO][6168] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.771 [INFO][6168] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.789 [INFO][6175] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.790 [INFO][6175] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.790 [INFO][6175] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.798 [WARNING][6175] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.798 [INFO][6175] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" HandleID="k8s-pod-network.4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Workload="ci--4081.3.7--a--75af1c63bf-k8s-whisker--766d5c7cc4--7z5lm-eth0" Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.799 [INFO][6175] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:09:07.802923 containerd[1723]: 2026-04-21 10:09:07.801 [INFO][6168] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97" Apr 21 10:09:07.804283 containerd[1723]: time="2026-04-21T10:09:07.803015344Z" level=info msg="TearDown network for sandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" successfully" Apr 21 10:09:07.827570 containerd[1723]: time="2026-04-21T10:09:07.827523035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:09:07.827836 containerd[1723]: time="2026-04-21T10:09:07.827817794Z" level=info msg="RemovePodSandbox \"4c87a646b22bdd9c7b4c3d7b8fc1d530516262c183772a421fe5cb69cac54b97\" returns successfully" Apr 21 10:09:07.837667 containerd[1723]: time="2026-04-21T10:09:07.837624823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 21 10:09:07.839104 containerd[1723]: time="2026-04-21T10:09:07.839075541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:07.839970 containerd[1723]: time="2026-04-21T10:09:07.839939260Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:07.842048 containerd[1723]: time="2026-04-21T10:09:07.842022617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:07.842954 containerd[1723]: time="2026-04-21T10:09:07.842927056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.650107935s" Apr 21 10:09:07.843139 containerd[1723]: time="2026-04-21T10:09:07.843049896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 21 10:09:07.850796 containerd[1723]: time="2026-04-21T10:09:07.850760767Z" level=info msg="CreateContainer within sandbox 
\"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:09:07.877175 containerd[1723]: time="2026-04-21T10:09:07.877130975Z" level=info msg="CreateContainer within sandbox \"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"57a57cf31f351f3ce99e0fa03a75f4afeedadba855c0098e4df95abd118aae27\"" Apr 21 10:09:07.877929 containerd[1723]: time="2026-04-21T10:09:07.877901134Z" level=info msg="StartContainer for \"57a57cf31f351f3ce99e0fa03a75f4afeedadba855c0098e4df95abd118aae27\"" Apr 21 10:09:07.913354 systemd[1]: Started cri-containerd-57a57cf31f351f3ce99e0fa03a75f4afeedadba855c0098e4df95abd118aae27.scope - libcontainer container 57a57cf31f351f3ce99e0fa03a75f4afeedadba855c0098e4df95abd118aae27. Apr 21 10:09:07.942273 containerd[1723]: time="2026-04-21T10:09:07.942130457Z" level=info msg="StartContainer for \"57a57cf31f351f3ce99e0fa03a75f4afeedadba855c0098e4df95abd118aae27\" returns successfully" Apr 21 10:09:07.943930 containerd[1723]: time="2026-04-21T10:09:07.943420695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:09:09.386940 containerd[1723]: time="2026-04-21T10:09:09.386898475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:09.389771 containerd[1723]: time="2026-04-21T10:09:09.389729871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 21 10:09:09.393928 containerd[1723]: time="2026-04-21T10:09:09.392778948Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:09.396731 containerd[1723]: 
time="2026-04-21T10:09:09.396701543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:09.397406 containerd[1723]: time="2026-04-21T10:09:09.397360342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.453907727s" Apr 21 10:09:09.397468 containerd[1723]: time="2026-04-21T10:09:09.397405422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 21 10:09:09.404877 containerd[1723]: time="2026-04-21T10:09:09.404839293Z" level=info msg="CreateContainer within sandbox \"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:09:09.442951 containerd[1723]: time="2026-04-21T10:09:09.442903607Z" level=info msg="CreateContainer within sandbox \"bb596a6d789832b56441cfb8297a5e8eaf4d2d85d08308d52654430b34534b9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e0698c1cbd4bb80c06ad61350935945fea5ef67b190828bd46fecf5ba9a222ac\"" Apr 21 10:09:09.443868 containerd[1723]: time="2026-04-21T10:09:09.443689046Z" level=info msg="StartContainer for \"e0698c1cbd4bb80c06ad61350935945fea5ef67b190828bd46fecf5ba9a222ac\"" Apr 21 10:09:09.477362 systemd[1]: Started cri-containerd-e0698c1cbd4bb80c06ad61350935945fea5ef67b190828bd46fecf5ba9a222ac.scope - libcontainer container 
e0698c1cbd4bb80c06ad61350935945fea5ef67b190828bd46fecf5ba9a222ac. Apr 21 10:09:09.507974 containerd[1723]: time="2026-04-21T10:09:09.507928169Z" level=info msg="StartContainer for \"e0698c1cbd4bb80c06ad61350935945fea5ef67b190828bd46fecf5ba9a222ac\" returns successfully" Apr 21 10:09:09.572049 kubelet[3169]: I0421 10:09:09.572016 3169 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:09:09.576691 kubelet[3169]: I0421 10:09:09.576660 3169 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:09:10.227582 kubelet[3169]: I0421 10:09:10.226849 3169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qvhct" podStartSLOduration=34.680192242 podStartE2EDuration="38.226833622s" podCreationTimestamp="2026-04-21 10:08:32 +0000 UTC" firstStartedPulling="2026-04-21 10:09:05.851808001 +0000 UTC m=+59.483320472" lastFinishedPulling="2026-04-21 10:09:09.398449381 +0000 UTC m=+63.029961852" observedRunningTime="2026-04-21 10:09:10.226526782 +0000 UTC m=+63.858039253" watchObservedRunningTime="2026-04-21 10:09:10.226833622 +0000 UTC m=+63.858346093" Apr 21 10:09:27.629421 kubelet[3169]: I0421 10:09:27.629277 3169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:01.206434 systemd[1]: Started sshd@7-10.0.0.5:22-20.229.252.112:44484.service - OpenSSH per-connection server daemon (20.229.252.112:44484). Apr 21 10:10:02.119782 sshd[6467]: Accepted publickey for core from 20.229.252.112 port 44484 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:02.121709 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:02.125684 systemd-logind[1682]: New session 10 of user core. 
Apr 21 10:10:02.132343 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:10:02.829836 sshd[6467]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:02.833385 systemd[1]: sshd@7-10.0.0.5:22-20.229.252.112:44484.service: Deactivated successfully. Apr 21 10:10:02.837758 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:10:02.838414 systemd-logind[1682]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:10:02.839716 systemd-logind[1682]: Removed session 10. Apr 21 10:10:07.989621 systemd[1]: Started sshd@8-10.0.0.5:22-20.229.252.112:52916.service - OpenSSH per-connection server daemon (20.229.252.112:52916). Apr 21 10:10:08.914189 sshd[6503]: Accepted publickey for core from 20.229.252.112 port 52916 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:08.916176 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:08.920978 systemd-logind[1682]: New session 11 of user core. Apr 21 10:10:08.926353 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:10:09.608328 sshd[6503]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:09.612249 systemd[1]: sshd@8-10.0.0.5:22-20.229.252.112:52916.service: Deactivated successfully. Apr 21 10:10:09.614280 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:10:09.615115 systemd-logind[1682]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:10:09.615924 systemd-logind[1682]: Removed session 11. Apr 21 10:10:14.767809 systemd[1]: Started sshd@9-10.0.0.5:22-20.229.252.112:52930.service - OpenSSH per-connection server daemon (20.229.252.112:52930). 
Apr 21 10:10:15.644642 sshd[6547]: Accepted publickey for core from 20.229.252.112 port 52930 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:15.645929 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:15.650154 systemd-logind[1682]: New session 12 of user core. Apr 21 10:10:15.654339 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:10:16.360578 sshd[6547]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:16.364085 systemd[1]: sshd@9-10.0.0.5:22-20.229.252.112:52930.service: Deactivated successfully. Apr 21 10:10:16.365936 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:10:16.366609 systemd-logind[1682]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:10:16.367609 systemd-logind[1682]: Removed session 12. Apr 21 10:10:21.508137 systemd[1]: Started sshd@10-10.0.0.5:22-20.229.252.112:50816.service - OpenSSH per-connection server daemon (20.229.252.112:50816). Apr 21 10:10:22.396818 sshd[6566]: Accepted publickey for core from 20.229.252.112 port 50816 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:22.398890 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:22.405324 systemd-logind[1682]: New session 13 of user core. Apr 21 10:10:22.410409 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:10:23.073435 sshd[6566]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:23.076593 systemd[1]: sshd@10-10.0.0.5:22-20.229.252.112:50816.service: Deactivated successfully. Apr 21 10:10:23.078827 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:10:23.079488 systemd-logind[1682]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:10:23.080306 systemd-logind[1682]: Removed session 13. 
Apr 21 10:10:28.236446 systemd[1]: Started sshd@11-10.0.0.5:22-20.229.252.112:52684.service - OpenSSH per-connection server daemon (20.229.252.112:52684). Apr 21 10:10:29.133756 sshd[6659]: Accepted publickey for core from 20.229.252.112 port 52684 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:29.135475 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:29.139444 systemd-logind[1682]: New session 14 of user core. Apr 21 10:10:29.146504 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:10:29.830583 sshd[6659]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:29.834247 systemd[1]: sshd@11-10.0.0.5:22-20.229.252.112:52684.service: Deactivated successfully. Apr 21 10:10:29.836125 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:10:29.836813 systemd-logind[1682]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:10:29.837670 systemd-logind[1682]: Removed session 14. Apr 21 10:10:34.997442 systemd[1]: Started sshd@12-10.0.0.5:22-20.229.252.112:52696.service - OpenSSH per-connection server daemon (20.229.252.112:52696). Apr 21 10:10:35.911953 sshd[6711]: Accepted publickey for core from 20.229.252.112 port 52696 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:35.914102 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:35.919326 systemd-logind[1682]: New session 15 of user core. Apr 21 10:10:35.924350 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:10:36.615736 sshd[6711]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:36.619368 systemd[1]: sshd@12-10.0.0.5:22-20.229.252.112:52696.service: Deactivated successfully. Apr 21 10:10:36.622528 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:10:36.624657 systemd-logind[1682]: Session 15 logged out. Waiting for processes to exit. 
Apr 21 10:10:36.625541 systemd-logind[1682]: Removed session 15. Apr 21 10:10:36.781653 systemd[1]: Started sshd@13-10.0.0.5:22-20.229.252.112:32790.service - OpenSSH per-connection server daemon (20.229.252.112:32790). Apr 21 10:10:37.691875 sshd[6740]: Accepted publickey for core from 20.229.252.112 port 32790 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:37.693581 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:37.697677 systemd-logind[1682]: New session 16 of user core. Apr 21 10:10:37.701336 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:10:38.444709 sshd[6740]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:38.448690 systemd-logind[1682]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:10:38.450348 systemd[1]: sshd@13-10.0.0.5:22-20.229.252.112:32790.service: Deactivated successfully. Apr 21 10:10:38.454672 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:10:38.458114 systemd-logind[1682]: Removed session 16. Apr 21 10:10:38.587434 systemd[1]: Started sshd@14-10.0.0.5:22-20.229.252.112:32800.service - OpenSSH per-connection server daemon (20.229.252.112:32800). Apr 21 10:10:39.500116 sshd[6751]: Accepted publickey for core from 20.229.252.112 port 32800 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM Apr 21 10:10:39.501724 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:10:39.505954 systemd-logind[1682]: New session 17 of user core. Apr 21 10:10:39.508347 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:10:40.205164 sshd[6751]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:40.208659 systemd[1]: sshd@14-10.0.0.5:22-20.229.252.112:32800.service: Deactivated successfully. Apr 21 10:10:40.210677 systemd[1]: session-17.scope: Deactivated successfully. 
Apr 21 10:10:40.212733 systemd-logind[1682]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:10:40.213887 systemd-logind[1682]: Removed session 17.
Apr 21 10:10:45.363437 systemd[1]: Started sshd@15-10.0.0.5:22-20.229.252.112:42056.service - OpenSSH per-connection server daemon (20.229.252.112:42056).
Apr 21 10:10:46.275247 sshd[6766]: Accepted publickey for core from 20.229.252.112 port 42056 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:10:46.276094 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:10:46.280110 systemd-logind[1682]: New session 18 of user core.
Apr 21 10:10:46.286333 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:10:46.967781 sshd[6766]: pam_unix(sshd:session): session closed for user core
Apr 21 10:10:46.970920 systemd[1]: sshd@15-10.0.0.5:22-20.229.252.112:42056.service: Deactivated successfully.
Apr 21 10:10:46.973425 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:10:46.975696 systemd-logind[1682]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:10:46.977502 systemd-logind[1682]: Removed session 18.
Apr 21 10:10:47.129186 systemd[1]: Started sshd@16-10.0.0.5:22-20.229.252.112:42062.service - OpenSSH per-connection server daemon (20.229.252.112:42062).
Apr 21 10:10:48.044720 sshd[6778]: Accepted publickey for core from 20.229.252.112 port 42062 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:10:48.045581 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:10:48.049107 systemd-logind[1682]: New session 19 of user core.
Apr 21 10:10:48.060348 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:10:48.865936 sshd[6778]: pam_unix(sshd:session): session closed for user core
Apr 21 10:10:48.868617 systemd-logind[1682]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:10:48.869543 systemd[1]: sshd@16-10.0.0.5:22-20.229.252.112:42062.service: Deactivated successfully.
Apr 21 10:10:48.871683 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:10:48.873159 systemd-logind[1682]: Removed session 19.
Apr 21 10:10:49.024471 systemd[1]: Started sshd@17-10.0.0.5:22-20.229.252.112:42074.service - OpenSSH per-connection server daemon (20.229.252.112:42074).
Apr 21 10:10:49.922977 sshd[6788]: Accepted publickey for core from 20.229.252.112 port 42074 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:10:49.923871 sshd[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:10:49.927991 systemd-logind[1682]: New session 20 of user core.
Apr 21 10:10:49.935346 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:10:51.027596 sshd[6788]: pam_unix(sshd:session): session closed for user core
Apr 21 10:10:51.032389 systemd-logind[1682]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:10:51.032667 systemd[1]: sshd@17-10.0.0.5:22-20.229.252.112:42074.service: Deactivated successfully.
Apr 21 10:10:51.034498 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:10:51.035517 systemd-logind[1682]: Removed session 20.
Apr 21 10:10:51.178798 systemd[1]: Started sshd@18-10.0.0.5:22-20.229.252.112:42080.service - OpenSSH per-connection server daemon (20.229.252.112:42080).
Apr 21 10:10:52.047367 sshd[6820]: Accepted publickey for core from 20.229.252.112 port 42080 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:10:52.048782 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:10:52.052769 systemd-logind[1682]: New session 21 of user core.
Apr 21 10:10:52.056332 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:10:52.850715 sshd[6820]: pam_unix(sshd:session): session closed for user core
Apr 21 10:10:52.853158 systemd[1]: sshd@18-10.0.0.5:22-20.229.252.112:42080.service: Deactivated successfully.
Apr 21 10:10:52.855286 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:10:52.856940 systemd-logind[1682]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:10:52.857835 systemd-logind[1682]: Removed session 21.
Apr 21 10:10:52.989357 systemd[1]: Started sshd@19-10.0.0.5:22-20.229.252.112:42090.service - OpenSSH per-connection server daemon (20.229.252.112:42090).
Apr 21 10:10:53.874578 sshd[6854]: Accepted publickey for core from 20.229.252.112 port 42090 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:10:53.875809 sshd[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:10:53.879733 systemd-logind[1682]: New session 22 of user core.
Apr 21 10:10:53.887333 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:10:54.548419 sshd[6854]: pam_unix(sshd:session): session closed for user core
Apr 21 10:10:54.552270 systemd-logind[1682]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:10:54.552938 systemd[1]: sshd@19-10.0.0.5:22-20.229.252.112:42090.service: Deactivated successfully.
Apr 21 10:10:54.554750 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:10:54.556036 systemd-logind[1682]: Removed session 22.
Apr 21 10:10:59.706066 systemd[1]: Started sshd@20-10.0.0.5:22-20.229.252.112:54596.service - OpenSSH per-connection server daemon (20.229.252.112:54596).
Apr 21 10:11:00.620089 sshd[6889]: Accepted publickey for core from 20.229.252.112 port 54596 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:11:00.621631 sshd[6889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:00.625773 systemd-logind[1682]: New session 23 of user core.
Apr 21 10:11:00.633371 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:11:01.315463 sshd[6889]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:01.318948 systemd[1]: sshd@20-10.0.0.5:22-20.229.252.112:54596.service: Deactivated successfully.
Apr 21 10:11:01.321994 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:11:01.323401 systemd-logind[1682]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:11:01.324184 systemd-logind[1682]: Removed session 23.
Apr 21 10:11:06.480737 systemd[1]: Started sshd@21-10.0.0.5:22-20.229.252.112:33770.service - OpenSSH per-connection server daemon (20.229.252.112:33770).
Apr 21 10:11:07.392769 sshd[6919]: Accepted publickey for core from 20.229.252.112 port 33770 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:11:07.394184 sshd[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:07.398691 systemd-logind[1682]: New session 24 of user core.
Apr 21 10:11:07.408360 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:11:08.087613 sshd[6919]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:08.091650 systemd[1]: sshd@21-10.0.0.5:22-20.229.252.112:33770.service: Deactivated successfully.
Apr 21 10:11:08.095151 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 10:11:08.096555 systemd-logind[1682]: Session 24 logged out. Waiting for processes to exit.
Apr 21 10:11:08.097750 systemd-logind[1682]: Removed session 24.
Apr 21 10:11:10.054428 systemd[1]: Started sshd@22-10.0.0.5:22-147.185.132.129:59586.service - OpenSSH per-connection server daemon (147.185.132.129:59586).
Apr 21 10:11:13.247443 systemd[1]: Started sshd@23-10.0.0.5:22-20.229.252.112:33772.service - OpenSSH per-connection server daemon (20.229.252.112:33772).
Apr 21 10:11:14.154920 sshd[6936]: Accepted publickey for core from 20.229.252.112 port 33772 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:11:14.155894 sshd[6936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:14.159451 systemd-logind[1682]: New session 25 of user core.
Apr 21 10:11:14.164327 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 21 10:11:14.848487 sshd[6936]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:14.851607 systemd[1]: sshd@23-10.0.0.5:22-20.229.252.112:33772.service: Deactivated successfully.
Apr 21 10:11:14.853193 systemd[1]: session-25.scope: Deactivated successfully.
Apr 21 10:11:14.855084 systemd-logind[1682]: Session 25 logged out. Waiting for processes to exit.
Apr 21 10:11:14.855977 systemd-logind[1682]: Removed session 25.
Apr 21 10:11:15.527691 sshd[6933]: Connection reset by 147.185.132.129 port 59586 [preauth]
Apr 21 10:11:15.529405 systemd[1]: sshd@22-10.0.0.5:22-147.185.132.129:59586.service: Deactivated successfully.
Apr 21 10:11:19.999899 systemd[1]: Started sshd@24-10.0.0.5:22-20.229.252.112:44934.service - OpenSSH per-connection server daemon (20.229.252.112:44934).
Apr 21 10:11:20.888057 sshd[6973]: Accepted publickey for core from 20.229.252.112 port 44934 ssh2: RSA SHA256:9b2y2XEo/Fh+bejj//+dDN49Rsf2EYXwSd4OKZiwlgM
Apr 21 10:11:20.890030 sshd[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:20.893919 systemd-logind[1682]: New session 26 of user core.
Apr 21 10:11:20.900440 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 21 10:11:21.562905 sshd[6973]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:21.566212 systemd-logind[1682]: Session 26 logged out. Waiting for processes to exit.
Apr 21 10:11:21.566907 systemd[1]: sshd@24-10.0.0.5:22-20.229.252.112:44934.service: Deactivated successfully.
Apr 21 10:11:21.568933 systemd[1]: session-26.scope: Deactivated successfully.
Apr 21 10:11:21.570186 systemd-logind[1682]: Removed session 26.