Apr 13 19:26:06.221055 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:26:06.221077 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:26:06.221086 kernel: KASLR enabled
Apr 13 19:26:06.221091 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 13 19:26:06.221098 kernel: printk: bootconsole [pl11] enabled
Apr 13 19:26:06.221104 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:26:06.221111 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 13 19:26:06.221117 kernel: random: crng init done
Apr 13 19:26:06.221124 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:26:06.221129 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 13 19:26:06.221136 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221142 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221149 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 13 19:26:06.221155 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221163 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221169 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221176 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221184 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221190 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221197 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 13 19:26:06.221203 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221210 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 13 19:26:06.221216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 13 19:26:06.221222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 13 19:26:06.221229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 13 19:26:06.221235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 13 19:26:06.221242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 13 19:26:06.221252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 13 19:26:06.221261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 13 19:26:06.221267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 13 19:26:06.221274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 13 19:26:06.221280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 13 19:26:06.221286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 13 19:26:06.221293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 13 19:26:06.221299 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 13 19:26:06.221306 kernel: Zone ranges:
Apr 13 19:26:06.221312 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 13 19:26:06.221318 kernel: DMA32 empty
Apr 13 19:26:06.221325 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:26:06.221331 kernel: Movable zone start for each node
Apr 13 19:26:06.221342 kernel: Early memory node ranges
Apr 13 19:26:06.221349 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 13 19:26:06.221355 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 13 19:26:06.221362 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 13 19:26:06.221369 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 13 19:26:06.221377 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 13 19:26:06.221384 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 13 19:26:06.221391 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:26:06.221398 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 13 19:26:06.221405 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 13 19:26:06.221412 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:26:06.221418 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:26:06.221425 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:26:06.221432 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 13 19:26:06.221439 kernel: psci: SMC Calling Convention v1.4
Apr 13 19:26:06.223496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 13 19:26:06.223507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 13 19:26:06.223521 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:26:06.223528 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:26:06.223535 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:26:06.223542 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:26:06.223549 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:26:06.223556 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:26:06.223563 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:26:06.223570 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:26:06.223577 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:26:06.223584 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:26:06.223591 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 13 19:26:06.223600 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:26:06.223607 kernel: alternatives: applying boot alternatives
Apr 13 19:26:06.223616 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:26:06.223623 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:26:06.223630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:26:06.223637 kernel: Fallback order for Node 0: 0
Apr 13 19:26:06.223644 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 13 19:26:06.223651 kernel: Policy zone: Normal
Apr 13 19:26:06.223658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:26:06.223665 kernel: software IO TLB: area num 2.
Apr 13 19:26:06.223672 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 13 19:26:06.223681 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Apr 13 19:26:06.223688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:26:06.223695 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:26:06.223702 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:26:06.223709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:26:06.223716 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:26:06.223723 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:26:06.223730 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:26:06.223737 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:26:06.223744 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:26:06.223751 kernel: GICv3: 960 SPIs implemented
Apr 13 19:26:06.223760 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:26:06.223766 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:26:06.223773 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 13 19:26:06.223781 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 13 19:26:06.223788 kernel: ITS: No ITS available, not enabling LPIs
Apr 13 19:26:06.223795 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:26:06.223802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:06.223809 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:26:06.223816 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:26:06.223823 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:26:06.223830 kernel: Console: colour dummy device 80x25
Apr 13 19:26:06.223839 kernel: printk: console [tty1] enabled
Apr 13 19:26:06.223846 kernel: ACPI: Core revision 20230628
Apr 13 19:26:06.223854 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:26:06.223861 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:26:06.223868 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:26:06.223876 kernel: landlock: Up and running.
Apr 13 19:26:06.223883 kernel: SELinux: Initializing.
Apr 13 19:26:06.223890 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.223897 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.223906 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:06.223914 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:06.223921 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 13 19:26:06.223928 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 13 19:26:06.223935 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 13 19:26:06.223942 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:26:06.223950 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:26:06.223957 kernel: Remapping and enabling EFI services.
Apr 13 19:26:06.223971 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:26:06.223979 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:26:06.223986 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 13 19:26:06.223994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:06.224003 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:26:06.224010 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:26:06.224018 kernel: SMP: Total of 2 processors activated.
Apr 13 19:26:06.224026 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:26:06.224034 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 13 19:26:06.224043 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:26:06.224051 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:26:06.224058 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:26:06.224066 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:26:06.224074 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:26:06.224081 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:26:06.224089 kernel: alternatives: applying system-wide alternatives
Apr 13 19:26:06.224096 kernel: devtmpfs: initialized
Apr 13 19:26:06.224104 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:26:06.224113 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:26:06.224121 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:26:06.224128 kernel: SMBIOS 3.1.0 present.
Apr 13 19:26:06.224136 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 13 19:26:06.224144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:26:06.224151 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:26:06.224159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:26:06.224167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:26:06.224175 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:26:06.224184 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 13 19:26:06.224205 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:26:06.224213 kernel: cpuidle: using governor menu
Apr 13 19:26:06.224221 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:26:06.224229 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:26:06.224237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:26:06.224244 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:26:06.224252 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:26:06.224260 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:26:06.224269 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:26:06.224277 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:26:06.224293 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:26:06.224308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224316 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:26:06.224323 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:26:06.224340 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:26:06.224348 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:26:06.224355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:26:06.224363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:26:06.224370 kernel: ACPI: Interpreter enabled
Apr 13 19:26:06.224378 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:26:06.224385 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:26:06.224393 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:26:06.224401 kernel: printk: bootconsole [pl11] disabled
Apr 13 19:26:06.224410 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 13 19:26:06.224418 kernel: iommu: Default domain type: Translated
Apr 13 19:26:06.224425 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:26:06.224433 kernel: efivars: Registered efivars operations
Apr 13 19:26:06.224440 kernel: vgaarb: loaded
Apr 13 19:26:06.224455 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:26:06.224463 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:26:06.224470 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:26:06.224477 kernel: pnp: PnP ACPI init
Apr 13 19:26:06.224487 kernel: pnp: PnP ACPI: found 0 devices
Apr 13 19:26:06.224494 kernel: NET: Registered PF_INET protocol family
Apr 13 19:26:06.224502 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:26:06.224510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:26:06.224518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:26:06.224526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:26:06.224533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:26:06.224541 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:26:06.224548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.224557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.224565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:26:06.224572 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:26:06.224579 kernel: kvm [1]: HYP mode not available
Apr 13 19:26:06.224587 kernel: Initialise system trusted keyrings
Apr 13 19:26:06.224595 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:26:06.224602 kernel: Key type asymmetric registered
Apr 13 19:26:06.224610 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:26:06.224617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:26:06.224626 kernel: io scheduler mq-deadline registered
Apr 13 19:26:06.224635 kernel: io scheduler kyber registered
Apr 13 19:26:06.224642 kernel: io scheduler bfq registered
Apr 13 19:26:06.224650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:26:06.224657 kernel: thunder_xcv, ver 1.0
Apr 13 19:26:06.224665 kernel: thunder_bgx, ver 1.0
Apr 13 19:26:06.224672 kernel: nicpf, ver 1.0
Apr 13 19:26:06.224680 kernel: nicvf, ver 1.0
Apr 13 19:26:06.224831 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:26:06.224909 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:26:05 UTC (1776108365)
Apr 13 19:26:06.224919 kernel: efifb: probing for efifb
Apr 13 19:26:06.224927 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 13 19:26:06.224934 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 13 19:26:06.224942 kernel: efifb: scrolling: redraw
Apr 13 19:26:06.224950 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 13 19:26:06.224957 kernel: Console: switching to colour frame buffer device 128x48
Apr 13 19:26:06.224965 kernel: fb0: EFI VGA frame buffer device
Apr 13 19:26:06.224974 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 13 19:26:06.224982 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:26:06.224989 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 13 19:26:06.224997 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:26:06.225004 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:26:06.225012 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:26:06.225019 kernel: Segment Routing with IPv6
Apr 13 19:26:06.225026 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:26:06.225034 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:26:06.225043 kernel: Key type dns_resolver registered
Apr 13 19:26:06.225051 kernel: registered taskstats version 1
Apr 13 19:26:06.225058 kernel: Loading compiled-in X.509 certificates
Apr 13 19:26:06.225066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:26:06.225073 kernel: Key type .fscrypt registered
Apr 13 19:26:06.225081 kernel: Key type fscrypt-provisioning registered
Apr 13 19:26:06.225088 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:26:06.225095 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:26:06.225103 kernel: ima: No architecture policies found
Apr 13 19:26:06.225112 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:26:06.225120 kernel: clk: Disabling unused clocks
Apr 13 19:26:06.225127 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:26:06.225135 kernel: Run /init as init process
Apr 13 19:26:06.225142 kernel: with arguments:
Apr 13 19:26:06.225149 kernel: /init
Apr 13 19:26:06.225156 kernel: with environment:
Apr 13 19:26:06.225164 kernel: HOME=/
Apr 13 19:26:06.225171 kernel: TERM=linux
Apr 13 19:26:06.225181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:26:06.225193 systemd[1]: Detected virtualization microsoft.
Apr 13 19:26:06.225201 systemd[1]: Detected architecture arm64.
Apr 13 19:26:06.225208 systemd[1]: Running in initrd.
Apr 13 19:26:06.225216 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:26:06.225224 systemd[1]: Hostname set to .
Apr 13 19:26:06.225232 systemd[1]: Initializing machine ID from random generator.
Apr 13 19:26:06.225242 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:26:06.225251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:06.225259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:26:06.225267 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:26:06.225276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:26:06.225284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:26:06.225292 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:26:06.225301 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:26:06.225311 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:26:06.225320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:06.225328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:06.225336 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:26:06.225344 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:26:06.225352 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:26:06.225360 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:26:06.225368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:26:06.225378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:26:06.225386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:26:06.225394 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:26:06.225402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:26:06.225410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:26:06.225418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:26:06.225426 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:26:06.225435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:26:06.227463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:26:06.227477 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:26:06.227485 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:26:06.227494 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:26:06.227538 systemd-journald[217]: Collecting audit messages is disabled.
Apr 13 19:26:06.227563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:26:06.227572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:06.227581 systemd-journald[217]: Journal started
Apr 13 19:26:06.227601 systemd-journald[217]: Runtime Journal (/run/log/journal/498494aa7fba497fb7f58c8aa2a24883) is 8.0M, max 78.5M, 70.5M free.
Apr 13 19:26:06.231241 systemd-modules-load[218]: Inserted module 'overlay'
Apr 13 19:26:06.256623 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:26:06.256679 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:26:06.264100 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 13 19:26:06.271587 kernel: Bridge firewalling registered
Apr 13 19:26:06.264815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:26:06.276710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:26:06.285959 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:26:06.294131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:26:06.301818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:06.318744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:26:06.328504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:26:06.349979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:26:06.362718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:26:06.378385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:06.389425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:26:06.398053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:26:06.403710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:26:06.424707 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:26:06.436644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:26:06.450632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:26:06.462349 dracut-cmdline[250]: dracut-dracut-053
Apr 13 19:26:06.462349 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:26:06.502616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:26:06.514586 systemd-resolved[257]: Positive Trust Anchors:
Apr 13 19:26:06.514596 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:26:06.514627 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:26:06.516805 systemd-resolved[257]: Defaulting to hostname 'linux'.
Apr 13 19:26:06.517726 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:26:06.525479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:26:06.620484 kernel: SCSI subsystem initialized
Apr 13 19:26:06.627452 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:26:06.637522 kernel: iscsi: registered transport (tcp)
Apr 13 19:26:06.653769 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:26:06.653835 kernel: QLogic iSCSI HBA Driver
Apr 13 19:26:06.692265 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:26:06.707895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:26:06.736340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:26:06.736384 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:26:06.741289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:26:06.789092 kernel: raid6: neonx8 gen() 15791 MB/s
Apr 13 19:26:06.806451 kernel: raid6: neonx4 gen() 15112 MB/s
Apr 13 19:26:06.826448 kernel: raid6: neonx2 gen() 13261 MB/s
Apr 13 19:26:06.846467 kernel: raid6: neonx1 gen() 10480 MB/s
Apr 13 19:26:06.865454 kernel: raid6: int64x8 gen() 6978 MB/s
Apr 13 19:26:06.884464 kernel: raid6: int64x4 gen() 7363 MB/s
Apr 13 19:26:06.904453 kernel: raid6: int64x2 gen() 6146 MB/s
Apr 13 19:26:06.926568 kernel: raid6: int64x1 gen() 5072 MB/s
Apr 13 19:26:06.926627 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
Apr 13 19:26:06.949672 kernel: raid6: .... xor() 12015 MB/s, rmw enabled
Apr 13 19:26:06.949686 kernel: raid6: using neon recovery algorithm
Apr 13 19:26:06.960586 kernel: xor: measuring software checksum speed
Apr 13 19:26:06.960600 kernel: 8regs : 19773 MB/sec
Apr 13 19:26:06.963528 kernel: 32regs : 19688 MB/sec
Apr 13 19:26:06.966402 kernel: arm64_neon : 27132 MB/sec
Apr 13 19:26:06.969786 kernel: xor: using function: arm64_neon (27132 MB/sec)
Apr 13 19:26:07.019461 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:26:07.030849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:26:07.046613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:26:07.066193 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Apr 13 19:26:07.070504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:26:07.091636 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:26:07.107672 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Apr 13 19:26:07.134733 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:26:07.150761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:26:07.192349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:26:07.210637 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:26:07.240152 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:26:07.252025 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:26:07.263331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:26:07.274542 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:26:07.295460 kernel: hv_vmbus: Vmbus version:5.3
Apr 13 19:26:07.297586 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:26:07.310969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:26:07.315792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:07.358130 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:26:07.381607 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 13 19:26:07.381628 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 13 19:26:07.381639 kernel: hv_vmbus: registering driver hv_netvsc
Apr 13 19:26:07.381648 kernel: PTP clock support registered
Apr 13 19:26:07.368778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:26:07.398627 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 13 19:26:07.398646 kernel: hv_vmbus: registering driver hid_hyperv
Apr 13 19:26:07.368998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:07.390908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:07.803045 kernel: hv_utils: Registering HyperV Utility Driver
Apr 13 19:26:07.803068 kernel: hv_vmbus: registering driver hv_utils
Apr 13 19:26:07.803078 kernel: hv_utils: Heartbeat IC version 3.0
Apr 13 19:26:07.803087 kernel: hv_utils: Shutdown IC version 3.2
Apr 13 19:26:07.803097 kernel: hv_vmbus: registering driver hv_storvsc
Apr 13 19:26:07.803106 kernel: hv_utils: TimeSync IC version 4.0
Apr 13 19:26:07.803115 kernel: scsi host0: storvsc_host_t
Apr 13 19:26:07.803149 kernel: scsi host1: storvsc_host_t
Apr 13 19:26:07.414793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:07.863971 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 13 19:26:07.864141 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 13 19:26:07.864236 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 13 19:26:07.864248 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 13 19:26:07.864266 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 13 19:26:07.864350 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 13 19:26:07.864436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:26:07.864446 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 13 19:26:07.799643 systemd-resolved[257]: Clock change detected. Flushing caches.
Apr 13 19:26:07.824313 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:26:07.883709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:26:07.888488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:07.909920 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: VF slot 1 added
Apr 13 19:26:07.911095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:07.958524 kernel: hv_vmbus: registering driver hv_pci
Apr 13 19:26:07.958549 kernel: hv_pci af44e602-7cd7-4cbe-91f1-cb1494aec6bd: PCI VMBus probing: Using version 0x10004
Apr 13 19:26:07.958712 kernel: hv_pci af44e602-7cd7-4cbe-91f1-cb1494aec6bd: PCI host bridge to bus 7cd7:00
Apr 13 19:26:07.958796 kernel: pci_bus 7cd7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 13 19:26:07.958896 kernel: pci_bus 7cd7:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 13 19:26:07.958974 kernel: pci 7cd7:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 13 19:26:07.958997 kernel: pci 7cd7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 13 19:26:07.959012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 13 19:26:07.964700 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 13 19:26:07.964868 kernel: pci 7cd7:00:02.0: enabling Extended Tags
Apr 13 19:26:07.971303 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 13 19:26:07.979610 kernel: pci 7cd7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7cd7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 13 19:26:07.992059 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 13 19:26:07.992269 kernel: pci_bus 7cd7:00: busn_res: [bus 00-ff] end is updated to 00
Apr 13 19:26:08.004108 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 13 19:26:08.004297 kernel: pci 7cd7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 13 19:26:08.015610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:08.029523 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 13 19:26:08.031627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 13 19:26:08.031762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:26:08.042046 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 13 19:26:08.042763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:26:08.074439 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:08.103526 kernel: mlx5_core 7cd7:00:02.0: enabling device (0000 -> 0002)
Apr 13 19:26:08.109599 kernel: mlx5_core 7cd7:00:02.0: firmware version: 16.30.5026
Apr 13 19:26:08.304282 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: VF registering: eth1
Apr 13 19:26:08.304532 kernel: mlx5_core 7cd7:00:02.0 eth1: joined to eth0
Apr 13 19:26:08.312689 kernel: mlx5_core 7cd7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 13 19:26:08.321603 kernel: mlx5_core 7cd7:00:02.0 enP31959s1: renamed from eth1
Apr 13 19:26:08.803736 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 13 19:26:08.836049 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (498)
Apr 13 19:26:08.853554 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (493)
Apr 13 19:26:08.857736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 13 19:26:08.876282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 13 19:26:08.889704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 13 19:26:08.913718 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 13 19:26:08.926808 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:26:08.959610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:26:08.968600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:26:08.978603 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:26:09.979686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:26:09.980713 disk-uuid[611]: The operation has completed successfully.
Apr 13 19:26:10.051034 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:26:10.051128 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:26:10.076740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:26:10.087665 sh[724]: Success
Apr 13 19:26:10.121611 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:26:10.431802 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:26:10.446151 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:26:10.454666 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:26:10.487623 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:26:10.487693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:26:10.493798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:26:10.498130 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:26:10.501793 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:26:10.980848 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:26:10.985425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:26:11.001860 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:26:11.012034 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:26:11.044918 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:26:11.044976 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:26:11.048746 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:26:11.121362 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:26:11.136726 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:26:11.161327 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:26:11.165184 systemd-networkd[898]: lo: Link UP
Apr 13 19:26:11.165196 systemd-networkd[898]: lo: Gained carrier
Apr 13 19:26:11.166753 systemd-networkd[898]: Enumeration completed
Apr 13 19:26:11.190070 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:26:11.167476 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:26:11.167479 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:26:11.168648 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:26:11.187017 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:26:11.187331 systemd[1]: Reached target network.target - Network.
Apr 13 19:26:11.224325 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:26:11.235784 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:26:11.249978 kernel: mlx5_core 7cd7:00:02.0 enP31959s1: Link up
Apr 13 19:26:11.286266 systemd-networkd[898]: enP31959s1: Link UP
Apr 13 19:26:11.290732 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: Data path switched to VF: enP31959s1
Apr 13 19:26:11.286345 systemd-networkd[898]: eth0: Link UP
Apr 13 19:26:11.286464 systemd-networkd[898]: eth0: Gained carrier
Apr 13 19:26:11.286473 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:26:11.294160 systemd-networkd[898]: enP31959s1: Gained carrier
Apr 13 19:26:11.312622 systemd-networkd[898]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 13 19:26:12.415238 ignition[909]: Ignition 2.19.0
Apr 13 19:26:12.415247 ignition[909]: Stage: fetch-offline
Apr 13 19:26:12.419326 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:26:12.415283 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:12.415291 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:12.415377 ignition[909]: parsed url from cmdline: ""
Apr 13 19:26:12.415380 ignition[909]: no config URL provided
Apr 13 19:26:12.415384 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:26:12.442733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:26:12.415391 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:26:12.415396 ignition[909]: failed to fetch config: resource requires networking
Apr 13 19:26:12.415716 ignition[909]: Ignition finished successfully
Apr 13 19:26:12.456571 ignition[917]: Ignition 2.19.0
Apr 13 19:26:12.456578 ignition[917]: Stage: fetch
Apr 13 19:26:12.456740 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:12.456749 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:12.456830 ignition[917]: parsed url from cmdline: ""
Apr 13 19:26:12.456833 ignition[917]: no config URL provided
Apr 13 19:26:12.456837 ignition[917]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:26:12.456845 ignition[917]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:26:12.456864 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 13 19:26:12.595541 ignition[917]: GET result: OK
Apr 13 19:26:12.595621 ignition[917]: config has been read from IMDS userdata
Apr 13 19:26:12.595664 ignition[917]: parsing config with SHA512: 05e5a6082764eb413f1d454ef471910ad0f99fc1182fba101c62c60d27e0c19f479ce17ce7b8eb43724aa5d2b7aa8adfa5d4e8d607415d62dba870876c6aa57b
Apr 13 19:26:12.599975 unknown[917]: fetched base config from "system"
Apr 13 19:26:12.600468 ignition[917]: fetch: fetch complete
Apr 13 19:26:12.599985 unknown[917]: fetched base config from "system"
Apr 13 19:26:12.600474 ignition[917]: fetch: fetch passed
Apr 13 19:26:12.599990 unknown[917]: fetched user config from "azure"
Apr 13 19:26:12.600532 ignition[917]: Ignition finished successfully
Apr 13 19:26:12.603801 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:26:12.625735 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:26:12.643523 ignition[923]: Ignition 2.19.0
Apr 13 19:26:12.643534 ignition[923]: Stage: kargs
Apr 13 19:26:12.643724 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:12.651613 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:26:12.643734 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:12.644764 ignition[923]: kargs: kargs passed
Apr 13 19:26:12.644814 ignition[923]: Ignition finished successfully
Apr 13 19:26:12.676072 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:26:12.701467 ignition[929]: Ignition 2.19.0
Apr 13 19:26:12.701484 ignition[929]: Stage: disks
Apr 13 19:26:12.704898 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:26:12.701764 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:12.711008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:26:12.701785 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:12.717941 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:26:12.703744 ignition[929]: disks: disks passed
Apr 13 19:26:12.727779 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:26:12.703809 ignition[929]: Ignition finished successfully
Apr 13 19:26:12.736387 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:26:12.746155 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:26:12.766779 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:26:12.803661 systemd-networkd[898]: eth0: Gained IPv6LL
Apr 13 19:26:12.854852 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 13 19:26:12.864497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:26:12.879842 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:26:12.935634 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:26:12.936055 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:26:12.940119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:26:12.984662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:26:13.005599 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Apr 13 19:26:13.016499 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:26:13.016533 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:26:13.018695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:26:13.028444 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:26:13.035602 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:26:13.037776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 19:26:13.042931 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:26:13.042961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:26:13.055070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:26:13.068365 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:26:13.087877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:26:13.914480 coreos-metadata[964]: Apr 13 19:26:13.914 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 13 19:26:13.921135 coreos-metadata[964]: Apr 13 19:26:13.921 INFO Fetch successful
Apr 13 19:26:13.921135 coreos-metadata[964]: Apr 13 19:26:13.921 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 13 19:26:13.933656 coreos-metadata[964]: Apr 13 19:26:13.930 INFO Fetch successful
Apr 13 19:26:13.950642 coreos-metadata[964]: Apr 13 19:26:13.950 INFO wrote hostname ci-4081.3.7-a-e37b9c2d0c to /sysroot/etc/hostname
Apr 13 19:26:13.957857 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:26:14.461011 initrd-setup-root[979]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:26:14.515376 initrd-setup-root[986]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:26:14.543513 initrd-setup-root[993]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:26:14.549071 initrd-setup-root[1000]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:26:15.713183 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:26:15.726748 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:26:15.733087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:26:15.752667 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:26:15.754787 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:26:15.781566 ignition[1068]: INFO : Ignition 2.19.0
Apr 13 19:26:15.787299 ignition[1068]: INFO : Stage: mount
Apr 13 19:26:15.787299 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:15.787299 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:15.787628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:26:15.814942 ignition[1068]: INFO : mount: mount passed
Apr 13 19:26:15.814942 ignition[1068]: INFO : Ignition finished successfully
Apr 13 19:26:15.798609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:26:15.822804 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:26:15.838549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:26:15.857605 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1080)
Apr 13 19:26:15.868722 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:26:15.868753 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:26:15.872331 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:26:15.880666 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:26:15.880948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:26:15.907292 ignition[1097]: INFO : Ignition 2.19.0
Apr 13 19:26:15.907292 ignition[1097]: INFO : Stage: files
Apr 13 19:26:15.914294 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:15.914294 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:15.914294 ignition[1097]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:26:15.933580 ignition[1097]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:26:15.933580 ignition[1097]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:26:16.236077 ignition[1097]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:26:16.241760 ignition[1097]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:26:16.241760 ignition[1097]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:26:16.236829 unknown[1097]: wrote ssh authorized keys file for user: core
Apr 13 19:26:16.258319 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:26:16.266606 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:26:16.334234 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 19:26:16.536490 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:26:16.536490 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 13 19:26:16.847983 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 19:26:17.190088 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:26:17.190088 ignition[1097]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 19:26:17.208360 ignition[1097]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:26:17.216372 ignition[1097]: INFO : files: files passed
Apr 13 19:26:17.216372 ignition[1097]: INFO : Ignition finished successfully
Apr 13 19:26:17.230044 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:26:17.262875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:26:17.270761 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:26:17.302479 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:26:17.302846 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:26:17.318645 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:26:17.318645 initrd-setup-root-after-ignition[1125]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:26:17.337377 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:26:17.325222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:26:17.331103 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:26:17.354775 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:26:17.382416 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:26:17.384762 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:26:17.396683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:26:17.401230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:26:17.409444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:26:17.420821 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:26:17.440640 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:26:17.453864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:26:17.468685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:26:17.478817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:26:17.489120 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:26:17.497128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:26:17.497318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:26:17.511947 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:26:17.520329 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:26:17.528205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:26:17.537181 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:26:17.546909 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:26:17.556284 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:26:17.565487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:26:17.571495 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:26:17.581372 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:26:17.591558 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:26:17.599006 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:26:17.599181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:26:17.612142 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:17.621082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:17.630360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:26:17.630462 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:17.640103 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:26:17.640261 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:26:17.653543 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:26:17.653704 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:26:17.662241 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:26:17.662381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:26:17.670163 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:26:17.670295 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:26:17.694883 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:26:17.714696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:26:17.729744 ignition[1150]: INFO : Ignition 2.19.0
Apr 13 19:26:17.729744 ignition[1150]: INFO : Stage: umount
Apr 13 19:26:17.729744 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:17.729744 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:17.729744 ignition[1150]: INFO : umount: umount passed
Apr 13 19:26:17.729744 ignition[1150]: INFO : Ignition finished successfully
Apr 13 19:26:17.724395 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:26:17.724542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:26:17.730902 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:26:17.731046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:26:17.746142 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:26:17.746239 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:26:17.755476 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:26:17.755540 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:26:17.760336 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:26:17.760378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:26:17.768583 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:26:17.768674 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:26:17.776559 systemd[1]: Stopped target network.target - Network.
Apr 13 19:26:17.788461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:26:17.788529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:26:17.797938 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:26:17.802076 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:26:17.802571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:26:17.813873 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:26:17.817702 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:26:17.831693 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:26:17.831738 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:26:17.839751 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:26:17.839793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:26:17.848348 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:26:17.848395 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:26:17.857337 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:26:17.857373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:26:17.866446 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:26:17.871019 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:26:17.880703 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 19:26:17.881219 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:26:17.881304 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 19:26:17.885486 systemd-networkd[898]: eth0: DHCPv6 lease lost Apr 13 19:26:18.077507 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: Data path switched from VF: enP31959s1 Apr 13 19:26:17.892941 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:26:17.895104 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:26:17.903804 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:26:17.905669 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Apr 13 19:26:17.914707 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:26:17.914766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:26:17.942763 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:26:17.950689 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:26:17.950754 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:26:17.960072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:26:17.960115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:26:17.969386 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:26:17.969421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:26:17.978495 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 19:26:17.978536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:26:17.989213 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:26:18.019351 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:26:18.019507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:26:18.030568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:26:18.030621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:26:18.040245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:26:18.040284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:26:18.049720 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:26:18.049769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:26:18.073744 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Apr 13 19:26:18.073807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:26:18.087296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:26:18.087362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:26:18.121198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:26:18.132692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:26:18.132768 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:26:18.144722 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 19:26:18.144777 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:26:18.157305 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:26:18.157351 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:26:18.171991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:26:18.172042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:26:18.182431 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:26:18.182537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:26:18.191838 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:26:18.191932 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:26:18.201707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:26:18.201785 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:26:18.213663 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:26:18.223994 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Apr 13 19:26:18.224069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:26:18.391060 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:26:18.244778 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:26:18.265663 systemd[1]: Switching root.
Apr 13 19:26:18.399268 systemd-journald[217]: Journal stopped
Apr 13 19:26:06.221055 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:26:06.221077 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:26:06.221086 kernel: KASLR enabled
Apr 13 19:26:06.221091 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 13 19:26:06.221098 kernel: printk: bootconsole [pl11] enabled
Apr 13 19:26:06.221104 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:26:06.221111 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 13 19:26:06.221117 kernel: random: crng init done
Apr 13 19:26:06.221124 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:26:06.221129 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 13 19:26:06.221136 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221142 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221149 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 13 19:26:06.221155 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221163 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221169 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221176 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221184 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221190 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221197 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 13 19:26:06.221203 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:26:06.221210 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 13 19:26:06.221216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 13 19:26:06.221222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 13 19:26:06.221229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 13 19:26:06.221235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 13 19:26:06.221242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 13 19:26:06.221252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 13 19:26:06.221261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 13 19:26:06.221267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 13 19:26:06.221274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 13 19:26:06.221280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 13 19:26:06.221286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 13 19:26:06.221293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 13 19:26:06.221299 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 13 19:26:06.221306 kernel: Zone ranges:
Apr 13 19:26:06.221312 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 13 19:26:06.221318 kernel: DMA32 empty
Apr 13 19:26:06.221325 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:26:06.221331 kernel: Movable zone start for each node
Apr 13 19:26:06.221342 kernel: Early memory node ranges
Apr 13 19:26:06.221349 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 13 19:26:06.221355 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 13 19:26:06.221362 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 13 19:26:06.221369 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 13 19:26:06.221377 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 13 19:26:06.221384 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 13 19:26:06.221391 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:26:06.221398 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 13 19:26:06.221405 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 13 19:26:06.221412 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:26:06.221418 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:26:06.221425 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:26:06.221432 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 13 19:26:06.221439 kernel: psci: SMC Calling Convention v1.4
Apr 13 19:26:06.223496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 13 19:26:06.223507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 13 19:26:06.223521 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:26:06.223528 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:26:06.223535 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:26:06.223542 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:26:06.223549 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:26:06.223556 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:26:06.223563 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:26:06.223570 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:26:06.223577 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:26:06.223584 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:26:06.223591 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 13 19:26:06.223600 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:26:06.223607 kernel: alternatives: applying boot alternatives
Apr 13 19:26:06.223616 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:26:06.223623 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:26:06.223630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:26:06.223637 kernel: Fallback order for Node 0: 0
Apr 13 19:26:06.223644 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 13 19:26:06.223651 kernel: Policy zone: Normal
Apr 13 19:26:06.223658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:26:06.223665 kernel: software IO TLB: area num 2.
Apr 13 19:26:06.223672 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 13 19:26:06.223681 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Apr 13 19:26:06.223688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:26:06.223695 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:26:06.223702 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:26:06.223709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:26:06.223716 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:26:06.223723 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:26:06.223730 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:26:06.223737 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:26:06.223744 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:26:06.223751 kernel: GICv3: 960 SPIs implemented
Apr 13 19:26:06.223760 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:26:06.223766 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:26:06.223773 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 13 19:26:06.223781 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 13 19:26:06.223788 kernel: ITS: No ITS available, not enabling LPIs
Apr 13 19:26:06.223795 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:26:06.223802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:06.223809 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:26:06.223816 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:26:06.223823 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:26:06.223830 kernel: Console: colour dummy device 80x25
Apr 13 19:26:06.223839 kernel: printk: console [tty1] enabled
Apr 13 19:26:06.223846 kernel: ACPI: Core revision 20230628
Apr 13 19:26:06.223854 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:26:06.223861 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:26:06.223868 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:26:06.223876 kernel: landlock: Up and running.
Apr 13 19:26:06.223883 kernel: SELinux: Initializing.
Apr 13 19:26:06.223890 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.223897 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.223906 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:06.223914 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:26:06.223921 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 13 19:26:06.223928 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 13 19:26:06.223935 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 13 19:26:06.223942 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:26:06.223950 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:26:06.223957 kernel: Remapping and enabling EFI services.
Apr 13 19:26:06.223971 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:26:06.223979 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:26:06.223986 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 13 19:26:06.223994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:26:06.224003 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:26:06.224010 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:26:06.224018 kernel: SMP: Total of 2 processors activated.
Apr 13 19:26:06.224026 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:26:06.224034 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 13 19:26:06.224043 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:26:06.224051 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:26:06.224058 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:26:06.224066 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:26:06.224074 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:26:06.224081 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:26:06.224089 kernel: alternatives: applying system-wide alternatives
Apr 13 19:26:06.224096 kernel: devtmpfs: initialized
Apr 13 19:26:06.224104 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:26:06.224113 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:26:06.224121 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:26:06.224128 kernel: SMBIOS 3.1.0 present.
Apr 13 19:26:06.224136 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 13 19:26:06.224144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:26:06.224151 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:26:06.224159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:26:06.224167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:26:06.224175 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:26:06.224184 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 13 19:26:06.224205 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:26:06.224213 kernel: cpuidle: using governor menu
Apr 13 19:26:06.224221 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:26:06.224229 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:26:06.224237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:26:06.224244 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:26:06.224252 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:26:06.224260 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:26:06.224269 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:26:06.224277 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:26:06.224293 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:26:06.224308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224316 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:26:06.224323 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:26:06.224331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:26:06.224340 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:26:06.224348 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:26:06.224355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:26:06.224363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:26:06.224370 kernel: ACPI: Interpreter enabled
Apr 13 19:26:06.224378 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:26:06.224385 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:26:06.224393 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:26:06.224401 kernel: printk: bootconsole [pl11] disabled
Apr 13 19:26:06.224410 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 13 19:26:06.224418 kernel: iommu: Default domain type: Translated
Apr 13 19:26:06.224425 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:26:06.224433 kernel: efivars: Registered efivars operations
Apr 13 19:26:06.224440 kernel: vgaarb: loaded
Apr 13 19:26:06.224455 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:26:06.224463 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:26:06.224470 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:26:06.224477 kernel: pnp: PnP ACPI init
Apr 13 19:26:06.224487 kernel: pnp: PnP ACPI: found 0 devices
Apr 13 19:26:06.224494 kernel: NET: Registered PF_INET protocol family
Apr 13 19:26:06.224502 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:26:06.224510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:26:06.224518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:26:06.224526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:26:06.224533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:26:06.224541 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:26:06.224548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.224557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:26:06.224565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:26:06.224572 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:26:06.224579 kernel: kvm [1]: HYP mode not available
Apr 13 19:26:06.224587 kernel: Initialise system trusted keyrings
Apr 13 19:26:06.224595 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:26:06.224602 kernel: Key type asymmetric registered
Apr 13 19:26:06.224610 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:26:06.224617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:26:06.224626 kernel: io scheduler mq-deadline registered
Apr 13 19:26:06.224635 kernel: io scheduler kyber registered
Apr 13 19:26:06.224642 kernel: io scheduler bfq registered
Apr 13 19:26:06.224650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:26:06.224657 kernel: thunder_xcv, ver 1.0
Apr 13 19:26:06.224665 kernel: thunder_bgx, ver 1.0
Apr 13 19:26:06.224672 kernel: nicpf, ver 1.0
Apr 13 19:26:06.224680 kernel: nicvf, ver 1.0
Apr 13 19:26:06.224831 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:26:06.224909 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:26:05 UTC (1776108365)
Apr 13 19:26:06.224919 kernel: efifb: probing for efifb
Apr 13 19:26:06.224927 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 13 19:26:06.224934 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 13 19:26:06.224942 kernel: efifb: scrolling: redraw
Apr 13 19:26:06.224950 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 13 19:26:06.224957 kernel: Console: switching to colour frame buffer device 128x48
Apr 13 19:26:06.224965 kernel: fb0: EFI VGA frame buffer device
Apr 13 19:26:06.224974 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 13 19:26:06.224982 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:26:06.224989 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 13 19:26:06.224997 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:26:06.225004 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:26:06.225012 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:26:06.225019 kernel: Segment Routing with IPv6
Apr 13 19:26:06.225026 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:26:06.225034 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:26:06.225043 kernel: Key type dns_resolver registered
Apr 13 19:26:06.225051 kernel: registered taskstats version 1
Apr 13 19:26:06.225058 kernel: Loading compiled-in X.509 certificates
Apr 13 19:26:06.225066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:26:06.225073 kernel: Key type .fscrypt registered
Apr 13 19:26:06.225081 kernel: Key type fscrypt-provisioning registered
Apr 13 19:26:06.225088 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:26:06.225095 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:26:06.225103 kernel: ima: No architecture policies found
Apr 13 19:26:06.225112 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:26:06.225120 kernel: clk: Disabling unused clocks
Apr 13 19:26:06.225127 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:26:06.225135 kernel: Run /init as init process
Apr 13 19:26:06.225142 kernel: with arguments:
Apr 13 19:26:06.225149 kernel: /init
Apr 13 19:26:06.225156 kernel: with environment:
Apr 13 19:26:06.225164 kernel: HOME=/
Apr 13 19:26:06.225171 kernel: TERM=linux
Apr 13 19:26:06.225181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:26:06.225193 systemd[1]: Detected virtualization microsoft.
Apr 13 19:26:06.225201 systemd[1]: Detected architecture arm64.
Apr 13 19:26:06.225208 systemd[1]: Running in initrd.
Apr 13 19:26:06.225216 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:26:06.225224 systemd[1]: Hostname set to .
Apr 13 19:26:06.225232 systemd[1]: Initializing machine ID from random generator.
Apr 13 19:26:06.225242 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:26:06.225251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:06.225259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:26:06.225267 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:26:06.225276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:26:06.225284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:26:06.225292 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:26:06.225301 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:26:06.225311 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:26:06.225320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:06.225328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:06.225336 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:26:06.225344 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:26:06.225352 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:26:06.225360 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:26:06.225368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:26:06.225378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:26:06.225386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:26:06.225394 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:26:06.225402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:26:06.225410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:26:06.225418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:26:06.225426 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:26:06.225435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:26:06.227463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:26:06.227477 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:26:06.227485 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:26:06.227494 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:26:06.227538 systemd-journald[217]: Collecting audit messages is disabled.
Apr 13 19:26:06.227563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:26:06.227572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:06.227581 systemd-journald[217]: Journal started
Apr 13 19:26:06.227601 systemd-journald[217]: Runtime Journal (/run/log/journal/498494aa7fba497fb7f58c8aa2a24883) is 8.0M, max 78.5M, 70.5M free.
Apr 13 19:26:06.231241 systemd-modules-load[218]: Inserted module 'overlay'
Apr 13 19:26:06.256623 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:26:06.256679 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:26:06.264100 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 13 19:26:06.271587 kernel: Bridge firewalling registered
Apr 13 19:26:06.264815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:26:06.276710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:26:06.285959 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:26:06.294131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:26:06.301818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:06.318744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:26:06.328504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:26:06.349979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:26:06.362718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:26:06.378385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:06.389425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:26:06.398053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:26:06.403710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:26:06.424707 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:26:06.436644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:26:06.450632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:26:06.462349 dracut-cmdline[250]: dracut-dracut-053
Apr 13 19:26:06.462349 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:26:06.502616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:26:06.514586 systemd-resolved[257]: Positive Trust Anchors:
Apr 13 19:26:06.514596 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:26:06.514627 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:26:06.516805 systemd-resolved[257]: Defaulting to hostname 'linux'.
Apr 13 19:26:06.517726 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:26:06.525479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:26:06.620484 kernel: SCSI subsystem initialized
Apr 13 19:26:06.627452 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:26:06.637522 kernel: iscsi: registered transport (tcp)
Apr 13 19:26:06.653769 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:26:06.653835 kernel: QLogic iSCSI HBA Driver
Apr 13 19:26:06.692265 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:26:06.707895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:26:06.736340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:26:06.736384 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:26:06.741289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:26:06.789092 kernel: raid6: neonx8 gen() 15791 MB/s
Apr 13 19:26:06.806451 kernel: raid6: neonx4 gen() 15112 MB/s
Apr 13 19:26:06.826448 kernel: raid6: neonx2 gen() 13261 MB/s
Apr 13 19:26:06.846467 kernel: raid6: neonx1 gen() 10480 MB/s
Apr 13 19:26:06.865454 kernel: raid6: int64x8 gen() 6978 MB/s
Apr 13 19:26:06.884464 kernel: raid6: int64x4 gen() 7363 MB/s
Apr 13 19:26:06.904453 kernel: raid6: int64x2 gen() 6146 MB/s
Apr 13 19:26:06.926568 kernel: raid6: int64x1 gen() 5072 MB/s
Apr 13 19:26:06.926627 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
Apr 13 19:26:06.949672 kernel: raid6: .... xor() 12015 MB/s, rmw enabled
Apr 13 19:26:06.949686 kernel: raid6: using neon recovery algorithm
Apr 13 19:26:06.960586 kernel: xor: measuring software checksum speed
Apr 13 19:26:06.960600 kernel: 8regs : 19773 MB/sec
Apr 13 19:26:06.963528 kernel: 32regs : 19688 MB/sec
Apr 13 19:26:06.966402 kernel: arm64_neon : 27132 MB/sec
Apr 13 19:26:06.969786 kernel: xor: using function: arm64_neon (27132 MB/sec)
Apr 13 19:26:07.019461 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:26:07.030849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:26:07.046613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:26:07.066193 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Apr 13 19:26:07.070504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:26:07.091636 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:26:07.107672 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Apr 13 19:26:07.134733 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:26:07.150761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:26:07.192349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:26:07.210637 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:26:07.240152 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:26:07.252025 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:26:07.263331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:26:07.274542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:26:07.295460 kernel: hv_vmbus: Vmbus version:5.3 Apr 13 19:26:07.297586 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:26:07.310969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:26:07.315792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:26:07.358130 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:26:07.381607 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 13 19:26:07.381628 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 13 19:26:07.381639 kernel: hv_vmbus: registering driver hv_netvsc Apr 13 19:26:07.381648 kernel: PTP clock support registered Apr 13 19:26:07.368778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:26:07.398627 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 13 19:26:07.398646 kernel: hv_vmbus: registering driver hid_hyperv Apr 13 19:26:07.368998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:26:07.390908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 19:26:07.803045 kernel: hv_utils: Registering HyperV Utility Driver Apr 13 19:26:07.803068 kernel: hv_vmbus: registering driver hv_utils Apr 13 19:26:07.803078 kernel: hv_utils: Heartbeat IC version 3.0 Apr 13 19:26:07.803087 kernel: hv_utils: Shutdown IC version 3.2 Apr 13 19:26:07.803097 kernel: hv_vmbus: registering driver hv_storvsc Apr 13 19:26:07.803106 kernel: hv_utils: TimeSync IC version 4.0 Apr 13 19:26:07.803115 kernel: scsi host0: storvsc_host_t Apr 13 19:26:07.803149 kernel: scsi host1: storvsc_host_t Apr 13 19:26:07.414793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:26:07.863971 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 13 19:26:07.864141 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 13 19:26:07.864236 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 13 19:26:07.864248 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 13 19:26:07.864266 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 13 19:26:07.864350 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 13 19:26:07.864436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 19:26:07.864446 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 13 19:26:07.799643 systemd-resolved[257]: Clock change detected. Flushing caches. Apr 13 19:26:07.824313 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:26:07.883709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:26:07.888488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:26:07.909920 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: VF slot 1 added Apr 13 19:26:07.911095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:26:07.958524 kernel: hv_vmbus: registering driver hv_pci Apr 13 19:26:07.958549 kernel: hv_pci af44e602-7cd7-4cbe-91f1-cb1494aec6bd: PCI VMBus probing: Using version 0x10004 Apr 13 19:26:07.958712 kernel: hv_pci af44e602-7cd7-4cbe-91f1-cb1494aec6bd: PCI host bridge to bus 7cd7:00 Apr 13 19:26:07.958796 kernel: pci_bus 7cd7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 13 19:26:07.958896 kernel: pci_bus 7cd7:00: No busn resource found for root bus, will use [bus 00-ff] Apr 13 19:26:07.958974 kernel: pci 7cd7:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 13 19:26:07.958997 kernel: pci 7cd7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 13 19:26:07.959012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:26:07.964700 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 13 19:26:07.964868 kernel: pci 7cd7:00:02.0: enabling Extended Tags Apr 13 19:26:07.971303 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 13 19:26:07.979610 kernel: pci 7cd7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7cd7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 13 19:26:07.992059 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 19:26:07.992269 kernel: pci_bus 7cd7:00: busn_res: [bus 00-ff] end is updated to 00 Apr 13 19:26:08.004108 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 13 19:26:08.004297 kernel: pci 7cd7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 13 19:26:08.015610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:26:08.029523 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 13 19:26:08.031627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:26:08.031762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:26:08.042046 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 19:26:08.042763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:26:08.074439 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:26:08.103526 kernel: mlx5_core 7cd7:00:02.0: enabling device (0000 -> 0002) Apr 13 19:26:08.109599 kernel: mlx5_core 7cd7:00:02.0: firmware version: 16.30.5026 Apr 13 19:26:08.304282 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: VF registering: eth1 Apr 13 19:26:08.304532 kernel: mlx5_core 7cd7:00:02.0 eth1: joined to eth0 Apr 13 19:26:08.312689 kernel: mlx5_core 7cd7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 13 19:26:08.321603 kernel: mlx5_core 7cd7:00:02.0 enP31959s1: renamed from eth1 Apr 13 19:26:08.803736 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 13 19:26:08.836049 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (498) Apr 13 19:26:08.853554 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (493) Apr 13 19:26:08.857736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 13 19:26:08.876282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 13 19:26:08.889704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Apr 13 19:26:08.913718 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 13 19:26:08.926808 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:26:08.959610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:26:08.968600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:26:08.978603 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:26:09.979686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:26:09.980713 disk-uuid[611]: The operation has completed successfully. Apr 13 19:26:10.051034 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 19:26:10.051128 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:26:10.076740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:26:10.087665 sh[724]: Success Apr 13 19:26:10.121611 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:26:10.431802 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:26:10.446151 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:26:10.454666 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:26:10.487623 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:26:10.487693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:26:10.493798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:26:10.498130 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:26:10.501793 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:26:10.980848 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 13 19:26:10.985425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:26:11.001860 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:26:11.012034 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 19:26:11.044918 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:26:11.044976 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:26:11.048746 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:26:11.121362 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:26:11.136726 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:26:11.161327 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:26:11.165184 systemd-networkd[898]: lo: Link UP Apr 13 19:26:11.165196 systemd-networkd[898]: lo: Gained carrier Apr 13 19:26:11.166753 systemd-networkd[898]: Enumeration completed Apr 13 19:26:11.190070 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:26:11.167476 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:26:11.167479 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:26:11.168648 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:26:11.187017 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:26:11.187331 systemd[1]: Reached target network.target - Network. Apr 13 19:26:11.224325 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 13 19:26:11.235784 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 19:26:11.249978 kernel: mlx5_core 7cd7:00:02.0 enP31959s1: Link up Apr 13 19:26:11.286266 systemd-networkd[898]: enP31959s1: Link UP Apr 13 19:26:11.290732 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: Data path switched to VF: enP31959s1 Apr 13 19:26:11.286345 systemd-networkd[898]: eth0: Link UP Apr 13 19:26:11.286464 systemd-networkd[898]: eth0: Gained carrier Apr 13 19:26:11.286473 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:26:11.294160 systemd-networkd[898]: enP31959s1: Gained carrier Apr 13 19:26:11.312622 systemd-networkd[898]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 13 19:26:12.415238 ignition[909]: Ignition 2.19.0 Apr 13 19:26:12.415247 ignition[909]: Stage: fetch-offline Apr 13 19:26:12.419326 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:26:12.415283 ignition[909]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:12.415291 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:12.415377 ignition[909]: parsed url from cmdline: "" Apr 13 19:26:12.415380 ignition[909]: no config URL provided Apr 13 19:26:12.415384 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:26:12.442733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 19:26:12.415391 ignition[909]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:26:12.415396 ignition[909]: failed to fetch config: resource requires networking Apr 13 19:26:12.415716 ignition[909]: Ignition finished successfully Apr 13 19:26:12.456571 ignition[917]: Ignition 2.19.0 Apr 13 19:26:12.456578 ignition[917]: Stage: fetch Apr 13 19:26:12.456740 ignition[917]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:12.456749 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:12.456830 ignition[917]: parsed url from cmdline: "" Apr 13 19:26:12.456833 ignition[917]: no config URL provided Apr 13 19:26:12.456837 ignition[917]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:26:12.456845 ignition[917]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:26:12.456864 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 13 19:26:12.595541 ignition[917]: GET result: OK Apr 13 19:26:12.595621 ignition[917]: config has been read from IMDS userdata Apr 13 19:26:12.595664 ignition[917]: parsing config with SHA512: 05e5a6082764eb413f1d454ef471910ad0f99fc1182fba101c62c60d27e0c19f479ce17ce7b8eb43724aa5d2b7aa8adfa5d4e8d607415d62dba870876c6aa57b Apr 13 19:26:12.599975 unknown[917]: fetched base config from "system" Apr 13 19:26:12.600468 ignition[917]: fetch: fetch complete Apr 13 19:26:12.599985 unknown[917]: fetched base config from "system" Apr 13 19:26:12.600474 ignition[917]: fetch: fetch passed Apr 13 19:26:12.599990 unknown[917]: fetched user config from "azure" Apr 13 19:26:12.600532 ignition[917]: Ignition finished successfully Apr 13 19:26:12.603801 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:26:12.625735 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 13 19:26:12.643523 ignition[923]: Ignition 2.19.0 Apr 13 19:26:12.643534 ignition[923]: Stage: kargs Apr 13 19:26:12.643724 ignition[923]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:12.651613 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 19:26:12.643734 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:12.644764 ignition[923]: kargs: kargs passed Apr 13 19:26:12.644814 ignition[923]: Ignition finished successfully Apr 13 19:26:12.676072 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:26:12.701467 ignition[929]: Ignition 2.19.0 Apr 13 19:26:12.701484 ignition[929]: Stage: disks Apr 13 19:26:12.704898 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:26:12.701764 ignition[929]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:12.711008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:26:12.701785 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:12.717941 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:26:12.703744 ignition[929]: disks: disks passed Apr 13 19:26:12.727779 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:26:12.703809 ignition[929]: Ignition finished successfully Apr 13 19:26:12.736387 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:26:12.746155 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:26:12.766779 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:26:12.803661 systemd-networkd[898]: eth0: Gained IPv6LL Apr 13 19:26:12.854852 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 13 19:26:12.864497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 13 19:26:12.879842 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:26:12.935634 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:26:12.936055 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 19:26:12.940119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:26:12.984662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:26:13.005599 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Apr 13 19:26:13.016499 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:26:13.016533 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:26:13.018695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:26:13.028444 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:26:13.035602 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:26:13.037776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 19:26:13.042931 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:26:13.042961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:26:13.055070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:26:13.068365 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:26:13.087877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 19:26:13.914480 coreos-metadata[964]: Apr 13 19:26:13.914 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 13 19:26:13.921135 coreos-metadata[964]: Apr 13 19:26:13.921 INFO Fetch successful Apr 13 19:26:13.921135 coreos-metadata[964]: Apr 13 19:26:13.921 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 13 19:26:13.933656 coreos-metadata[964]: Apr 13 19:26:13.930 INFO Fetch successful Apr 13 19:26:13.950642 coreos-metadata[964]: Apr 13 19:26:13.950 INFO wrote hostname ci-4081.3.7-a-e37b9c2d0c to /sysroot/etc/hostname Apr 13 19:26:13.957857 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:26:14.461011 initrd-setup-root[979]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:26:14.515376 initrd-setup-root[986]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:26:14.543513 initrd-setup-root[993]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:26:14.549071 initrd-setup-root[1000]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:26:15.713183 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:26:15.726748 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:26:15.733087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:26:15.752667 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:26:15.754787 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 19:26:15.781566 ignition[1068]: INFO : Ignition 2.19.0 Apr 13 19:26:15.787299 ignition[1068]: INFO : Stage: mount Apr 13 19:26:15.787299 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:15.787299 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:15.787628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 13 19:26:15.814942 ignition[1068]: INFO : mount: mount passed Apr 13 19:26:15.814942 ignition[1068]: INFO : Ignition finished successfully Apr 13 19:26:15.798609 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:26:15.822804 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:26:15.838549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:26:15.857605 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1080) Apr 13 19:26:15.868722 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:26:15.868753 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:26:15.872331 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:26:15.880666 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:26:15.880948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:26:15.907292 ignition[1097]: INFO : Ignition 2.19.0 Apr 13 19:26:15.907292 ignition[1097]: INFO : Stage: files Apr 13 19:26:15.914294 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:26:15.914294 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:26:15.914294 ignition[1097]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:26:15.933580 ignition[1097]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:26:15.933580 ignition[1097]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:26:16.236077 ignition[1097]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:26:16.241760 ignition[1097]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:26:16.241760 ignition[1097]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:26:16.236829 
unknown[1097]: wrote ssh authorized keys file for user: core Apr 13 19:26:16.258319 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:26:16.266606 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:26:16.334234 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:26:16.536490 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:26:16.536490 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:26:16.551663 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Apr 13 19:26:16.847983 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:26:17.190088 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 13 19:26:17.190088 ignition[1097]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:26:17.208360 ignition[1097]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(d): [started] setting preset to enabled for 
"prepare-helm.service" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:26:17.216372 ignition[1097]: INFO : files: files passed Apr 13 19:26:17.216372 ignition[1097]: INFO : Ignition finished successfully Apr 13 19:26:17.230044 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:26:17.262875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:26:17.270761 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 19:26:17.302479 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:26:17.302846 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:26:17.318645 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:26:17.318645 initrd-setup-root-after-ignition[1125]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:26:17.337377 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:26:17.325222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:26:17.331103 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:26:17.354775 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:26:17.382416 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Apr 13 19:26:17.384762 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:26:17.396683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:26:17.401230 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:26:17.409444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:26:17.420821 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:26:17.440640 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:26:17.453864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:26:17.468685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:26:17.478817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:26:17.489120 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:26:17.497128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 19:26:17.497318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:26:17.511947 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:26:17.520329 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:26:17.528205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:26:17.537181 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:26:17.546909 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:26:17.556284 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:26:17.565487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:26:17.571495 systemd[1]: Stopped target sysinit.target - System Initialization. 
Apr 13 19:26:17.581372 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:26:17.591558 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:26:17.599006 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:26:17.599181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:26:17.612142 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:17.621082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:17.630360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:26:17.630462 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:17.640103 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:26:17.640261 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:26:17.653543 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:26:17.653704 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:26:17.662241 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:26:17.662381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:26:17.670163 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:26:17.670295 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:26:17.694883 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:26:17.714696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:26:17.729744 ignition[1150]: INFO : Ignition 2.19.0
Apr 13 19:26:17.729744 ignition[1150]: INFO : Stage: umount
Apr 13 19:26:17.729744 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:26:17.729744 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:26:17.729744 ignition[1150]: INFO : umount: umount passed
Apr 13 19:26:17.729744 ignition[1150]: INFO : Ignition finished successfully
Apr 13 19:26:17.724395 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:26:17.724542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:26:17.730902 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:26:17.731046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:26:17.746142 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:26:17.746239 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:26:17.755476 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:26:17.755540 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:26:17.760336 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:26:17.760378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:26:17.768583 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:26:17.768674 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:26:17.776559 systemd[1]: Stopped target network.target - Network.
Apr 13 19:26:17.788461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:26:17.788529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:26:17.797938 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:26:17.802076 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:26:17.802571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:26:17.813873 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:26:17.817702 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:26:17.831693 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:26:17.831738 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:26:17.839751 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:26:17.839793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:26:17.848348 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:26:17.848395 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:26:17.857337 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:26:17.857373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:26:17.866446 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:26:17.871019 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:26:17.880703 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:26:17.881219 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:26:17.881304 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:26:17.885486 systemd-networkd[898]: eth0: DHCPv6 lease lost
Apr 13 19:26:18.077507 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: Data path switched from VF: enP31959s1
Apr 13 19:26:17.892941 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:26:17.895104 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:26:17.903804 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:26:17.905669 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:26:17.914707 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:26:17.914766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:26:17.942763 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:26:17.950689 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:26:17.950754 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:26:17.960072 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:26:17.960115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:26:17.969386 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:26:17.969421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:26:17.978495 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:26:17.978536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:26:17.989213 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:26:18.019351 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:26:18.019507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:26:18.030568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:26:18.030621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:26:18.040245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:26:18.040284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:26:18.049720 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:26:18.049769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:26:18.073744 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:26:18.073807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:26:18.087296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:26:18.087362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:26:18.121198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:26:18.132692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:26:18.132768 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:26:18.144722 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 19:26:18.144777 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:26:18.157305 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:26:18.157351 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:26:18.171991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:26:18.172042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:18.182431 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:26:18.182537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:26:18.191838 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:26:18.191932 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:26:18.201707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:26:18.201785 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:26:18.213663 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:26:18.223994 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:26:18.224069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:26:18.391060 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:26:18.244778 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:26:18.265663 systemd[1]: Switching root.
Apr 13 19:26:18.399268 systemd-journald[217]: Journal stopped
Apr 13 19:26:25.213232 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:26:25.213254 kernel: SELinux: policy capability open_perms=1
Apr 13 19:26:25.213264 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:26:25.213272 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:26:25.213282 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:26:25.213290 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:26:25.213299 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:26:25.213307 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:26:25.213316 kernel: audit: type=1403 audit(1776108380.224:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:26:25.213326 systemd[1]: Successfully loaded SELinux policy in 227.994ms.
Apr 13 19:26:25.213338 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.805ms.
Apr 13 19:26:25.213348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:26:25.213357 systemd[1]: Detected virtualization microsoft.
Apr 13 19:26:25.213366 systemd[1]: Detected architecture arm64.
Apr 13 19:26:25.213375 systemd[1]: Detected first boot.
Apr 13 19:26:25.213387 systemd[1]: Hostname set to .
Apr 13 19:26:25.213396 systemd[1]: Initializing machine ID from random generator.
Apr 13 19:26:25.213405 zram_generator::config[1191]: No configuration found.
Apr 13 19:26:25.213415 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:26:25.213424 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:26:25.213433 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:26:25.213442 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:26:25.213453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:26:25.213463 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:26:25.213472 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:26:25.213482 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:26:25.213491 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:26:25.213501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:26:25.213510 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:26:25.213521 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:26:25.213531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:26:25.213541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:26:25.213550 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:26:25.213560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:26:25.213569 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:26:25.213578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:26:25.213598 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:26:25.213611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:26:25.213621 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:26:25.213630 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:26:25.213641 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:26:25.213651 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:26:25.213661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:26:25.213670 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:26:25.213680 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:26:25.213690 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:26:25.213700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:26:25.213709 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:26:25.213719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:26:25.213733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:26:25.213743 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:26:25.213755 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:26:25.213765 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:26:25.213775 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:26:25.213784 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:26:25.213794 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:26:25.213803 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:26:25.213813 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:26:25.213824 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:26:25.213834 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:26:25.213844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:26:25.213854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:26:25.213864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:26:25.213874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:26:25.213884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:26:25.213893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:26:25.213905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:26:25.213914 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:26:25.213924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:26:25.213934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:26:25.213943 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:26:25.213953 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:26:25.213963 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:26:25.213973 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:26:25.213984 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:26:25.213993 kernel: fuse: init (API version 7.39)
Apr 13 19:26:25.214002 kernel: loop: module loaded
Apr 13 19:26:25.214011 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:26:25.214020 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:26:25.214044 systemd-journald[1287]: Collecting audit messages is disabled.
Apr 13 19:26:25.214068 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:26:25.214079 systemd-journald[1287]: Journal started
Apr 13 19:26:25.214100 systemd-journald[1287]: Runtime Journal (/run/log/journal/8304e0b43c104fbbab9d136a5737063f) is 8.0M, max 78.5M, 70.5M free.
Apr 13 19:26:24.202027 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:26:24.468810 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 19:26:24.469160 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:26:24.469440 systemd[1]: systemd-journald.service: Consumed 2.536s CPU time.
Apr 13 19:26:25.236402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:26:25.249332 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:26:25.249393 systemd[1]: Stopped verity-setup.service.
Apr 13 19:26:25.253991 kernel: ACPI: bus type drm_connector registered
Apr 13 19:26:25.261629 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:26:25.272216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:26:25.277672 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:26:25.283437 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:26:25.288990 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:26:25.294894 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:26:25.300830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:26:25.306370 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:26:25.314649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:26:25.320997 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:26:25.321136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:26:25.327847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:26:25.327974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:26:25.333525 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:26:25.333733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:26:25.338950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:26:25.339069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:26:25.345034 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:26:25.345177 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:26:25.350863 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:26:25.350994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:26:25.356521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:26:25.364149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:26:25.370877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:26:25.377040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:26:25.392755 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:26:25.404667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:26:25.411066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:26:25.416682 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:26:25.416717 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:26:25.422388 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:26:25.429418 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:26:25.436271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:26:25.440798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:26:25.467720 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:26:25.474041 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:26:25.480440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:26:25.481408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:26:25.486739 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:26:25.488770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:26:25.495934 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:26:25.511315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:26:25.519544 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:26:25.535303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:26:25.543235 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:26:25.550257 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:26:25.558800 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:26:25.569560 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:26:25.580623 kernel: loop0: detected capacity change from 0 to 114432
Apr 13 19:26:25.582881 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:26:25.589580 udevadm[1329]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 19:26:25.615571 systemd-journald[1287]: Time spent on flushing to /var/log/journal/8304e0b43c104fbbab9d136a5737063f is 14.516ms for 903 entries.
Apr 13 19:26:25.615571 systemd-journald[1287]: System Journal (/var/log/journal/8304e0b43c104fbbab9d136a5737063f) is 8.0M, max 2.6G, 2.6G free.
Apr 13 19:26:25.666727 systemd-journald[1287]: Received client request to flush runtime journal.
Apr 13 19:26:25.629770 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:26:25.659918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:26:25.660533 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:26:25.668441 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:26:25.685304 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Apr 13 19:26:25.685324 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Apr 13 19:26:25.690970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:26:25.708941 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:26:25.841659 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:26:25.859740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:26:25.873709 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Apr 13 19:26:25.874014 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Apr 13 19:26:25.877932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:26:26.149625 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:26:26.189613 kernel: loop1: detected capacity change from 0 to 31320
Apr 13 19:26:26.358218 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:26:26.367767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:26:26.393528 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Apr 13 19:26:26.663599 kernel: loop2: detected capacity change from 0 to 114328
Apr 13 19:26:26.718616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:26:26.736645 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:26:26.781778 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 13 19:26:26.807792 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:26:26.878529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 13 19:26:26.878854 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 19:26:26.906514 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:26:26.951669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:26.962130 kernel: hv_vmbus: registering driver hv_balloon
Apr 13 19:26:26.970280 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 13 19:26:26.970334 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 13 19:26:26.971782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:26:26.971951 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:26.983914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:26.992714 kernel: hv_vmbus: registering driver hyperv_fb
Apr 13 19:26:27.004154 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 13 19:26:27.004170 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 13 19:26:27.012893 kernel: Console: switching to colour dummy device 80x25
Apr 13 19:26:27.015659 kernel: Console: switching to colour frame buffer device 128x48
Apr 13 19:26:27.033228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:26:27.033396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:27.039907 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1359)
Apr 13 19:26:27.057923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:26:27.099348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 13 19:26:27.106567 systemd-networkd[1363]: lo: Link UP
Apr 13 19:26:27.106577 systemd-networkd[1363]: lo: Gained carrier
Apr 13 19:26:27.109882 systemd-networkd[1363]: Enumeration completed
Apr 13 19:26:27.111906 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:26:27.112757 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:26:27.112765 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:26:27.120173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:26:27.129381 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:26:27.168638 kernel: mlx5_core 7cd7:00:02.0 enP31959s1: Link up
Apr 13 19:26:27.173639 kernel: loop3: detected capacity change from 0 to 197488
Apr 13 19:26:27.193687 kernel: hv_netvsc 002248b5-e9d6-0022-48b5-e9d6002248b5 eth0: Data path switched to VF: enP31959s1
Apr 13 19:26:27.195185 systemd-networkd[1363]: enP31959s1: Link UP
Apr 13 19:26:27.195626 systemd-networkd[1363]: eth0: Link UP
Apr 13 19:26:27.195630 systemd-networkd[1363]: eth0: Gained carrier
Apr 13 19:26:27.195646 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:26:27.206366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:26:27.206866 systemd-networkd[1363]: enP31959s1: Gained carrier
Apr 13 19:26:27.217636 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 13 19:26:27.255611 kernel: loop4: detected capacity change from 0 to 114432
Apr 13 19:26:27.276629 kernel: loop5: detected capacity change from 0 to 31320
Apr 13 19:26:27.289692 kernel: loop6: detected capacity change from 0 to 114328
Apr 13 19:26:27.303619 kernel: loop7: detected capacity change from 0 to 197488
Apr 13 19:26:27.326624 (sd-merge)[1449]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 13 19:26:27.327098 (sd-merge)[1449]: Merged extensions into '/usr'.
Apr 13 19:26:27.330576 systemd[1]: Reloading requested from client PID 1325 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:26:27.330608 systemd[1]: Reloading...
Apr 13 19:26:27.396644 zram_generator::config[1480]: No configuration found.
Apr 13 19:26:27.531963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:26:27.604618 systemd[1]: Reloading finished in 273 ms.
Apr 13 19:26:27.642695 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:26:27.658834 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:26:27.665058 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:26:27.673400 systemd[1]: Reloading requested from client PID 1535 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:26:27.673412 systemd[1]: Reloading...
Apr 13 19:26:27.693605 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:26:27.694163 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:26:27.694952 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:26:27.695245 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Apr 13 19:26:27.695354 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Apr 13 19:26:27.698597 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:26:27.698806 systemd-tmpfiles[1536]: Skipping /boot
Apr 13 19:26:27.707671 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:26:27.707805 systemd-tmpfiles[1536]: Skipping /boot
Apr 13 19:26:27.753649 zram_generator::config[1568]: No configuration found.
Apr 13 19:26:27.852267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:26:27.926613 systemd[1]: Reloading finished in 252 ms.
Apr 13 19:26:27.941272 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:26:27.952079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:26:27.971808 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:26:27.979879 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:26:27.988123 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:26:27.997112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:26:28.008845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:26:28.017862 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:26:28.026525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:26:28.027921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:26:28.042888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:26:28.053905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:26:28.061103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:26:28.062024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:26:28.074267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:26:28.074428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:26:28.080886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:26:28.082757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:26:28.090375 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:26:28.090528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:26:28.105044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:26:28.114878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:26:28.123863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:26:28.125216 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:26:28.135827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:26:28.141991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:26:28.143533 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:26:28.144012 systemd-resolved[1638]: Positive Trust Anchors:
Apr 13 19:26:28.144389 systemd-resolved[1638]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:26:28.144781 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:26:28.154799 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:26:28.162368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:26:28.163646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:26:28.170168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:26:28.170876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:26:28.177490 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:26:28.177695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:26:28.184024 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:26:28.197922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:26:28.203647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:26:28.216135 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:26:28.219831 lvm[1667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:26:28.223846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:26:28.230313 systemd-networkd[1363]: eth0: Gained IPv6LL
Apr 13 19:26:28.236828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:26:28.246815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:26:28.254081 augenrules[1660]: No rules
Apr 13 19:26:28.256486 systemd-resolved[1638]: Using system hostname 'ci-4081.3.7-a-e37b9c2d0c'.
Apr 13 19:26:28.263796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:26:28.268801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:26:28.269062 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:26:28.274433 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:26:28.280517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:26:28.288057 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:26:28.293850 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:26:28.299963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:26:28.300218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:26:28.305691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:26:28.305922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:26:28.311340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:26:28.311550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:26:28.317940 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:26:28.318159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:26:28.327240 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:26:28.335011 systemd[1]: Reached target network.target - Network.
Apr 13 19:26:28.340044 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:26:28.346054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:26:28.351368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:26:28.351503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:26:29.237124 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:26:29.243935 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:26:33.238613 ldconfig[1320]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:26:33.255231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:26:33.264793 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:26:33.277872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:26:33.283132 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:26:33.287913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:26:33.294328 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:26:33.300701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:26:33.305757 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:26:33.311741 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:26:33.317415 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 19:26:33.317450 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:26:33.321672 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:26:33.343902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 19:26:33.350066 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 19:26:33.359281 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 19:26:33.364509 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 19:26:33.369302 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:26:33.373458 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:26:33.377755 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:26:33.377784 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:26:33.386687 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 13 19:26:33.393744 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 19:26:33.406830 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 19:26:33.412470 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 13 19:26:33.416855 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 19:26:33.422705 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 19:26:33.428369 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 19:26:33.435983 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 19:26:33.436031 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 13 19:26:33.439509 chronyd[1701]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 13 19:26:33.442790 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 13 19:26:33.447737 jq[1696]: false
Apr 13 19:26:33.448358 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 13 19:26:33.449605 KVP[1698]: KVP starting; pid is:1698
Apr 13 19:26:33.449952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:26:33.458790 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 19:26:33.465937 chronyd[1701]: Timezone right/UTC failed leap second check, ignoring
Apr 13 19:26:33.466439 chronyd[1701]: Loaded seccomp filter (level 2)
Apr 13 19:26:33.467884 extend-filesystems[1697]: Found loop4
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found loop5
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found loop6
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found loop7
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda1
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda2
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda3
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found usr
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda4
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda6
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda7
Apr 13 19:26:33.471635 extend-filesystems[1697]: Found sda9
Apr 13 19:26:33.471635 extend-filesystems[1697]: Checking size of /dev/sda9
Apr 13 19:26:33.570511 kernel: hv_utils: KVP IC version 4.0
Apr 13 19:26:33.557568 KVP[1698]: KVP LIC Version: 3.1
Apr 13 19:26:33.475812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:26:33.490712 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 19:26:33.502856 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 19:26:33.524256 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 19:26:33.540070 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 19:26:33.551203 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 19:26:33.551677 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 19:26:33.554866 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 19:26:33.572750 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 19:26:33.579846 systemd[1]: Started chronyd.service - NTP client/server.
Apr 13 19:26:33.596173 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 19:26:33.596347 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 19:26:33.598912 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:26:33.599092 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:26:33.609153 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:26:33.615391 extend-filesystems[1697]: Old size kept for /dev/sda9
Apr 13 19:26:33.615391 extend-filesystems[1697]: Found sr0
Apr 13 19:26:33.644661 jq[1723]: true
Apr 13 19:26:33.620321 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:26:33.620479 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:26:33.652116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 19:26:33.653070 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 19:26:33.653500 update_engine[1721]: I20260413 19:26:33.653115 1721 main.cc:92] Flatcar Update Engine starting
Apr 13 19:26:33.684394 jq[1735]: true
Apr 13 19:26:33.699957 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 19:26:33.703205 systemd-logind[1714]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 19:26:33.704868 systemd-logind[1714]: New seat seat0.
Apr 13 19:26:33.708776 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:26:33.717267 dbus-daemon[1693]: [system] SELinux support is enabled
Apr 13 19:26:33.717435 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 19:26:33.732187 update_engine[1721]: I20260413 19:26:33.729543 1721 update_check_scheduler.cc:74] Next update check in 5m46s
Apr 13 19:26:33.731470 dbus-daemon[1693]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 19:26:33.730548 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 19:26:33.730571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 19:26:33.738910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 19:26:33.738934 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 19:26:33.745973 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:26:33.753104 tar[1733]: linux-arm64/LICENSE
Apr 13 19:26:33.832433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1754)
Apr 13 19:26:33.832511 tar[1733]: linux-arm64/helm
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.817 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.821 INFO Fetch successful
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.821 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.826 INFO Fetch successful
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.826 INFO Fetching http://168.63.129.16/machine/4c361448-60a1-4417-87bb-2f3587bbed7c/1a647625%2D734b%2D446f%2Da5d2%2Db05509d6c47e.%5Fci%2D4081.3.7%2Da%2De37b9c2d0c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.828 INFO Fetch successful
Apr 13 19:26:33.832542 coreos-metadata[1692]: Apr 13 19:26:33.828 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 13 19:26:33.840505 coreos-metadata[1692]: Apr 13 19:26:33.838 INFO Fetch successful
Apr 13 19:26:33.842470 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:26:33.934385 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:26:33.949936 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:26:33.975009 bash[1805]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:26:33.975895 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:26:33.992664 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 19:26:34.056366 locksmithd[1780]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:26:34.578052 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:26:34.614936 containerd[1736]: time="2026-04-13T19:26:34.614856600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:26:34.628917 tar[1733]: linux-arm64/README.md
Apr 13 19:26:34.643929 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:26:34.649842 containerd[1736]: time="2026-04-13T19:26:34.649793480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.652288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:26:34.658620 containerd[1736]: time="2026-04-13T19:26:34.658519600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:26:34.658620 containerd[1736]: time="2026-04-13T19:26:34.658557320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:26:34.658620 containerd[1736]: time="2026-04-13T19:26:34.658574680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.658740040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.658764440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.658838160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.658851200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659019920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659034880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659047800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659057600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659123800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659306200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:26:34.660793 containerd[1736]: time="2026-04-13T19:26:34.659394440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:26:34.661045 containerd[1736]: time="2026-04-13T19:26:34.659407440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:26:34.661045 containerd[1736]: time="2026-04-13T19:26:34.659481240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:26:34.661045 containerd[1736]: time="2026-04-13T19:26:34.659518440Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:26:34.667333 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:26:34.674903 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 13 19:26:34.684680 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:26:34.685643 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.685620720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.685696560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.685714240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.685733640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.685756320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.686889640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687149200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687252280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687270400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687284480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687299200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687312880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687327000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690148 containerd[1736]: time="2026-04-13T19:26:34.687340880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687355120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687368120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687381000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687394440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687414880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687430560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687453760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687469120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687483000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687496680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687509400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687522080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687534280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690425 containerd[1736]: time="2026-04-13T19:26:34.687549960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.687561640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.687574440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.688194840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.688220960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.688246280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.688271920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.688284560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689883160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689924960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689938040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689950960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689960640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689981720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:26:34.690720 containerd[1736]: time="2026-04-13T19:26:34.689993640Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:26:34.690967 containerd[1736]: time="2026-04-13T19:26:34.690008000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:26:34.691626 containerd[1736]: time="2026-04-13T19:26:34.691033960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:26:34.691626 containerd[1736]: time="2026-04-13T19:26:34.691203280Z" level=info msg="Connect containerd service" Apr 13 19:26:34.691626 containerd[1736]: time="2026-04-13T19:26:34.691620120Z" level=info msg="using legacy CRI server" Apr 13 19:26:34.691626 containerd[1736]: time="2026-04-13T19:26:34.691630960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:26:34.691817 containerd[1736]: time="2026-04-13T19:26:34.691743360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.694977000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695306600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695352760Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695395400Z" level=info msg="Start subscribing containerd event" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695448680Z" level=info msg="Start recovering state" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695519360Z" level=info msg="Start event monitor" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695530160Z" level=info msg="Start snapshots syncer" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695538440Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.695544960Z" level=info msg="Start streaming server" Apr 13 19:26:34.698667 containerd[1736]: time="2026-04-13T19:26:34.696433320Z" level=info msg="containerd successfully booted in 0.082322s" Apr 13 19:26:34.699326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:26:34.708748 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:26:34.731002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:26:34.742893 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:26:34.748647 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 13 19:26:34.754308 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:26:34.771461 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 13 19:26:34.812107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:26:34.818392 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 13 19:26:34.819031 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:26:34.824744 systemd[1]: Startup finished in 615ms (kernel) + 13.904s (initrd) + 14.827s (userspace) = 29.348s. Apr 13 19:26:35.231164 login[1849]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:35.231506 login[1848]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:35.243062 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:26:35.247866 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:26:35.250240 systemd-logind[1714]: New session 2 of user core. Apr 13 19:26:35.255495 kubelet[1857]: E0413 19:26:35.254783 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:26:35.255751 systemd-logind[1714]: New session 1 of user core. Apr 13 19:26:35.256760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:26:35.256884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:26:35.283646 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:26:35.292142 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:26:35.295011 (systemd)[1870]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:26:35.472764 systemd[1870]: Queued start job for default target default.target. Apr 13 19:26:35.480495 systemd[1870]: Created slice app.slice - User Application Slice. 
Apr 13 19:26:35.480525 systemd[1870]: Reached target paths.target - Paths. Apr 13 19:26:35.480537 systemd[1870]: Reached target timers.target - Timers. Apr 13 19:26:35.483785 systemd[1870]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:26:35.492751 systemd[1870]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:26:35.492955 systemd[1870]: Reached target sockets.target - Sockets. Apr 13 19:26:35.492972 systemd[1870]: Reached target basic.target - Basic System. Apr 13 19:26:35.493015 systemd[1870]: Reached target default.target - Main User Target. Apr 13 19:26:35.493042 systemd[1870]: Startup finished in 192ms. Apr 13 19:26:35.493315 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:26:35.501014 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:26:35.501749 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:26:36.812461 waagent[1850]: 2026-04-13T19:26:36.812367Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 13 19:26:36.817331 waagent[1850]: 2026-04-13T19:26:36.817273Z INFO Daemon Daemon OS: flatcar 4081.3.7 Apr 13 19:26:36.820923 waagent[1850]: 2026-04-13T19:26:36.820879Z INFO Daemon Daemon Python: 3.11.9 Apr 13 19:26:36.824394 waagent[1850]: 2026-04-13T19:26:36.824195Z INFO Daemon Daemon Run daemon Apr 13 19:26:36.827287 waagent[1850]: 2026-04-13T19:26:36.827251Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.7' Apr 13 19:26:36.834406 waagent[1850]: 2026-04-13T19:26:36.834357Z INFO Daemon Daemon Using waagent for provisioning Apr 13 19:26:36.838435 waagent[1850]: 2026-04-13T19:26:36.838399Z INFO Daemon Daemon Activate resource disk Apr 13 19:26:36.841979 waagent[1850]: 2026-04-13T19:26:36.841946Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 13 19:26:36.850893 waagent[1850]: 2026-04-13T19:26:36.850851Z INFO Daemon Daemon Found device: 
None Apr 13 19:26:36.854274 waagent[1850]: 2026-04-13T19:26:36.854240Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 13 19:26:36.860573 waagent[1850]: 2026-04-13T19:26:36.860539Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 13 19:26:36.870829 waagent[1850]: 2026-04-13T19:26:36.870784Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 13 19:26:36.875086 waagent[1850]: 2026-04-13T19:26:36.875049Z INFO Daemon Daemon Running default provisioning handler Apr 13 19:26:36.885783 waagent[1850]: 2026-04-13T19:26:36.885561Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 13 19:26:36.896917 waagent[1850]: 2026-04-13T19:26:36.896854Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 13 19:26:36.904262 waagent[1850]: 2026-04-13T19:26:36.904215Z INFO Daemon Daemon cloud-init is enabled: False Apr 13 19:26:36.908183 waagent[1850]: 2026-04-13T19:26:36.908140Z INFO Daemon Daemon Copying ovf-env.xml Apr 13 19:26:36.968613 waagent[1850]: 2026-04-13T19:26:36.968495Z INFO Daemon Daemon Successfully mounted dvd Apr 13 19:26:36.982999 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 13 19:26:36.985620 waagent[1850]: 2026-04-13T19:26:36.985304Z INFO Daemon Daemon Detect protocol endpoint Apr 13 19:26:36.989703 waagent[1850]: 2026-04-13T19:26:36.989583Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 13 19:26:36.994714 waagent[1850]: 2026-04-13T19:26:36.994666Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 13 19:26:37.000458 waagent[1850]: 2026-04-13T19:26:37.000415Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 13 19:26:37.005177 waagent[1850]: 2026-04-13T19:26:37.005134Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 13 19:26:37.009857 waagent[1850]: 2026-04-13T19:26:37.009814Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 13 19:26:37.062717 waagent[1850]: 2026-04-13T19:26:37.062634Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 13 19:26:37.068223 waagent[1850]: 2026-04-13T19:26:37.068199Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 13 19:26:37.072735 waagent[1850]: 2026-04-13T19:26:37.072703Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 13 19:26:37.284206 waagent[1850]: 2026-04-13T19:26:37.284096Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 13 19:26:37.289910 waagent[1850]: 2026-04-13T19:26:37.289851Z INFO Daemon Daemon Forcing an update of the goal state. Apr 13 19:26:37.298051 waagent[1850]: 2026-04-13T19:26:37.297997Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 13 19:26:37.318332 waagent[1850]: 2026-04-13T19:26:37.318257Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Apr 13 19:26:37.323026 waagent[1850]: 2026-04-13T19:26:37.322981Z INFO Daemon Apr 13 19:26:37.325200 waagent[1850]: 2026-04-13T19:26:37.325163Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ca4a5fd5-8291-4784-ae43-3b08c7415fd5 eTag: 13795949699099640915 source: Fabric] Apr 13 19:26:37.333915 waagent[1850]: 2026-04-13T19:26:37.333874Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 13 19:26:37.339410 waagent[1850]: 2026-04-13T19:26:37.339366Z INFO Daemon Apr 13 19:26:37.341651 waagent[1850]: 2026-04-13T19:26:37.341615Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 13 19:26:37.350413 waagent[1850]: 2026-04-13T19:26:37.350380Z INFO Daemon Daemon Downloading artifacts profile blob Apr 13 19:26:37.504088 waagent[1850]: 2026-04-13T19:26:37.503997Z INFO Daemon Downloaded certificate {'thumbprint': '1F2EFD9766824DD6AEF49AC90853455E522D2711', 'hasPrivateKey': True} Apr 13 19:26:37.513188 waagent[1850]: 2026-04-13T19:26:37.513141Z INFO Daemon Fetch goal state completed Apr 13 19:26:37.561394 waagent[1850]: 2026-04-13T19:26:37.561336Z INFO Daemon Daemon Starting provisioning Apr 13 19:26:37.565422 waagent[1850]: 2026-04-13T19:26:37.565380Z INFO Daemon Daemon Handle ovf-env.xml. Apr 13 19:26:37.569027 waagent[1850]: 2026-04-13T19:26:37.568967Z INFO Daemon Daemon Set hostname [ci-4081.3.7-a-e37b9c2d0c] Apr 13 19:26:37.576185 waagent[1850]: 2026-04-13T19:26:37.576130Z INFO Daemon Daemon Publish hostname [ci-4081.3.7-a-e37b9c2d0c] Apr 13 19:26:37.581318 waagent[1850]: 2026-04-13T19:26:37.581264Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 13 19:26:37.586303 waagent[1850]: 2026-04-13T19:26:37.586259Z INFO Daemon Daemon Primary interface is [eth0] Apr 13 19:26:37.635681 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:26:37.635688 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 13 19:26:37.635728 systemd-networkd[1363]: eth0: DHCP lease lost Apr 13 19:26:37.636858 waagent[1850]: 2026-04-13T19:26:37.636786Z INFO Daemon Daemon Create user account if not exists Apr 13 19:26:37.641386 waagent[1850]: 2026-04-13T19:26:37.641344Z INFO Daemon Daemon User core already exists, skip useradd Apr 13 19:26:37.645827 waagent[1850]: 2026-04-13T19:26:37.645789Z INFO Daemon Daemon Configure sudoer Apr 13 19:26:37.647628 systemd-networkd[1363]: eth0: DHCPv6 lease lost Apr 13 19:26:37.649906 waagent[1850]: 2026-04-13T19:26:37.649855Z INFO Daemon Daemon Configure sshd Apr 13 19:26:37.653736 waagent[1850]: 2026-04-13T19:26:37.653693Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 13 19:26:37.664552 waagent[1850]: 2026-04-13T19:26:37.664504Z INFO Daemon Daemon Deploy ssh public key. Apr 13 19:26:37.674643 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 13 19:26:38.785610 waagent[1850]: 2026-04-13T19:26:38.781558Z INFO Daemon Daemon Provisioning complete Apr 13 19:26:38.798678 waagent[1850]: 2026-04-13T19:26:38.798633Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 13 19:26:38.805733 waagent[1850]: 2026-04-13T19:26:38.805678Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 13 19:26:38.814404 waagent[1850]: 2026-04-13T19:26:38.814358Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 13 19:26:38.948394 waagent[1921]: 2026-04-13T19:26:38.947737Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 13 19:26:38.948394 waagent[1921]: 2026-04-13T19:26:38.947886Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.7 Apr 13 19:26:38.948394 waagent[1921]: 2026-04-13T19:26:38.947941Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 13 19:26:39.331617 waagent[1921]: 2026-04-13T19:26:39.331110Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.7; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 13 19:26:39.331617 waagent[1921]: 2026-04-13T19:26:39.331354Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:26:39.331617 waagent[1921]: 2026-04-13T19:26:39.331415Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:26:39.339491 waagent[1921]: 2026-04-13T19:26:39.339424Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 13 19:26:39.344950 waagent[1921]: 2026-04-13T19:26:39.344908Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Apr 13 19:26:39.345449 waagent[1921]: 2026-04-13T19:26:39.345410Z INFO ExtHandler Apr 13 19:26:39.345518 waagent[1921]: 2026-04-13T19:26:39.345489Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 68856281-9a71-4fe0-9bc4-8fa55439b981 eTag: 13795949699099640915 source: Fabric] Apr 13 19:26:39.345830 waagent[1921]: 2026-04-13T19:26:39.345791Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 13 19:26:39.359373 waagent[1921]: 2026-04-13T19:26:39.359297Z INFO ExtHandler Apr 13 19:26:39.359464 waagent[1921]: 2026-04-13T19:26:39.359437Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 13 19:26:39.364634 waagent[1921]: 2026-04-13T19:26:39.363909Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 13 19:26:39.522064 waagent[1921]: 2026-04-13T19:26:39.520582Z INFO ExtHandler Downloaded certificate {'thumbprint': '1F2EFD9766824DD6AEF49AC90853455E522D2711', 'hasPrivateKey': True} Apr 13 19:26:39.522064 waagent[1921]: 2026-04-13T19:26:39.521180Z INFO ExtHandler Fetch goal state completed Apr 13 19:26:39.536610 waagent[1921]: 2026-04-13T19:26:39.535178Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1921 Apr 13 19:26:39.536610 waagent[1921]: 2026-04-13T19:26:39.535344Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 13 19:26:39.537161 waagent[1921]: 2026-04-13T19:26:39.537120Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.7', '', 'Flatcar Container Linux by Kinvolk'] Apr 13 19:26:39.537628 waagent[1921]: 2026-04-13T19:26:39.537578Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 13 19:26:39.675014 waagent[1921]: 2026-04-13T19:26:39.674583Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 13 19:26:39.675014 waagent[1921]: 2026-04-13T19:26:39.674802Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 13 19:26:39.680737 waagent[1921]: 2026-04-13T19:26:39.680701Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 13 19:26:39.690423 systemd[1]: Reloading requested from client PID 1934 ('systemctl') (unit waagent.service)... Apr 13 19:26:39.690437 systemd[1]: Reloading... 
Apr 13 19:26:39.770844 zram_generator::config[1964]: No configuration found. Apr 13 19:26:39.868492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:26:39.942775 systemd[1]: Reloading finished in 252 ms. Apr 13 19:26:39.965837 waagent[1921]: 2026-04-13T19:26:39.964675Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 13 19:26:39.970551 systemd[1]: Reloading requested from client PID 2022 ('systemctl') (unit waagent.service)... Apr 13 19:26:39.970564 systemd[1]: Reloading... Apr 13 19:26:40.058618 zram_generator::config[2062]: No configuration found. Apr 13 19:26:40.140751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:26:40.215140 systemd[1]: Reloading finished in 244 ms. Apr 13 19:26:40.240713 waagent[1921]: 2026-04-13T19:26:40.239932Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 13 19:26:40.240713 waagent[1921]: 2026-04-13T19:26:40.240083Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 13 19:26:40.776297 waagent[1921]: 2026-04-13T19:26:40.776214Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 13 19:26:40.776883 waagent[1921]: 2026-04-13T19:26:40.776837Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 13 19:26:40.777683 waagent[1921]: 2026-04-13T19:26:40.777606Z INFO ExtHandler ExtHandler Starting env monitor service. 
Apr 13 19:26:40.778072 waagent[1921]: 2026-04-13T19:26:40.777971Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 13 19:26:40.779054 waagent[1921]: 2026-04-13T19:26:40.778284Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:26:40.779054 waagent[1921]: 2026-04-13T19:26:40.778372Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:26:40.779054 waagent[1921]: 2026-04-13T19:26:40.778570Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 13 19:26:40.779054 waagent[1921]: 2026-04-13T19:26:40.778777Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 13 19:26:40.779054 waagent[1921]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 13 19:26:40.779054 waagent[1921]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 13 19:26:40.779054 waagent[1921]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 13 19:26:40.779054 waagent[1921]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:26:40.779054 waagent[1921]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:26:40.779054 waagent[1921]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:26:40.779431 waagent[1921]: 2026-04-13T19:26:40.779380Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:26:40.779600 waagent[1921]: 2026-04-13T19:26:40.779549Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 13 19:26:40.779681 waagent[1921]: 2026-04-13T19:26:40.779631Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 13 19:26:40.780164 waagent[1921]: 2026-04-13T19:26:40.780063Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 13 19:26:40.780234 waagent[1921]: 2026-04-13T19:26:40.780161Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Apr 13 19:26:40.780335 waagent[1921]: 2026-04-13T19:26:40.780292Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:26:40.780909 waagent[1921]: 2026-04-13T19:26:40.780857Z INFO EnvHandler ExtHandler Configure routes Apr 13 19:26:40.781044 waagent[1921]: 2026-04-13T19:26:40.781011Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 13 19:26:40.781712 waagent[1921]: 2026-04-13T19:26:40.781678Z INFO EnvHandler ExtHandler Gateway:None Apr 13 19:26:40.781850 waagent[1921]: 2026-04-13T19:26:40.781818Z INFO EnvHandler ExtHandler Routes:None Apr 13 19:26:40.785826 waagent[1921]: 2026-04-13T19:26:40.785781Z INFO ExtHandler ExtHandler Apr 13 19:26:40.786242 waagent[1921]: 2026-04-13T19:26:40.786181Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d5e5e2bc-75aa-47d0-893e-e73606a57ef1 correlation 86610124-4da6-49af-9bb0-1719d1ae6098 created: 2026-04-13T19:25:29.823822Z] Apr 13 19:26:40.787317 waagent[1921]: 2026-04-13T19:26:40.787211Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 13 19:26:40.788978 waagent[1921]: 2026-04-13T19:26:40.788708Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 13 19:26:40.825313 waagent[1921]: 2026-04-13T19:26:40.825263Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 955A1328-6208-4D47-A04B-07874BB86D0B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 13 19:26:40.920206 waagent[1921]: 2026-04-13T19:26:40.920136Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Apr 13 19:26:40.920206 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.920206 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.920206 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.920206 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.920206 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.920206 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.920206 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 13 19:26:40.920206 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 13 19:26:40.920206 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 13 19:26:40.923608 waagent[1921]: 2026-04-13T19:26:40.923229Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 13 19:26:40.923608 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.923608 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.923608 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.923608 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.923608 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:26:40.923608 waagent[1921]: pkts bytes target prot opt in out source destination Apr 13 19:26:40.923608 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 13 19:26:40.923608 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 13 19:26:40.923608 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 13 19:26:40.923608 waagent[1921]: 2026-04-13T19:26:40.923473Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 13 19:26:40.936993 waagent[1921]: 
2026-04-13T19:26:40.936633Z INFO MonitorHandler ExtHandler Network interfaces: Apr 13 19:26:40.936993 waagent[1921]: Executing ['ip', '-a', '-o', 'link']: Apr 13 19:26:40.936993 waagent[1921]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 13 19:26:40.936993 waagent[1921]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:e9:d6 brd ff:ff:ff:ff:ff:ff Apr 13 19:26:40.936993 waagent[1921]: 3: enP31959s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:e9:d6 brd ff:ff:ff:ff:ff:ff\ altname enP31959p0s2 Apr 13 19:26:40.936993 waagent[1921]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 13 19:26:40.936993 waagent[1921]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 13 19:26:40.936993 waagent[1921]: 2: eth0 inet 10.0.0.31/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 13 19:26:40.936993 waagent[1921]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 13 19:26:40.936993 waagent[1921]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 13 19:26:40.936993 waagent[1921]: 2: eth0 inet6 fe80::222:48ff:feb5:e9d6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 13 19:26:45.411308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:26:45.418771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:26:45.532825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:26:45.546875 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:26:45.661233 kubelet[2149]: E0413 19:26:45.661166 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:26:45.664545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:26:45.664809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:26:47.808555 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:26:47.817808 systemd[1]: Started sshd@0-10.0.0.31:22-20.229.252.112:47288.service - OpenSSH per-connection server daemon (20.229.252.112:47288). Apr 13 19:26:48.768119 sshd[2158]: Accepted publickey for core from 20.229.252.112 port 47288 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:26:48.768925 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:48.773149 systemd-logind[1714]: New session 3 of user core. Apr 13 19:26:48.784855 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:26:49.549152 systemd[1]: Started sshd@1-10.0.0.31:22-20.229.252.112:47294.service - OpenSSH per-connection server daemon (20.229.252.112:47294). Apr 13 19:26:50.459012 sshd[2163]: Accepted publickey for core from 20.229.252.112 port 47294 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:26:50.459800 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:50.463605 systemd-logind[1714]: New session 4 of user core. Apr 13 19:26:50.472766 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 13 19:26:51.092813 sshd[2163]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:51.096368 systemd-logind[1714]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:26:51.096802 systemd[1]: sshd@1-10.0.0.31:22-20.229.252.112:47294.service: Deactivated successfully. Apr 13 19:26:51.098579 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:26:51.099474 systemd-logind[1714]: Removed session 4. Apr 13 19:26:51.250742 systemd[1]: Started sshd@2-10.0.0.31:22-20.229.252.112:47310.service - OpenSSH per-connection server daemon (20.229.252.112:47310). Apr 13 19:26:52.166612 sshd[2170]: Accepted publickey for core from 20.229.252.112 port 47310 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:26:52.167515 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:52.171103 systemd-logind[1714]: New session 5 of user core. Apr 13 19:26:52.181720 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:26:52.800437 sshd[2170]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:52.803155 systemd-logind[1714]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:26:52.803418 systemd[1]: sshd@2-10.0.0.31:22-20.229.252.112:47310.service: Deactivated successfully. Apr 13 19:26:52.805179 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:26:52.806888 systemd-logind[1714]: Removed session 5. Apr 13 19:26:52.950739 systemd[1]: Started sshd@3-10.0.0.31:22-20.229.252.112:47326.service - OpenSSH per-connection server daemon (20.229.252.112:47326). Apr 13 19:26:53.829614 sshd[2177]: Accepted publickey for core from 20.229.252.112 port 47326 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:26:53.830414 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:53.833906 systemd-logind[1714]: New session 6 of user core. 
Apr 13 19:26:53.843720 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:26:54.443303 sshd[2177]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:54.446486 systemd[1]: sshd@3-10.0.0.31:22-20.229.252.112:47326.service: Deactivated successfully.
Apr 13 19:26:54.448134 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:26:54.448973 systemd-logind[1714]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:26:54.451851 systemd-logind[1714]: Removed session 6.
Apr 13 19:26:54.598359 systemd[1]: Started sshd@4-10.0.0.31:22-20.229.252.112:47342.service - OpenSSH per-connection server daemon (20.229.252.112:47342).
Apr 13 19:26:55.485274 sshd[2184]: Accepted publickey for core from 20.229.252.112 port 47342 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c
Apr 13 19:26:55.487870 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:55.491731 systemd-logind[1714]: New session 7 of user core.
Apr 13 19:26:55.497714 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:26:55.911294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 19:26:55.916759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:26:56.381704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:26:56.386171 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:26:56.417616 kubelet[2196]: E0413 19:26:56.417439 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:26:56.419676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:26:56.419827 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:26:56.463776 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:26:56.464046 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:26:56.551226 sudo[2190]: pam_unix(sudo:session): session closed for user root
Apr 13 19:26:56.707754 sshd[2184]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:56.711668 systemd[1]: sshd@4-10.0.0.31:22-20.229.252.112:47342.service: Deactivated successfully.
Apr 13 19:26:56.713108 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 19:26:56.713746 systemd-logind[1714]: Session 7 logged out. Waiting for processes to exit.
Apr 13 19:26:56.714546 systemd-logind[1714]: Removed session 7.
Apr 13 19:26:56.861110 systemd[1]: Started sshd@5-10.0.0.31:22-20.229.252.112:59392.service - OpenSSH per-connection server daemon (20.229.252.112:59392).
Apr 13 19:26:57.251619 chronyd[1701]: Selected source PHC0
Apr 13 19:26:57.753959 sshd[2207]: Accepted publickey for core from 20.229.252.112 port 59392 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c
Apr 13 19:26:57.755297 sshd[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:57.758793 systemd-logind[1714]: New session 8 of user core.
Apr 13 19:26:57.768700 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 19:26:58.229295 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:26:58.229564 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:26:58.232464 sudo[2211]: pam_unix(sudo:session): session closed for user root
Apr 13 19:26:58.236872 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:26:58.237129 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:26:58.252908 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:26:58.254176 auditctl[2214]: No rules
Apr 13 19:26:58.254614 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:26:58.254773 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:26:58.258949 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:26:58.287751 augenrules[2232]: No rules
Apr 13 19:26:58.288621 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:26:58.291087 sudo[2210]: pam_unix(sudo:session): session closed for user root
Apr 13 19:26:58.447321 sshd[2207]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:58.450887 systemd-logind[1714]: Session 8 logged out. Waiting for processes to exit.
Apr 13 19:26:58.451513 systemd[1]: sshd@5-10.0.0.31:22-20.229.252.112:59392.service: Deactivated successfully.
Apr 13 19:26:58.453262 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 19:26:58.454131 systemd-logind[1714]: Removed session 8.
Apr 13 19:26:58.602833 systemd[1]: Started sshd@6-10.0.0.31:22-20.229.252.112:59408.service - OpenSSH per-connection server daemon (20.229.252.112:59408).
Apr 13 19:26:59.475661 sshd[2240]: Accepted publickey for core from 20.229.252.112 port 59408 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c
Apr 13 19:26:59.476921 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:59.480631 systemd-logind[1714]: New session 9 of user core.
Apr 13 19:26:59.494753 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 19:26:59.945946 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:26:59.946209 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:27:01.350877 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:27:01.350942 (dockerd)[2259]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:27:02.034220 dockerd[2259]: time="2026-04-13T19:27:02.033818387Z" level=info msg="Starting up"
Apr 13 19:27:02.526001 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3694012182-merged.mount: Deactivated successfully.
Apr 13 19:27:02.593043 dockerd[2259]: time="2026-04-13T19:27:02.592987598Z" level=info msg="Loading containers: start."
Apr 13 19:27:02.833612 kernel: Initializing XFRM netlink socket
Apr 13 19:27:03.008819 systemd-networkd[1363]: docker0: Link UP
Apr 13 19:27:03.034606 dockerd[2259]: time="2026-04-13T19:27:03.034566677Z" level=info msg="Loading containers: done."
Apr 13 19:27:03.059808 dockerd[2259]: time="2026-04-13T19:27:03.059743891Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:27:03.059950 dockerd[2259]: time="2026-04-13T19:27:03.059874131Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:27:03.060010 dockerd[2259]: time="2026-04-13T19:27:03.059989490Z" level=info msg="Daemon has completed initialization"
Apr 13 19:27:03.148745 dockerd[2259]: time="2026-04-13T19:27:03.148121459Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:27:03.148625 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:27:03.547137 containerd[1736]: time="2026-04-13T19:27:03.546808691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\""
Apr 13 19:27:04.603362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373667844.mount: Deactivated successfully.
Apr 13 19:27:05.906071 containerd[1736]: time="2026-04-13T19:27:05.906005731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:05.909678 containerd[1736]: time="2026-04-13T19:27:05.909641841Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=24595411"
Apr 13 19:27:05.913810 containerd[1736]: time="2026-04-13T19:27:05.913756350Z" level=info msg="ImageCreate event name:\"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:05.925542 containerd[1736]: time="2026-04-13T19:27:05.923876764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:05.925542 containerd[1736]: time="2026-04-13T19:27:05.924984161Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"24592010\" in 2.37813643s"
Apr 13 19:27:05.925542 containerd[1736]: time="2026-04-13T19:27:05.925013481Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\""
Apr 13 19:27:05.925979 containerd[1736]: time="2026-04-13T19:27:05.925954278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\""
Apr 13 19:27:06.661232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 19:27:06.667737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:06.774166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:06.778148 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:27:06.815427 kubelet[2462]: E0413 19:27:06.815363 2462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:27:06.819037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:27:06.819492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:27:07.579616 containerd[1736]: time="2026-04-13T19:27:07.578700816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:07.584062 containerd[1736]: time="2026-04-13T19:27:07.584025162Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=19064095"
Apr 13 19:27:07.587568 containerd[1736]: time="2026-04-13T19:27:07.587520313Z" level=info msg="ImageCreate event name:\"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:07.594475 containerd[1736]: time="2026-04-13T19:27:07.594430774Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"20569814\" in 1.668434776s"
Apr 13 19:27:07.594722 containerd[1736]: time="2026-04-13T19:27:07.594631654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\""
Apr 13 19:27:07.594722 containerd[1736]: time="2026-04-13T19:27:07.594606614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:07.596790 containerd[1736]: time="2026-04-13T19:27:07.595283572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\""
Apr 13 19:27:08.710852 containerd[1736]: time="2026-04-13T19:27:08.709800004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:08.713743 containerd[1736]: time="2026-04-13T19:27:08.713702593Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=13797897"
Apr 13 19:27:08.717796 containerd[1736]: time="2026-04-13T19:27:08.717770383Z" level=info msg="ImageCreate event name:\"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:08.723451 containerd[1736]: time="2026-04-13T19:27:08.723409048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:08.724416 containerd[1736]: time="2026-04-13T19:27:08.724382165Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"15303634\" in 1.129061193s"
Apr 13 19:27:08.724416 containerd[1736]: time="2026-04-13T19:27:08.724414125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\""
Apr 13 19:27:08.725652 containerd[1736]: time="2026-04-13T19:27:08.725627242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\""
Apr 13 19:27:10.239640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189380796.mount: Deactivated successfully.
Apr 13 19:27:10.490200 containerd[1736]: time="2026-04-13T19:27:10.490137326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:10.495079 containerd[1736]: time="2026-04-13T19:27:10.494852433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=22329585"
Apr 13 19:27:10.501094 containerd[1736]: time="2026-04-13T19:27:10.500803258Z" level=info msg="ImageCreate event name:\"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:10.507911 containerd[1736]: time="2026-04-13T19:27:10.507862479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:10.508745 containerd[1736]: time="2026-04-13T19:27:10.508713677Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"22328604\" in 1.782972115s"
Apr 13 19:27:10.508745 containerd[1736]: time="2026-04-13T19:27:10.508745357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\""
Apr 13 19:27:10.509389 containerd[1736]: time="2026-04-13T19:27:10.509212716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 13 19:27:11.413269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460110329.mount: Deactivated successfully.
Apr 13 19:27:12.635918 containerd[1736]: time="2026-04-13T19:27:12.635871888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:12.640372 containerd[1736]: time="2026-04-13T19:27:12.640340956Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Apr 13 19:27:12.647690 containerd[1736]: time="2026-04-13T19:27:12.647618577Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:12.654301 containerd[1736]: time="2026-04-13T19:27:12.654257800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:12.655809 containerd[1736]: time="2026-04-13T19:27:12.655335797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.146091081s"
Apr 13 19:27:12.655809 containerd[1736]: time="2026-04-13T19:27:12.655367557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 13 19:27:12.655916 containerd[1736]: time="2026-04-13T19:27:12.655824395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 19:27:13.296095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785386244.mount: Deactivated successfully.
Apr 13 19:27:13.318912 containerd[1736]: time="2026-04-13T19:27:13.318857018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:13.323006 containerd[1736]: time="2026-04-13T19:27:13.322770289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Apr 13 19:27:13.327613 containerd[1736]: time="2026-04-13T19:27:13.327418837Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:13.334459 containerd[1736]: time="2026-04-13T19:27:13.334407939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:13.335185 containerd[1736]: time="2026-04-13T19:27:13.335155417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 679.307022ms"
Apr 13 19:27:13.335246 containerd[1736]: time="2026-04-13T19:27:13.335185897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 13 19:27:13.336086 containerd[1736]: time="2026-04-13T19:27:13.335608056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 13 19:27:14.737039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152148921.mount: Deactivated successfully.
Apr 13 19:27:15.058606 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 13 19:27:15.953568 containerd[1736]: time="2026-04-13T19:27:15.953267715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:15.959042 containerd[1736]: time="2026-04-13T19:27:15.958997780Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21751722"
Apr 13 19:27:15.962852 containerd[1736]: time="2026-04-13T19:27:15.962786571Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:15.969763 containerd[1736]: time="2026-04-13T19:27:15.968817516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:27:15.969763 containerd[1736]: time="2026-04-13T19:27:15.969614434Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.633979018s"
Apr 13 19:27:15.969763 containerd[1736]: time="2026-04-13T19:27:15.969641994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 13 19:27:16.911263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 19:27:16.921781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:17.046725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:17.050501 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:27:17.121222 kubelet[2635]: E0413 19:27:17.118871 2635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:27:17.121954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:27:17.122080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:27:18.388568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:18.394815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:18.421256 systemd[1]: Reloading requested from client PID 2650 ('systemctl') (unit session-9.scope)...
Apr 13 19:27:18.421274 systemd[1]: Reloading...
Apr 13 19:27:18.534610 zram_generator::config[2690]: No configuration found.
Apr 13 19:27:18.633341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:27:18.710976 systemd[1]: Reloading finished in 289 ms.
Apr 13 19:27:18.749824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:18.753230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:18.755016 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 19:27:18.755309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:18.765962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:27:18.932321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:27:18.947856 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:27:19.052238 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:27:19.494681 update_engine[1721]: I20260413 19:27:19.494609 1721 update_attempter.cc:509] Updating boot flags...
Apr 13 19:27:19.615782 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2776)
Apr 13 19:27:19.753652 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2775)
Apr 13 19:27:19.864627 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2775)
Apr 13 19:27:20.207542 kubelet[2759]: I0413 19:27:20.207479 2759 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 19:27:20.207542 kubelet[2759]: I0413 19:27:20.207532 2759 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:27:20.209077 kubelet[2759]: I0413 19:27:20.209053 2759 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 19:27:20.209077 kubelet[2759]: I0413 19:27:20.209074 2759 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 19:27:20.209358 kubelet[2759]: I0413 19:27:20.209343 2759 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 19:27:20.222030 kubelet[2759]: E0413 19:27:20.221985 2759 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 19:27:20.222254 kubelet[2759]: I0413 19:27:20.222240 2759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:27:20.225217 kubelet[2759]: E0413 19:27:20.225183 2759 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:27:20.225300 kubelet[2759]: I0413 19:27:20.225229 2759 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:27:20.229397 kubelet[2759]: I0413 19:27:20.229096 2759 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 19:27:20.230128 kubelet[2759]: I0413 19:27:20.230094 2759 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:27:20.230352 kubelet[2759]: I0413 19:27:20.230202 2759 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-e37b9c2d0c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 19:27:20.230493 kubelet[2759]: I0413 19:27:20.230480 2759 topology_manager.go:143] "Creating topology manager with none policy"
Apr 13 19:27:20.230546 kubelet[2759]: I0413 19:27:20.230538 2759 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 19:27:20.230894 kubelet[2759]: I0413 19:27:20.230701 2759 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 19:27:20.238770 kubelet[2759]: I0413 19:27:20.238427 2759 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 19:27:20.238770 kubelet[2759]: I0413 19:27:20.238605 2759 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 19:27:20.238770 kubelet[2759]: I0413 19:27:20.238620 2759 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:27:20.238770 kubelet[2759]: I0413 19:27:20.238635 2759 kubelet.go:394] "Adding apiserver pod source"
Apr 13 19:27:20.238770 kubelet[2759]: I0413 19:27:20.238643 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:27:20.242192 kubelet[2759]: I0413 19:27:20.242172 2759 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:27:20.243164 kubelet[2759]: I0413 19:27:20.243143 2759 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:27:20.243297 kubelet[2759]: I0413 19:27:20.243284 2759 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 19:27:20.243392 kubelet[2759]: W0413 19:27:20.243382 2759 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 19:27:20.245904 kubelet[2759]: I0413 19:27:20.245889 2759 server.go:1257] "Started kubelet"
Apr 13 19:27:20.247454 kubelet[2759]: I0413 19:27:20.247297 2759 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 13 19:27:20.253905 kubelet[2759]: I0413 19:27:20.253856 2759 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:27:20.257353 kubelet[2759]: I0413 19:27:20.257283 2759 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 19:27:20.259899 kubelet[2759]: E0413 19:27:20.256765 2759 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-e37b9c2d0c.18a60137576d6342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-e37b9c2d0c,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-e37b9c2d0c,},FirstTimestamp:2026-04-13 19:27:20.245855042 +0000 UTC m=+1.295399544,LastTimestamp:2026-04-13 19:27:20.245855042 +0000 UTC m=+1.295399544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-e37b9c2d0c,}"
Apr 13 19:27:20.259899 kubelet[2759]: I0413 19:27:20.258504 2759 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:27:20.259899 kubelet[2759]: I0413 19:27:20.258553 2759 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 19:27:20.259899 kubelet[2759]: I0413 19:27:20.258767 2759 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:27:20.259899 kubelet[2759]: I0413 19:27:20.258963 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:27:20.261788 kubelet[2759]: I0413 19:27:20.260350 2759 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 13 19:27:20.261788 kubelet[2759]: E0413 19:27:20.260519 2759 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found"
Apr 13 19:27:20.261788 kubelet[2759]: I0413 19:27:20.260765 2759 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 19:27:20.261788 kubelet[2759]: I0413 19:27:20.260920 2759 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 19:27:20.261788 kubelet[2759]: E0413 19:27:20.261001 2759 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-e37b9c2d0c?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms"
Apr 13 19:27:20.263772 kubelet[2759]: I0413 19:27:20.263741 2759 factory.go:223] Registration of the containerd container factory successfully
Apr 13 19:27:20.263875 kubelet[2759]: I0413 19:27:20.263865 2759 factory.go:223] Registration of the systemd container factory successfully
Apr 13 19:27:20.264001 kubelet[2759]: I0413 19:27:20.263985 2759 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 19:27:20.268951 kubelet[2759]: E0413 19:27:20.268923 2759 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 19:27:20.305196 kubelet[2759]: I0413 19:27:20.305149 2759 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 19:27:20.306477 kubelet[2759]: I0413 19:27:20.306439 2759 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 19:27:20.306477 kubelet[2759]: I0413 19:27:20.306470 2759 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 13 19:27:20.306670 kubelet[2759]: I0413 19:27:20.306498 2759 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 13 19:27:20.306670 kubelet[2759]: E0413 19:27:20.306541 2759 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 19:27:20.349662 kubelet[2759]: I0413 19:27:20.349640 2759 cpu_manager.go:225] "Starting" policy="none"
Apr 13 19:27:20.349939 kubelet[2759]: I0413 19:27:20.349788 2759 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 13 19:27:20.349939 kubelet[2759]: I0413 19:27:20.349808 2759 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 13 19:27:20.356736 kubelet[2759]: I0413 19:27:20.356515 2759 policy_none.go:50] "Start"
Apr 13 19:27:20.356736 kubelet[2759]: I0413 19:27:20.356532 2759 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 19:27:20.356736 kubelet[2759]: I0413 19:27:20.356543 2759 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 19:27:20.361216 kubelet[2759]: E0413 19:27:20.361201 2759 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found"
Apr 13 19:27:20.363610 kubelet[2759]: I0413 19:27:20.363416 2759 policy_none.go:44] "Start"
Apr 13 19:27:20.366923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 19:27:20.375064 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 19:27:20.378399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:27:20.385486 kubelet[2759]: E0413 19:27:20.385456 2759 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:27:20.385675 kubelet[2759]: I0413 19:27:20.385659 2759 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:27:20.385708 kubelet[2759]: I0413 19:27:20.385674 2759 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:27:20.386924 kubelet[2759]: I0413 19:27:20.386224 2759 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:27:20.387853 kubelet[2759]: E0413 19:27:20.387813 2759 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:27:20.387976 kubelet[2759]: E0413 19:27:20.387860 2759 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.7-a-e37b9c2d0c\" not found" Apr 13 19:27:20.420295 systemd[1]: Created slice kubepods-burstable-podf5211d3867567b222861394eb268f14c.slice - libcontainer container kubepods-burstable-podf5211d3867567b222861394eb268f14c.slice. Apr 13 19:27:20.435280 kubelet[2759]: E0413 19:27:20.435239 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.439827 systemd[1]: Created slice kubepods-burstable-podba81f78060e0988fc2a8550c8a44dc69.slice - libcontainer container kubepods-burstable-podba81f78060e0988fc2a8550c8a44dc69.slice. 
Apr 13 19:27:20.447686 kubelet[2759]: E0413 19:27:20.447662 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.449492 systemd[1]: Created slice kubepods-burstable-pod212ede7b9c88a4926f5709aff50259c3.slice - libcontainer container kubepods-burstable-pod212ede7b9c88a4926f5709aff50259c3.slice. Apr 13 19:27:20.451266 kubelet[2759]: E0413 19:27:20.451248 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.461711 kubelet[2759]: E0413 19:27:20.461632 2759 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-e37b9c2d0c?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Apr 13 19:27:20.487507 kubelet[2759]: I0413 19:27:20.487140 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.487507 kubelet[2759]: E0413 19:27:20.487471 2759 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.561983 kubelet[2759]: I0413 19:27:20.561962 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562140 kubelet[2759]: I0413 19:27:20.562121 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562219 kubelet[2759]: I0413 19:27:20.562207 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562415 kubelet[2759]: I0413 19:27:20.562283 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562415 kubelet[2759]: I0413 19:27:20.562304 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562415 kubelet[2759]: I0413 19:27:20.562322 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212ede7b9c88a4926f5709aff50259c3-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"212ede7b9c88a4926f5709aff50259c3\") " 
pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562415 kubelet[2759]: I0413 19:27:20.562336 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562415 kubelet[2759]: I0413 19:27:20.562366 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.562544 kubelet[2759]: I0413 19:27:20.562387 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.689642 kubelet[2759]: I0413 19:27:20.689610 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.689947 kubelet[2759]: E0413 19:27:20.689920 2759 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:20.744901 containerd[1736]: time="2026-04-13T19:27:20.744797187Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-e37b9c2d0c,Uid:f5211d3867567b222861394eb268f14c,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:20.755437 containerd[1736]: time="2026-04-13T19:27:20.755404561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c,Uid:ba81f78060e0988fc2a8550c8a44dc69,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:20.762494 containerd[1736]: time="2026-04-13T19:27:20.762466663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-e37b9c2d0c,Uid:212ede7b9c88a4926f5709aff50259c3,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:20.862601 kubelet[2759]: E0413 19:27:20.862543 2759 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-e37b9c2d0c?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Apr 13 19:27:21.092057 kubelet[2759]: I0413 19:27:21.091958 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:21.092303 kubelet[2759]: E0413 19:27:21.092280 2759 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:21.554534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545563307.mount: Deactivated successfully. 
Apr 13 19:27:21.585642 containerd[1736]: time="2026-04-13T19:27:21.585578278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:21.589276 containerd[1736]: time="2026-04-13T19:27:21.589236548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:27:21.592626 containerd[1736]: time="2026-04-13T19:27:21.592580419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:21.596611 containerd[1736]: time="2026-04-13T19:27:21.596239089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:21.600018 containerd[1736]: time="2026-04-13T19:27:21.599808879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:27:21.603446 containerd[1736]: time="2026-04-13T19:27:21.603410189Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:21.606555 containerd[1736]: time="2026-04-13T19:27:21.606497700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:27:21.612483 containerd[1736]: time="2026-04-13T19:27:21.612434604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:27:21.613609 
containerd[1736]: time="2026-04-13T19:27:21.613220762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 857.750441ms" Apr 13 19:27:21.615619 containerd[1736]: time="2026-04-13T19:27:21.615569955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 853.047532ms" Apr 13 19:27:21.616228 containerd[1736]: time="2026-04-13T19:27:21.616201914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 870.974168ms" Apr 13 19:27:21.663235 kubelet[2759]: E0413 19:27:21.663193 2759 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-e37b9c2d0c?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Apr 13 19:27:21.894610 kubelet[2759]: I0413 19:27:21.894153 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:21.894610 kubelet[2759]: E0413 19:27:21.894503 2759 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:22.349398 
kubelet[2759]: E0413 19:27:22.349350 2759 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:27:22.362637 containerd[1736]: time="2026-04-13T19:27:22.360167975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:22.362637 containerd[1736]: time="2026-04-13T19:27:22.361688051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:22.362637 containerd[1736]: time="2026-04-13T19:27:22.361711571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.362637 containerd[1736]: time="2026-04-13T19:27:22.361791491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.365609 containerd[1736]: time="2026-04-13T19:27:22.360870893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:22.365609 containerd[1736]: time="2026-04-13T19:27:22.360940373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:22.365609 containerd[1736]: time="2026-04-13T19:27:22.360965333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.365609 containerd[1736]: time="2026-04-13T19:27:22.361063173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.367120 containerd[1736]: time="2026-04-13T19:27:22.366851997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:22.367120 containerd[1736]: time="2026-04-13T19:27:22.366898877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:22.367120 containerd[1736]: time="2026-04-13T19:27:22.366921237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.367120 containerd[1736]: time="2026-04-13T19:27:22.366984957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:22.388769 systemd[1]: Started cri-containerd-344e6a68585d93894183e064dd26c8efe8c90db3522727a5db0b4129807e6277.scope - libcontainer container 344e6a68585d93894183e064dd26c8efe8c90db3522727a5db0b4129807e6277. Apr 13 19:27:22.393319 systemd[1]: Started cri-containerd-7a105db868ae5766010da52b7f8775a9ddd6fe007dc6400fd627a5ea4af16880.scope - libcontainer container 7a105db868ae5766010da52b7f8775a9ddd6fe007dc6400fd627a5ea4af16880. Apr 13 19:27:22.395641 systemd[1]: Started cri-containerd-af35071520c5ab12e7bd5b88a95b39837e05ed5299793e99f14a8ecd28eedb6f.scope - libcontainer container af35071520c5ab12e7bd5b88a95b39837e05ed5299793e99f14a8ecd28eedb6f. 
Apr 13 19:27:22.432471 containerd[1736]: time="2026-04-13T19:27:22.431552538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c,Uid:ba81f78060e0988fc2a8550c8a44dc69,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a105db868ae5766010da52b7f8775a9ddd6fe007dc6400fd627a5ea4af16880\"" Apr 13 19:27:22.447069 containerd[1736]: time="2026-04-13T19:27:22.447034535Z" level=info msg="CreateContainer within sandbox \"7a105db868ae5766010da52b7f8775a9ddd6fe007dc6400fd627a5ea4af16880\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:27:22.447422 containerd[1736]: time="2026-04-13T19:27:22.447338534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-e37b9c2d0c,Uid:212ede7b9c88a4926f5709aff50259c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"344e6a68585d93894183e064dd26c8efe8c90db3522727a5db0b4129807e6277\"" Apr 13 19:27:22.455053 containerd[1736]: time="2026-04-13T19:27:22.455008873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-e37b9c2d0c,Uid:f5211d3867567b222861394eb268f14c,Namespace:kube-system,Attempt:0,} returns sandbox id \"af35071520c5ab12e7bd5b88a95b39837e05ed5299793e99f14a8ecd28eedb6f\"" Apr 13 19:27:22.457456 containerd[1736]: time="2026-04-13T19:27:22.457260427Z" level=info msg="CreateContainer within sandbox \"344e6a68585d93894183e064dd26c8efe8c90db3522727a5db0b4129807e6277\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:27:22.464184 containerd[1736]: time="2026-04-13T19:27:22.464159608Z" level=info msg="CreateContainer within sandbox \"af35071520c5ab12e7bd5b88a95b39837e05ed5299793e99f14a8ecd28eedb6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:27:22.524096 containerd[1736]: time="2026-04-13T19:27:22.524047122Z" level=info msg="CreateContainer within sandbox 
\"7a105db868ae5766010da52b7f8775a9ddd6fe007dc6400fd627a5ea4af16880\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb23bead455a32a3664828bbbd3e6e8769de04892c0a8a4c1558f3779c7aa787\"" Apr 13 19:27:22.525388 containerd[1736]: time="2026-04-13T19:27:22.525363638Z" level=info msg="StartContainer for \"eb23bead455a32a3664828bbbd3e6e8769de04892c0a8a4c1558f3779c7aa787\"" Apr 13 19:27:22.542638 containerd[1736]: time="2026-04-13T19:27:22.542604551Z" level=info msg="CreateContainer within sandbox \"344e6a68585d93894183e064dd26c8efe8c90db3522727a5db0b4129807e6277\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3059d85b5f61c83298003285754bb28393416a39f1b096683d0b930dbe7ca97\"" Apr 13 19:27:22.543604 containerd[1736]: time="2026-04-13T19:27:22.543163909Z" level=info msg="StartContainer for \"e3059d85b5f61c83298003285754bb28393416a39f1b096683d0b930dbe7ca97\"" Apr 13 19:27:22.548761 systemd[1]: Started cri-containerd-eb23bead455a32a3664828bbbd3e6e8769de04892c0a8a4c1558f3779c7aa787.scope - libcontainer container eb23bead455a32a3664828bbbd3e6e8769de04892c0a8a4c1558f3779c7aa787. Apr 13 19:27:22.560123 containerd[1736]: time="2026-04-13T19:27:22.560094062Z" level=info msg="CreateContainer within sandbox \"af35071520c5ab12e7bd5b88a95b39837e05ed5299793e99f14a8ecd28eedb6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42fe97ed1bb11b706ec5590c1c2b0a386d160fc51669eca46ed0a52c74a6fac8\"" Apr 13 19:27:22.562620 containerd[1736]: time="2026-04-13T19:27:22.561001020Z" level=info msg="StartContainer for \"42fe97ed1bb11b706ec5590c1c2b0a386d160fc51669eca46ed0a52c74a6fac8\"" Apr 13 19:27:22.592765 systemd[1]: Started cri-containerd-e3059d85b5f61c83298003285754bb28393416a39f1b096683d0b930dbe7ca97.scope - libcontainer container e3059d85b5f61c83298003285754bb28393416a39f1b096683d0b930dbe7ca97. 
Apr 13 19:27:22.608998 systemd[1]: Started cri-containerd-42fe97ed1bb11b706ec5590c1c2b0a386d160fc51669eca46ed0a52c74a6fac8.scope - libcontainer container 42fe97ed1bb11b706ec5590c1c2b0a386d160fc51669eca46ed0a52c74a6fac8. Apr 13 19:27:22.619336 containerd[1736]: time="2026-04-13T19:27:22.619280899Z" level=info msg="StartContainer for \"eb23bead455a32a3664828bbbd3e6e8769de04892c0a8a4c1558f3779c7aa787\" returns successfully" Apr 13 19:27:22.664372 containerd[1736]: time="2026-04-13T19:27:22.664112855Z" level=info msg="StartContainer for \"e3059d85b5f61c83298003285754bb28393416a39f1b096683d0b930dbe7ca97\" returns successfully" Apr 13 19:27:22.674878 containerd[1736]: time="2026-04-13T19:27:22.674837945Z" level=info msg="StartContainer for \"42fe97ed1bb11b706ec5590c1c2b0a386d160fc51669eca46ed0a52c74a6fac8\" returns successfully" Apr 13 19:27:23.319614 kubelet[2759]: E0413 19:27:23.319158 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:23.322868 kubelet[2759]: E0413 19:27:23.322844 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:23.324476 kubelet[2759]: E0413 19:27:23.324291 2759 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:23.496829 kubelet[2759]: I0413 19:27:23.496804 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:23.882174 kubelet[2759]: E0413 19:27:23.882139 2759 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.7-a-e37b9c2d0c\" not found" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:23.966377 kubelet[2759]: 
I0413 19:27:23.966341 2759 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.061169 kubelet[2759]: I0413 19:27:24.061126 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.073292 kubelet[2759]: E0413 19:27:24.073251 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.073292 kubelet[2759]: I0413 19:27:24.073281 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.075229 kubelet[2759]: E0413 19:27:24.075202 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.075229 kubelet[2759]: I0413 19:27:24.075225 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.077024 kubelet[2759]: E0413 19:27:24.077001 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.243163 kubelet[2759]: I0413 19:27:24.243115 2759 apiserver.go:52] "Watching apiserver" Apr 13 19:27:24.261291 kubelet[2759]: I0413 19:27:24.261256 2759 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:27:24.324544 kubelet[2759]: I0413 19:27:24.324401 2759 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.324544 kubelet[2759]: I0413 19:27:24.324440 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.326615 kubelet[2759]: E0413 19:27:24.326528 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:24.326779 kubelet[2759]: E0413 19:27:24.326762 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:25.996465 systemd[1]: Reloading requested from client PID 3141 ('systemctl') (unit session-9.scope)... Apr 13 19:27:25.996878 systemd[1]: Reloading... Apr 13 19:27:26.025350 kubelet[2759]: I0413 19:27:26.025323 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.034493 kubelet[2759]: I0413 19:27:26.034461 2759 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:26.084619 zram_generator::config[3178]: No configuration found. Apr 13 19:27:26.212952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:27:26.306305 systemd[1]: Reloading finished in 309 ms. Apr 13 19:27:26.338123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:27:26.350685 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 13 19:27:26.351074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:27:26.351190 systemd[1]: kubelet.service: Consumed 1.391s CPU time, 125.8M memory peak, 0B memory swap peak. Apr 13 19:27:26.354848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:27:26.462434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:27:26.467030 (kubelet)[3245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:27:26.508030 kubelet[3245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:27:26.516906 kubelet[3245]: I0413 19:27:26.516846 3245 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 19:27:26.516906 kubelet[3245]: I0413 19:27:26.516894 3245 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:27:26.516906 kubelet[3245]: I0413 19:27:26.516918 3245 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:27:26.517064 kubelet[3245]: I0413 19:27:26.516923 3245 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:27:26.517362 kubelet[3245]: I0413 19:27:26.517344 3245 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 19:27:26.519410 kubelet[3245]: I0413 19:27:26.519388 3245 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:27:26.521417 kubelet[3245]: I0413 19:27:26.521387 3245 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:27:26.523995 kubelet[3245]: E0413 19:27:26.523903 3245 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:27:26.524084 kubelet[3245]: I0413 19:27:26.524032 3245 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:27:26.526761 kubelet[3245]: I0413 19:27:26.526744 3245 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:27:26.526926 kubelet[3245]: I0413 19:27:26.526900 3245 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:27:26.527064 kubelet[3245]: I0413 19:27:26.526924 3245 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-e37b9c2d0c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:27:26.527148 kubelet[3245]: I0413 19:27:26.527067 3245 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 
19:27:26.527148 kubelet[3245]: I0413 19:27:26.527076 3245 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 19:27:26.527148 kubelet[3245]: I0413 19:27:26.527095 3245 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:27:26.527272 kubelet[3245]: I0413 19:27:26.527258 3245 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 19:27:26.527381 kubelet[3245]: I0413 19:27:26.527370 3245 kubelet.go:482] "Attempting to sync node with API server" Apr 13 19:27:26.527411 kubelet[3245]: I0413 19:27:26.527384 3245 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:27:26.527411 kubelet[3245]: I0413 19:27:26.527400 3245 kubelet.go:394] "Adding apiserver pod source" Apr 13 19:27:26.527411 kubelet[3245]: I0413 19:27:26.527408 3245 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:27:26.535658 kubelet[3245]: I0413 19:27:26.531897 3245 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:27:26.535658 kubelet[3245]: I0413 19:27:26.532683 3245 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:27:26.535658 kubelet[3245]: I0413 19:27:26.532711 3245 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:27:26.537380 kubelet[3245]: I0413 19:27:26.536821 3245 server.go:1257] "Started kubelet" Apr 13 19:27:26.539614 kubelet[3245]: I0413 19:27:26.539582 3245 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 19:27:26.545074 kubelet[3245]: I0413 19:27:26.544502 3245 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:27:26.546290 kubelet[3245]: I0413 19:27:26.546161 3245 server.go:317] "Adding debug handlers 
to kubelet server" Apr 13 19:27:26.547622 kubelet[3245]: E0413 19:27:26.547582 3245 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:27:26.547868 kubelet[3245]: I0413 19:27:26.547829 3245 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:27:26.547980 kubelet[3245]: I0413 19:27:26.547968 3245 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:27:26.548248 kubelet[3245]: I0413 19:27:26.548193 3245 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:27:26.550576 kubelet[3245]: I0413 19:27:26.550549 3245 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 19:27:26.554636 kubelet[3245]: I0413 19:27:26.554615 3245 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:27:26.555051 kubelet[3245]: E0413 19:27:26.555029 3245 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-e37b9c2d0c\" not found" Apr 13 19:27:26.570223 kubelet[3245]: I0413 19:27:26.567929 3245 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:27:26.570223 kubelet[3245]: I0413 19:27:26.568050 3245 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:27:26.577126 kubelet[3245]: I0413 19:27:26.576782 3245 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:27:26.581857 kubelet[3245]: I0413 19:27:26.581818 3245 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:27:26.581857 kubelet[3245]: I0413 19:27:26.581848 3245 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 19:27:26.581857 kubelet[3245]: I0413 19:27:26.581868 3245 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 19:27:26.582034 kubelet[3245]: E0413 19:27:26.581914 3245 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:27:26.582034 kubelet[3245]: I0413 19:27:26.577637 3245 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:27:26.592683 kubelet[3245]: I0413 19:27:26.592584 3245 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:27:26.593161 kubelet[3245]: I0413 19:27:26.593142 3245 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:27:26.637749 kubelet[3245]: I0413 19:27:26.637727 3245 cpu_manager.go:225] "Starting" policy="none" Apr 13 19:27:26.637890 kubelet[3245]: I0413 19:27:26.637877 3245 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 19:27:26.638659 kubelet[3245]: I0413 19:27:26.638644 3245 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 19:27:26.638847 kubelet[3245]: I0413 19:27:26.638833 3245 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 19:27:26.638921 kubelet[3245]: I0413 19:27:26.638899 3245 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 19:27:26.638969 kubelet[3245]: I0413 19:27:26.638962 3245 policy_none.go:50] "Start" Apr 13 19:27:26.639024 kubelet[3245]: I0413 19:27:26.639016 3245 memory_manager.go:187] "Starting memorymanager" 
policy="None" Apr 13 19:27:26.639190 kubelet[3245]: I0413 19:27:26.639129 3245 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:27:26.639508 kubelet[3245]: I0413 19:27:26.639495 3245 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:27:26.639608 kubelet[3245]: I0413 19:27:26.639599 3245 policy_none.go:44] "Start" Apr 13 19:27:26.646381 kubelet[3245]: E0413 19:27:26.646365 3245 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:27:26.646826 kubelet[3245]: I0413 19:27:26.646808 3245 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:27:26.646932 kubelet[3245]: I0413 19:27:26.646903 3245 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:27:26.647185 kubelet[3245]: I0413 19:27:26.647174 3245 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:27:26.648724 kubelet[3245]: E0413 19:27:26.648709 3245 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:27:26.683100 kubelet[3245]: I0413 19:27:26.683071 3245 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.683806 kubelet[3245]: I0413 19:27:26.683787 3245 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.684159 kubelet[3245]: I0413 19:27:26.683949 3245 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.692900 kubelet[3245]: I0413 19:27:26.692408 3245 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:26.698131 kubelet[3245]: I0413 19:27:26.697864 3245 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:26.698131 kubelet[3245]: E0413 19:27:26.697963 3245 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.698271 kubelet[3245]: I0413 19:27:26.698173 3245 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:26.749614 kubelet[3245]: I0413 19:27:26.749352 3245 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.761987 kubelet[3245]: I0413 19:27:26.761947 3245 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.762126 kubelet[3245]: I0413 19:27:26.762036 3245 kubelet_node_status.go:77] "Successfully registered node" 
node="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.768349 kubelet[3245]: I0413 19:27:26.768246 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.768349 kubelet[3245]: I0413 19:27:26.768279 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.768349 kubelet[3245]: I0413 19:27:26.768295 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5211d3867567b222861394eb268f14c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"f5211d3867567b222861394eb268f14c\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869126 kubelet[3245]: I0413 19:27:26.868770 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869126 kubelet[3245]: I0413 19:27:26.868819 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869126 kubelet[3245]: I0413 19:27:26.868851 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869126 kubelet[3245]: I0413 19:27:26.868867 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869126 kubelet[3245]: I0413 19:27:26.868885 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba81f78060e0988fc2a8550c8a44dc69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"ba81f78060e0988fc2a8550c8a44dc69\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:26.869322 kubelet[3245]: I0413 19:27:26.868902 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212ede7b9c88a4926f5709aff50259c3-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" (UID: \"212ede7b9c88a4926f5709aff50259c3\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:27.528123 kubelet[3245]: I0413 19:27:27.527847 3245 apiserver.go:52] 
"Watching apiserver" Apr 13 19:27:27.568514 kubelet[3245]: I0413 19:27:27.568479 3245 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:27:27.608774 kubelet[3245]: I0413 19:27:27.608558 3245 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:27.609097 kubelet[3245]: I0413 19:27:27.609012 3245 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:27.624376 kubelet[3245]: I0413 19:27:27.623693 3245 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:27.624376 kubelet[3245]: E0413 19:27:27.623745 3245 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-e37b9c2d0c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:27.624376 kubelet[3245]: I0413 19:27:27.624007 3245 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 19:27:27.624376 kubelet[3245]: E0413 19:27:27.624033 3245 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-e37b9c2d0c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:27:28.655753 kubelet[3245]: I0413 19:27:28.654651 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-e37b9c2d0c" podStartSLOduration=2.654625562 podStartE2EDuration="2.654625562s" podCreationTimestamp="2026-04-13 19:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:28.643292274 +0000 UTC m=+2.172584071" 
watchObservedRunningTime="2026-04-13 19:27:28.654625562 +0000 UTC m=+2.183917359" Apr 13 19:27:28.677317 kubelet[3245]: I0413 19:27:28.677262 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.7-a-e37b9c2d0c" podStartSLOduration=2.6772499400000003 podStartE2EDuration="2.67724994s" podCreationTimestamp="2026-04-13 19:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:28.657817233 +0000 UTC m=+2.187109030" watchObservedRunningTime="2026-04-13 19:27:28.67724994 +0000 UTC m=+2.206541697" Apr 13 19:27:30.970820 kubelet[3245]: I0413 19:27:30.970635 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.7-a-e37b9c2d0c" podStartSLOduration=4.97062023 podStartE2EDuration="4.97062023s" podCreationTimestamp="2026-04-13 19:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:28.677468859 +0000 UTC m=+2.206760656" watchObservedRunningTime="2026-04-13 19:27:30.97062023 +0000 UTC m=+4.499912027" Apr 13 19:27:31.090632 kubelet[3245]: I0413 19:27:31.090342 3245 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:27:31.090860 containerd[1736]: time="2026-04-13T19:27:31.090815595Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:27:31.091130 kubelet[3245]: I0413 19:27:31.090978 3245 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:27:32.169193 systemd[1]: Created slice kubepods-besteffort-pod63d82b6c_4eb3_4c25_8193_189a4c0df1cb.slice - libcontainer container kubepods-besteffort-pod63d82b6c_4eb3_4c25_8193_189a4c0df1cb.slice. 
Apr 13 19:27:32.197844 kubelet[3245]: I0413 19:27:32.197807 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63d82b6c-4eb3-4c25-8193-189a4c0df1cb-xtables-lock\") pod \"kube-proxy-cwlm9\" (UID: \"63d82b6c-4eb3-4c25-8193-189a4c0df1cb\") " pod="kube-system/kube-proxy-cwlm9" Apr 13 19:27:32.197844 kubelet[3245]: I0413 19:27:32.197847 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d82b6c-4eb3-4c25-8193-189a4c0df1cb-lib-modules\") pod \"kube-proxy-cwlm9\" (UID: \"63d82b6c-4eb3-4c25-8193-189a4c0df1cb\") " pod="kube-system/kube-proxy-cwlm9" Apr 13 19:27:32.198216 kubelet[3245]: I0413 19:27:32.197870 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfnr\" (UniqueName: \"kubernetes.io/projected/63d82b6c-4eb3-4c25-8193-189a4c0df1cb-kube-api-access-msfnr\") pod \"kube-proxy-cwlm9\" (UID: \"63d82b6c-4eb3-4c25-8193-189a4c0df1cb\") " pod="kube-system/kube-proxy-cwlm9" Apr 13 19:27:32.198216 kubelet[3245]: I0413 19:27:32.197892 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63d82b6c-4eb3-4c25-8193-189a4c0df1cb-kube-proxy\") pod \"kube-proxy-cwlm9\" (UID: \"63d82b6c-4eb3-4c25-8193-189a4c0df1cb\") " pod="kube-system/kube-proxy-cwlm9" Apr 13 19:27:32.305867 systemd[1]: Created slice kubepods-besteffort-poda2d250bb_5929_4f3c_ab6f_e279537b0516.slice - libcontainer container kubepods-besteffort-poda2d250bb_5929_4f3c_ab6f_e279537b0516.slice. 
Apr 13 19:27:32.399843 kubelet[3245]: I0413 19:27:32.399799 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztz7d\" (UniqueName: \"kubernetes.io/projected/a2d250bb-5929-4f3c-ab6f-e279537b0516-kube-api-access-ztz7d\") pod \"tigera-operator-6cf4cccc57-j72nz\" (UID: \"a2d250bb-5929-4f3c-ab6f-e279537b0516\") " pod="tigera-operator/tigera-operator-6cf4cccc57-j72nz" Apr 13 19:27:32.399980 kubelet[3245]: I0413 19:27:32.399859 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2d250bb-5929-4f3c-ab6f-e279537b0516-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-j72nz\" (UID: \"a2d250bb-5929-4f3c-ab6f-e279537b0516\") " pod="tigera-operator/tigera-operator-6cf4cccc57-j72nz" Apr 13 19:27:32.488610 containerd[1736]: time="2026-04-13T19:27:32.488476752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cwlm9,Uid:63d82b6c-4eb3-4c25-8193-189a4c0df1cb,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:32.543493 containerd[1736]: time="2026-04-13T19:27:32.543262267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:32.543493 containerd[1736]: time="2026-04-13T19:27:32.543311027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:32.543493 containerd[1736]: time="2026-04-13T19:27:32.543325747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.543493 containerd[1736]: time="2026-04-13T19:27:32.543401786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.563733 systemd[1]: Started cri-containerd-8afbf4401e7a4ba51ae1c2d5a90830fc2523f005365caade575b222f1f1c5715.scope - libcontainer container 8afbf4401e7a4ba51ae1c2d5a90830fc2523f005365caade575b222f1f1c5715. Apr 13 19:27:32.581547 containerd[1736]: time="2026-04-13T19:27:32.581507739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cwlm9,Uid:63d82b6c-4eb3-4c25-8193-189a4c0df1cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8afbf4401e7a4ba51ae1c2d5a90830fc2523f005365caade575b222f1f1c5715\"" Apr 13 19:27:32.597971 containerd[1736]: time="2026-04-13T19:27:32.597926301Z" level=info msg="CreateContainer within sandbox \"8afbf4401e7a4ba51ae1c2d5a90830fc2523f005365caade575b222f1f1c5715\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:27:32.620082 containerd[1736]: time="2026-04-13T19:27:32.620046371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-j72nz,Uid:a2d250bb-5929-4f3c-ab6f-e279537b0516,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:27:32.670861 containerd[1736]: time="2026-04-13T19:27:32.670696535Z" level=info msg="CreateContainer within sandbox \"8afbf4401e7a4ba51ae1c2d5a90830fc2523f005365caade575b222f1f1c5715\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4f6570d9891e8d4bbef3d534101db28130848ef4830883143e0997aadf8142e\"" Apr 13 19:27:32.672604 containerd[1736]: time="2026-04-13T19:27:32.671967532Z" level=info msg="StartContainer for \"b4f6570d9891e8d4bbef3d534101db28130848ef4830883143e0997aadf8142e\"" Apr 13 19:27:32.692748 systemd[1]: Started cri-containerd-b4f6570d9891e8d4bbef3d534101db28130848ef4830883143e0997aadf8142e.scope - libcontainer container b4f6570d9891e8d4bbef3d534101db28130848ef4830883143e0997aadf8142e. Apr 13 19:27:32.714707 containerd[1736]: time="2026-04-13T19:27:32.713980876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:32.714707 containerd[1736]: time="2026-04-13T19:27:32.714048275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:32.714707 containerd[1736]: time="2026-04-13T19:27:32.714059675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.714707 containerd[1736]: time="2026-04-13T19:27:32.714181715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:32.730017 containerd[1736]: time="2026-04-13T19:27:32.728881041Z" level=info msg="StartContainer for \"b4f6570d9891e8d4bbef3d534101db28130848ef4830883143e0997aadf8142e\" returns successfully" Apr 13 19:27:32.735047 systemd[1]: Started cri-containerd-087ad1eddf752f87e772d0dfeef6412f7db5703df19587eb2173edc9bbb15898.scope - libcontainer container 087ad1eddf752f87e772d0dfeef6412f7db5703df19587eb2173edc9bbb15898. 
Apr 13 19:27:32.768094 containerd[1736]: time="2026-04-13T19:27:32.767982232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-j72nz,Uid:a2d250bb-5929-4f3c-ab6f-e279537b0516,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"087ad1eddf752f87e772d0dfeef6412f7db5703df19587eb2173edc9bbb15898\"" Apr 13 19:27:32.771727 containerd[1736]: time="2026-04-13T19:27:32.771521664Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:27:33.635049 kubelet[3245]: I0413 19:27:33.634984 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-cwlm9" podStartSLOduration=1.634972085 podStartE2EDuration="1.634972085s" podCreationTimestamp="2026-04-13 19:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:33.634815326 +0000 UTC m=+7.164107163" watchObservedRunningTime="2026-04-13 19:27:33.634972085 +0000 UTC m=+7.164263842" Apr 13 19:27:34.260875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237418166.mount: Deactivated successfully. 
Apr 13 19:27:34.767430 containerd[1736]: time="2026-04-13T19:27:34.767384651Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:34.771639 containerd[1736]: time="2026-04-13T19:27:34.771610521Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:27:34.776476 containerd[1736]: time="2026-04-13T19:27:34.776426750Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:34.782145 containerd[1736]: time="2026-04-13T19:27:34.782093537Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:34.782956 containerd[1736]: time="2026-04-13T19:27:34.782837335Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.011284191s" Apr 13 19:27:34.782956 containerd[1736]: time="2026-04-13T19:27:34.782869575Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:27:34.791694 containerd[1736]: time="2026-04-13T19:27:34.791653635Z" level=info msg="CreateContainer within sandbox \"087ad1eddf752f87e772d0dfeef6412f7db5703df19587eb2173edc9bbb15898\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:27:34.846093 containerd[1736]: time="2026-04-13T19:27:34.846034910Z" level=info msg="CreateContainer within sandbox 
\"087ad1eddf752f87e772d0dfeef6412f7db5703df19587eb2173edc9bbb15898\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7891a23455a1f6a95343ee4f60b77ad247dfd169b47b10fb6bd8ee6c42565841\"" Apr 13 19:27:34.846678 containerd[1736]: time="2026-04-13T19:27:34.846613029Z" level=info msg="StartContainer for \"7891a23455a1f6a95343ee4f60b77ad247dfd169b47b10fb6bd8ee6c42565841\"" Apr 13 19:27:34.874813 systemd[1]: Started cri-containerd-7891a23455a1f6a95343ee4f60b77ad247dfd169b47b10fb6bd8ee6c42565841.scope - libcontainer container 7891a23455a1f6a95343ee4f60b77ad247dfd169b47b10fb6bd8ee6c42565841. Apr 13 19:27:34.901642 containerd[1736]: time="2026-04-13T19:27:34.901514903Z" level=info msg="StartContainer for \"7891a23455a1f6a95343ee4f60b77ad247dfd169b47b10fb6bd8ee6c42565841\" returns successfully" Apr 13 19:27:35.651605 kubelet[3245]: I0413 19:27:35.651535 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-j72nz" podStartSLOduration=1.638686981 podStartE2EDuration="3.651523089s" podCreationTimestamp="2026-04-13 19:27:32 +0000 UTC" firstStartedPulling="2026-04-13 19:27:32.770795105 +0000 UTC m=+6.300086902" lastFinishedPulling="2026-04-13 19:27:34.783631253 +0000 UTC m=+8.312923010" observedRunningTime="2026-04-13 19:27:35.65102621 +0000 UTC m=+9.180318007" watchObservedRunningTime="2026-04-13 19:27:35.651523089 +0000 UTC m=+9.180814886" Apr 13 19:27:40.809771 sudo[2243]: pam_unix(sudo:session): session closed for user root Apr 13 19:27:40.960801 sshd[2240]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:40.966887 systemd-logind[1714]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:27:40.967570 systemd[1]: sshd@6-10.0.0.31:22-20.229.252.112:59408.service: Deactivated successfully. Apr 13 19:27:40.970700 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 13 19:27:40.971104 systemd[1]: session-9.scope: Consumed 4.029s CPU time, 151.6M memory peak, 0B memory swap peak. Apr 13 19:27:40.971911 systemd-logind[1714]: Removed session 9. Apr 13 19:27:48.874101 systemd[1]: Created slice kubepods-besteffort-pod8ec93d19_7756_4a87_b547_96502fce5e09.slice - libcontainer container kubepods-besteffort-pod8ec93d19_7756_4a87_b547_96502fce5e09.slice. Apr 13 19:27:48.895623 kubelet[3245]: I0413 19:27:48.894563 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec93d19-7756-4a87-b547-96502fce5e09-tigera-ca-bundle\") pod \"calico-typha-84cf858875-kbdsp\" (UID: \"8ec93d19-7756-4a87-b547-96502fce5e09\") " pod="calico-system/calico-typha-84cf858875-kbdsp" Apr 13 19:27:48.895623 kubelet[3245]: I0413 19:27:48.894624 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772gp\" (UniqueName: \"kubernetes.io/projected/8ec93d19-7756-4a87-b547-96502fce5e09-kube-api-access-772gp\") pod \"calico-typha-84cf858875-kbdsp\" (UID: \"8ec93d19-7756-4a87-b547-96502fce5e09\") " pod="calico-system/calico-typha-84cf858875-kbdsp" Apr 13 19:27:48.895623 kubelet[3245]: I0413 19:27:48.894659 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8ec93d19-7756-4a87-b547-96502fce5e09-typha-certs\") pod \"calico-typha-84cf858875-kbdsp\" (UID: \"8ec93d19-7756-4a87-b547-96502fce5e09\") " pod="calico-system/calico-typha-84cf858875-kbdsp" Apr 13 19:27:48.986968 systemd[1]: Created slice kubepods-besteffort-pod0064cff9_7eca_44e8_8cbb_9b462c7b8d8d.slice - libcontainer container kubepods-besteffort-pod0064cff9_7eca_44e8_8cbb_9b462c7b8d8d.slice. 
Apr 13 19:27:48.994959 kubelet[3245]: I0413 19:27:48.994813 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-policysync\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995093 kubelet[3245]: I0413 19:27:48.994966 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-sys-fs\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995093 kubelet[3245]: I0413 19:27:48.995000 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qrf\" (UniqueName: \"kubernetes.io/projected/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-kube-api-access-r2qrf\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995093 kubelet[3245]: I0413 19:27:48.995019 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-cni-net-dir\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995093 kubelet[3245]: I0413 19:27:48.995033 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-nodeproc\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995093 kubelet[3245]: I0413 19:27:48.995047 3245 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-xtables-lock\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995213 kubelet[3245]: I0413 19:27:48.995063 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-var-lib-calico\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995213 kubelet[3245]: I0413 19:27:48.995087 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-var-run-calico\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995213 kubelet[3245]: I0413 19:27:48.995112 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-cni-log-dir\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995213 kubelet[3245]: I0413 19:27:48.995127 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-tigera-ca-bundle\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995213 kubelet[3245]: I0413 19:27:48.995144 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-cni-bin-dir\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995327 kubelet[3245]: I0413 19:27:48.995158 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-bpffs\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995327 kubelet[3245]: I0413 19:27:48.995172 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-flexvol-driver-host\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995327 kubelet[3245]: I0413 19:27:48.995186 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-node-certs\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:48.995442 kubelet[3245]: I0413 19:27:48.995423 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0064cff9-7eca-44e8-8cbb-9b462c7b8d8d-lib-modules\") pod \"calico-node-wdp5z\" (UID: \"0064cff9-7eca-44e8-8cbb-9b462c7b8d8d\") " pod="calico-system/calico-node-wdp5z" Apr 13 19:27:49.101308 kubelet[3245]: E0413 19:27:49.100300 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.109935 kubelet[3245]: W0413 19:27:49.101422 3245 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.101483 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.101775 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.109935 kubelet[3245]: W0413 19:27:49.101785 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.101796 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.102252 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.109935 kubelet[3245]: W0413 19:27:49.102264 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.102276 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.109935 kubelet[3245]: E0413 19:27:49.102620 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.109935 kubelet[3245]: W0413 19:27:49.102630 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.110672 kubelet[3245]: E0413 19:27:49.102641 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.113610 kubelet[3245]: E0413 19:27:49.111934 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:49.120089 kubelet[3245]: E0413 19:27:49.120070 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.120188 kubelet[3245]: W0413 19:27:49.120174 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.120263 kubelet[3245]: E0413 19:27:49.120253 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.121621 kubelet[3245]: E0413 19:27:49.121599 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.121621 kubelet[3245]: W0413 19:27:49.121614 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.121737 kubelet[3245]: E0413 19:27:49.121628 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.121997 kubelet[3245]: E0413 19:27:49.121979 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.121997 kubelet[3245]: W0413 19:27:49.121996 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.122073 kubelet[3245]: E0413 19:27:49.122009 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.186452 containerd[1736]: time="2026-04-13T19:27:49.186409910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cf858875-kbdsp,Uid:8ec93d19-7756-4a87-b547-96502fce5e09,Namespace:calico-system,Attempt:0,}" Apr 13 19:27:49.187385 kubelet[3245]: E0413 19:27:49.187199 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.187385 kubelet[3245]: W0413 19:27:49.187237 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.187385 kubelet[3245]: E0413 19:27:49.187284 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.187858 kubelet[3245]: E0413 19:27:49.187473 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.187858 kubelet[3245]: W0413 19:27:49.187497 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.187858 kubelet[3245]: E0413 19:27:49.187508 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.187858 kubelet[3245]: E0413 19:27:49.187679 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.187858 kubelet[3245]: W0413 19:27:49.187700 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.187858 kubelet[3245]: E0413 19:27:49.187712 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.188265 kubelet[3245]: E0413 19:27:49.188132 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.188265 kubelet[3245]: W0413 19:27:49.188154 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.188265 kubelet[3245]: E0413 19:27:49.188166 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.188533 kubelet[3245]: E0413 19:27:49.188353 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.188533 kubelet[3245]: W0413 19:27:49.188362 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.188533 kubelet[3245]: E0413 19:27:49.188371 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.188805 kubelet[3245]: E0413 19:27:49.188684 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.188805 kubelet[3245]: W0413 19:27:49.188695 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.188805 kubelet[3245]: E0413 19:27:49.188717 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.189027 kubelet[3245]: E0413 19:27:49.188915 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.189027 kubelet[3245]: W0413 19:27:49.188924 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.189027 kubelet[3245]: E0413 19:27:49.188934 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.189374 kubelet[3245]: E0413 19:27:49.189247 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.189374 kubelet[3245]: W0413 19:27:49.189268 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.189374 kubelet[3245]: E0413 19:27:49.189281 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.189619 kubelet[3245]: E0413 19:27:49.189535 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.189619 kubelet[3245]: W0413 19:27:49.189546 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.189619 kubelet[3245]: E0413 19:27:49.189556 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.189996 kubelet[3245]: E0413 19:27:49.189890 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.189996 kubelet[3245]: W0413 19:27:49.189901 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.189996 kubelet[3245]: E0413 19:27:49.189912 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.190219 kubelet[3245]: E0413 19:27:49.190156 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.190219 kubelet[3245]: W0413 19:27:49.190166 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.190219 kubelet[3245]: E0413 19:27:49.190175 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.190603 kubelet[3245]: E0413 19:27:49.190486 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.190603 kubelet[3245]: W0413 19:27:49.190497 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.190603 kubelet[3245]: E0413 19:27:49.190512 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.191230 kubelet[3245]: E0413 19:27:49.191119 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.191230 kubelet[3245]: W0413 19:27:49.191130 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.191230 kubelet[3245]: E0413 19:27:49.191141 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.191505 kubelet[3245]: E0413 19:27:49.191400 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.191505 kubelet[3245]: W0413 19:27:49.191410 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.191505 kubelet[3245]: E0413 19:27:49.191420 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.191855 kubelet[3245]: E0413 19:27:49.191720 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.191855 kubelet[3245]: W0413 19:27:49.191732 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.191855 kubelet[3245]: E0413 19:27:49.191742 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.192123 kubelet[3245]: E0413 19:27:49.192053 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.192123 kubelet[3245]: W0413 19:27:49.192063 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.192123 kubelet[3245]: E0413 19:27:49.192073 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.192455 kubelet[3245]: E0413 19:27:49.192368 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.192455 kubelet[3245]: W0413 19:27:49.192379 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.192455 kubelet[3245]: E0413 19:27:49.192388 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.192773 kubelet[3245]: E0413 19:27:49.192666 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.192773 kubelet[3245]: W0413 19:27:49.192675 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.192773 kubelet[3245]: E0413 19:27:49.192685 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.193131 kubelet[3245]: E0413 19:27:49.193120 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.193237 kubelet[3245]: W0413 19:27:49.193181 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.193237 kubelet[3245]: E0413 19:27:49.193198 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.193559 kubelet[3245]: E0413 19:27:49.193448 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.193559 kubelet[3245]: W0413 19:27:49.193458 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.193559 kubelet[3245]: E0413 19:27:49.193468 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.196898 kubelet[3245]: E0413 19:27:49.196884 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.197097 kubelet[3245]: W0413 19:27:49.196977 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.197097 kubelet[3245]: E0413 19:27:49.197018 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.197097 kubelet[3245]: I0413 19:27:49.197044 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20dfe0a1-3f3d-4996-8368-9e8b44bb53cd-registration-dir\") pod \"csi-node-driver-ghjt9\" (UID: \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\") " pod="calico-system/csi-node-driver-ghjt9" Apr 13 19:27:49.197424 kubelet[3245]: E0413 19:27:49.197352 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.197424 kubelet[3245]: W0413 19:27:49.197364 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.197424 kubelet[3245]: E0413 19:27:49.197377 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.197424 kubelet[3245]: I0413 19:27:49.197398 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20dfe0a1-3f3d-4996-8368-9e8b44bb53cd-kubelet-dir\") pod \"csi-node-driver-ghjt9\" (UID: \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\") " pod="calico-system/csi-node-driver-ghjt9" Apr 13 19:27:49.197647 kubelet[3245]: E0413 19:27:49.197623 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.197699 kubelet[3245]: W0413 19:27:49.197660 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.197699 kubelet[3245]: E0413 19:27:49.197676 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.197919 kubelet[3245]: E0413 19:27:49.197906 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.197919 kubelet[3245]: W0413 19:27:49.197917 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198007 kubelet[3245]: E0413 19:27:49.197926 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.198098 kubelet[3245]: E0413 19:27:49.198085 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.198098 kubelet[3245]: W0413 19:27:49.198096 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198153 kubelet[3245]: E0413 19:27:49.198104 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.198153 kubelet[3245]: I0413 19:27:49.198127 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20dfe0a1-3f3d-4996-8368-9e8b44bb53cd-socket-dir\") pod \"csi-node-driver-ghjt9\" (UID: \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\") " pod="calico-system/csi-node-driver-ghjt9" Apr 13 19:27:49.198288 kubelet[3245]: E0413 19:27:49.198275 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.198288 kubelet[3245]: W0413 19:27:49.198286 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198367 kubelet[3245]: E0413 19:27:49.198295 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.198367 kubelet[3245]: I0413 19:27:49.198313 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20dfe0a1-3f3d-4996-8368-9e8b44bb53cd-varrun\") pod \"csi-node-driver-ghjt9\" (UID: \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\") " pod="calico-system/csi-node-driver-ghjt9" Apr 13 19:27:49.198459 kubelet[3245]: E0413 19:27:49.198444 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.198459 kubelet[3245]: W0413 19:27:49.198455 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198517 kubelet[3245]: E0413 19:27:49.198465 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.198517 kubelet[3245]: I0413 19:27:49.198484 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdfrb\" (UniqueName: \"kubernetes.io/projected/20dfe0a1-3f3d-4996-8368-9e8b44bb53cd-kube-api-access-gdfrb\") pod \"csi-node-driver-ghjt9\" (UID: \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\") " pod="calico-system/csi-node-driver-ghjt9" Apr 13 19:27:49.198852 kubelet[3245]: E0413 19:27:49.198707 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.198852 kubelet[3245]: W0413 19:27:49.198719 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198852 kubelet[3245]: E0413 19:27:49.198729 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.198988 kubelet[3245]: E0413 19:27:49.198876 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.198988 kubelet[3245]: W0413 19:27:49.198884 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.198988 kubelet[3245]: E0413 19:27:49.198892 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.199078 kubelet[3245]: E0413 19:27:49.199034 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199078 kubelet[3245]: W0413 19:27:49.199041 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199078 kubelet[3245]: E0413 19:27:49.199049 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.199205 kubelet[3245]: E0413 19:27:49.199191 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199205 kubelet[3245]: W0413 19:27:49.199201 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199279 kubelet[3245]: E0413 19:27:49.199210 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.199346 kubelet[3245]: E0413 19:27:49.199334 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199346 kubelet[3245]: W0413 19:27:49.199344 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199426 kubelet[3245]: E0413 19:27:49.199352 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.199500 kubelet[3245]: E0413 19:27:49.199488 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199538 kubelet[3245]: W0413 19:27:49.199500 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199538 kubelet[3245]: E0413 19:27:49.199508 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.199675 kubelet[3245]: E0413 19:27:49.199663 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199675 kubelet[3245]: W0413 19:27:49.199673 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199750 kubelet[3245]: E0413 19:27:49.199682 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.199834 kubelet[3245]: E0413 19:27:49.199822 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.199834 kubelet[3245]: W0413 19:27:49.199832 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.199894 kubelet[3245]: E0413 19:27:49.199841 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.230699 containerd[1736]: time="2026-04-13T19:27:49.230620321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:49.231216 containerd[1736]: time="2026-04-13T19:27:49.231008720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:49.231216 containerd[1736]: time="2026-04-13T19:27:49.231092800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:49.231318 containerd[1736]: time="2026-04-13T19:27:49.231203519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:49.252798 systemd[1]: Started cri-containerd-71385804984088452a0ab3dab5997750ec13df671067462c9032c04ca986f5f4.scope - libcontainer container 71385804984088452a0ab3dab5997750ec13df671067462c9032c04ca986f5f4. Apr 13 19:27:49.284557 containerd[1736]: time="2026-04-13T19:27:49.284521668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cf858875-kbdsp,Uid:8ec93d19-7756-4a87-b547-96502fce5e09,Namespace:calico-system,Attempt:0,} returns sandbox id \"71385804984088452a0ab3dab5997750ec13df671067462c9032c04ca986f5f4\"" Apr 13 19:27:49.286638 containerd[1736]: time="2026-04-13T19:27:49.286434503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.299994 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.300467 kubelet[3245]: W0413 19:27:49.300016 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.300037 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.300223 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.300467 kubelet[3245]: W0413 19:27:49.300235 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.300246 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.300388 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.300467 kubelet[3245]: W0413 19:27:49.300396 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.300467 kubelet[3245]: E0413 19:27:49.300404 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.301349 kubelet[3245]: E0413 19:27:49.301331 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.301349 kubelet[3245]: W0413 19:27:49.301345 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.301425 kubelet[3245]: E0413 19:27:49.301355 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.301574 kubelet[3245]: E0413 19:27:49.301558 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.301574 kubelet[3245]: W0413 19:27:49.301570 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.301659 kubelet[3245]: E0413 19:27:49.301580 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.301987 containerd[1736]: time="2026-04-13T19:27:49.301949865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wdp5z,Uid:0064cff9-7eca-44e8-8cbb-9b462c7b8d8d,Namespace:calico-system,Attempt:0,}" Apr 13 19:27:49.302293 kubelet[3245]: E0413 19:27:49.302274 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.302293 kubelet[3245]: W0413 19:27:49.302291 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.302352 kubelet[3245]: E0413 19:27:49.302305 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.302631 kubelet[3245]: E0413 19:27:49.302564 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.302631 kubelet[3245]: W0413 19:27:49.302578 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.302631 kubelet[3245]: E0413 19:27:49.302608 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.302916 kubelet[3245]: E0413 19:27:49.302830 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.302916 kubelet[3245]: W0413 19:27:49.302838 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.302916 kubelet[3245]: E0413 19:27:49.302847 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.303091 kubelet[3245]: E0413 19:27:49.303022 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.303091 kubelet[3245]: W0413 19:27:49.303030 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.303091 kubelet[3245]: E0413 19:27:49.303038 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.303257 kubelet[3245]: E0413 19:27:49.303207 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.303257 kubelet[3245]: W0413 19:27:49.303214 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.303257 kubelet[3245]: E0413 19:27:49.303222 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.303403 kubelet[3245]: E0413 19:27:49.303385 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.303403 kubelet[3245]: W0413 19:27:49.303392 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.303403 kubelet[3245]: E0413 19:27:49.303400 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.303656 kubelet[3245]: E0413 19:27:49.303638 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.303656 kubelet[3245]: W0413 19:27:49.303652 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.303742 kubelet[3245]: E0413 19:27:49.303661 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.304004 kubelet[3245]: E0413 19:27:49.303940 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.304004 kubelet[3245]: W0413 19:27:49.303956 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.304004 kubelet[3245]: E0413 19:27:49.303984 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.304429 kubelet[3245]: E0413 19:27:49.304409 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.304429 kubelet[3245]: W0413 19:27:49.304426 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.305207 kubelet[3245]: E0413 19:27:49.304440 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.305498 kubelet[3245]: E0413 19:27:49.305474 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.305498 kubelet[3245]: W0413 19:27:49.305491 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.305709 kubelet[3245]: E0413 19:27:49.305506 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.305836 kubelet[3245]: E0413 19:27:49.305819 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.305836 kubelet[3245]: W0413 19:27:49.305834 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.305913 kubelet[3245]: E0413 19:27:49.305850 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.306392 kubelet[3245]: E0413 19:27:49.306373 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.306392 kubelet[3245]: W0413 19:27:49.306390 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.306392 kubelet[3245]: E0413 19:27:49.306416 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.306863 kubelet[3245]: E0413 19:27:49.306685 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.306863 kubelet[3245]: W0413 19:27:49.306698 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.306863 kubelet[3245]: E0413 19:27:49.306711 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.306863 kubelet[3245]: E0413 19:27:49.306876 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.306985 kubelet[3245]: W0413 19:27:49.306884 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.306985 kubelet[3245]: E0413 19:27:49.306977 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.308449 kubelet[3245]: E0413 19:27:49.307197 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.308449 kubelet[3245]: W0413 19:27:49.307207 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.308449 kubelet[3245]: E0413 19:27:49.307230 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.308541 kubelet[3245]: E0413 19:27:49.308524 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.308541 kubelet[3245]: W0413 19:27:49.308538 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.308610 kubelet[3245]: E0413 19:27:49.308551 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.308816 kubelet[3245]: E0413 19:27:49.308802 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.308854 kubelet[3245]: W0413 19:27:49.308815 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.308854 kubelet[3245]: E0413 19:27:49.308836 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.309048 kubelet[3245]: E0413 19:27:49.309036 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.309082 kubelet[3245]: W0413 19:27:49.309047 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.309082 kubelet[3245]: E0413 19:27:49.309061 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.309286 kubelet[3245]: E0413 19:27:49.309271 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.309286 kubelet[3245]: W0413 19:27:49.309284 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.309358 kubelet[3245]: E0413 19:27:49.309297 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.309507 kubelet[3245]: E0413 19:27:49.309494 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.309507 kubelet[3245]: W0413 19:27:49.309505 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.309576 kubelet[3245]: E0413 19:27:49.309517 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:49.316080 kubelet[3245]: E0413 19:27:49.316057 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:49.316080 kubelet[3245]: W0413 19:27:49.316075 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:49.316177 kubelet[3245]: E0413 19:27:49.316091 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:49.359021 containerd[1736]: time="2026-04-13T19:27:49.358938244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:49.359021 containerd[1736]: time="2026-04-13T19:27:49.358994444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:49.359449 containerd[1736]: time="2026-04-13T19:27:49.359220403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:49.359449 containerd[1736]: time="2026-04-13T19:27:49.359366083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:49.377771 systemd[1]: Started cri-containerd-4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc.scope - libcontainer container 4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc. 
Apr 13 19:27:49.398860 containerd[1736]: time="2026-04-13T19:27:49.398785465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wdp5z,Uid:0064cff9-7eca-44e8-8cbb-9b462c7b8d8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\"" Apr 13 19:27:50.583433 kubelet[3245]: E0413 19:27:50.582908 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:50.852417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106881644.mount: Deactivated successfully. Apr 13 19:27:51.297552 containerd[1736]: time="2026-04-13T19:27:51.296809337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:51.301134 containerd[1736]: time="2026-04-13T19:27:51.301095926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:27:51.304980 containerd[1736]: time="2026-04-13T19:27:51.304928997Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:51.316682 containerd[1736]: time="2026-04-13T19:27:51.315879490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:51.316682 containerd[1736]: time="2026-04-13T19:27:51.316545808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.030075825s" Apr 13 19:27:51.316682 containerd[1736]: time="2026-04-13T19:27:51.316576488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:27:51.319789 containerd[1736]: time="2026-04-13T19:27:51.318919122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:27:51.335708 containerd[1736]: time="2026-04-13T19:27:51.335662161Z" level=info msg="CreateContainer within sandbox \"71385804984088452a0ab3dab5997750ec13df671067462c9032c04ca986f5f4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:27:51.382658 containerd[1736]: time="2026-04-13T19:27:51.381895767Z" level=info msg="CreateContainer within sandbox \"71385804984088452a0ab3dab5997750ec13df671067462c9032c04ca986f5f4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"07eab9784f0b7e1b6511706986d716eceacd7038bd974738a2e21efb3943d601\"" Apr 13 19:27:51.382954 containerd[1736]: time="2026-04-13T19:27:51.382902324Z" level=info msg="StartContainer for \"07eab9784f0b7e1b6511706986d716eceacd7038bd974738a2e21efb3943d601\"" Apr 13 19:27:51.415787 systemd[1]: Started cri-containerd-07eab9784f0b7e1b6511706986d716eceacd7038bd974738a2e21efb3943d601.scope - libcontainer container 07eab9784f0b7e1b6511706986d716eceacd7038bd974738a2e21efb3943d601. 
Apr 13 19:27:51.461117 containerd[1736]: time="2026-04-13T19:27:51.461059251Z" level=info msg="StartContainer for \"07eab9784f0b7e1b6511706986d716eceacd7038bd974738a2e21efb3943d601\" returns successfully" Apr 13 19:27:51.708889 kubelet[3245]: E0413 19:27:51.708857 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.709783 kubelet[3245]: W0413 19:27:51.709628 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.709783 kubelet[3245]: E0413 19:27:51.709667 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.710781 kubelet[3245]: E0413 19:27:51.710700 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.710781 kubelet[3245]: W0413 19:27:51.710718 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.710781 kubelet[3245]: E0413 19:27:51.710733 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.712286 kubelet[3245]: E0413 19:27:51.712135 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.712286 kubelet[3245]: W0413 19:27:51.712152 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.712286 kubelet[3245]: E0413 19:27:51.712171 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.712612 kubelet[3245]: E0413 19:27:51.712412 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.712612 kubelet[3245]: W0413 19:27:51.712422 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.712612 kubelet[3245]: E0413 19:27:51.712446 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.713622 kubelet[3245]: E0413 19:27:51.713064 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.713622 kubelet[3245]: W0413 19:27:51.713078 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.713622 kubelet[3245]: E0413 19:27:51.713116 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.713965 kubelet[3245]: E0413 19:27:51.713847 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.713965 kubelet[3245]: W0413 19:27:51.713861 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.713965 kubelet[3245]: E0413 19:27:51.713872 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.714618 kubelet[3245]: E0413 19:27:51.714278 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.714618 kubelet[3245]: W0413 19:27:51.714303 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.714618 kubelet[3245]: E0413 19:27:51.714316 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.716581 kubelet[3245]: E0413 19:27:51.714856 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.716581 kubelet[3245]: W0413 19:27:51.714868 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.716581 kubelet[3245]: E0413 19:27:51.714883 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.716581 kubelet[3245]: E0413 19:27:51.715518 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.716581 kubelet[3245]: W0413 19:27:51.715529 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.716581 kubelet[3245]: E0413 19:27:51.715610 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.716960 kubelet[3245]: E0413 19:27:51.716853 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.716960 kubelet[3245]: W0413 19:27:51.716867 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.716960 kubelet[3245]: E0413 19:27:51.716879 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.717230 kubelet[3245]: E0413 19:27:51.717171 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.717230 kubelet[3245]: W0413 19:27:51.717182 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.717230 kubelet[3245]: E0413 19:27:51.717192 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.717576 kubelet[3245]: E0413 19:27:51.717473 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.717576 kubelet[3245]: W0413 19:27:51.717484 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.717576 kubelet[3245]: E0413 19:27:51.717494 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.719615 kubelet[3245]: E0413 19:27:51.719483 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.719615 kubelet[3245]: W0413 19:27:51.719497 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.719615 kubelet[3245]: E0413 19:27:51.719510 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.719936 kubelet[3245]: E0413 19:27:51.719924 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.720040 kubelet[3245]: W0413 19:27:51.720004 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.720040 kubelet[3245]: E0413 19:27:51.720021 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.720442 kubelet[3245]: E0413 19:27:51.720299 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.720442 kubelet[3245]: W0413 19:27:51.720311 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.720442 kubelet[3245]: E0413 19:27:51.720322 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.720763 kubelet[3245]: E0413 19:27:51.720627 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.720763 kubelet[3245]: W0413 19:27:51.720644 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.720763 kubelet[3245]: E0413 19:27:51.720656 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.721281 kubelet[3245]: E0413 19:27:51.720870 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.721281 kubelet[3245]: W0413 19:27:51.720879 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.721281 kubelet[3245]: E0413 19:27:51.720889 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.721828 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.723276 kubelet[3245]: W0413 19:27:51.721879 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.721892 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.722426 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.723276 kubelet[3245]: W0413 19:27:51.722438 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.722451 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.722784 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.723276 kubelet[3245]: W0413 19:27:51.722795 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.723276 kubelet[3245]: E0413 19:27:51.722806 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.724958 kubelet[3245]: E0413 19:27:51.724402 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.724958 kubelet[3245]: W0413 19:27:51.724419 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.724958 kubelet[3245]: E0413 19:27:51.724438 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.725711 kubelet[3245]: E0413 19:27:51.725185 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.725711 kubelet[3245]: W0413 19:27:51.725215 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.725711 kubelet[3245]: E0413 19:27:51.725229 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.726098 kubelet[3245]: E0413 19:27:51.726071 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.726176 kubelet[3245]: W0413 19:27:51.726084 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.726176 kubelet[3245]: E0413 19:27:51.726158 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.727095 kubelet[3245]: E0413 19:27:51.727044 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.727095 kubelet[3245]: W0413 19:27:51.727058 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.727095 kubelet[3245]: E0413 19:27:51.727070 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.728881 kubelet[3245]: E0413 19:27:51.727913 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.728881 kubelet[3245]: W0413 19:27:51.727925 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.728881 kubelet[3245]: E0413 19:27:51.727938 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.729281 kubelet[3245]: E0413 19:27:51.728999 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.729281 kubelet[3245]: W0413 19:27:51.729085 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.729281 kubelet[3245]: E0413 19:27:51.729098 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.730200 kubelet[3245]: E0413 19:27:51.730182 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.730540 kubelet[3245]: W0413 19:27:51.730435 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.730540 kubelet[3245]: E0413 19:27:51.730454 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.731562 kubelet[3245]: E0413 19:27:51.731342 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.731562 kubelet[3245]: W0413 19:27:51.731356 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.731562 kubelet[3245]: E0413 19:27:51.731368 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.732941 kubelet[3245]: E0413 19:27:51.732784 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.732941 kubelet[3245]: W0413 19:27:51.732796 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.732941 kubelet[3245]: E0413 19:27:51.732807 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.736600 kubelet[3245]: E0413 19:27:51.736443 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.736600 kubelet[3245]: W0413 19:27:51.736460 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.736600 kubelet[3245]: E0413 19:27:51.736476 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.738795 kubelet[3245]: E0413 19:27:51.738782 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.739034 kubelet[3245]: W0413 19:27:51.739017 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.739476 kubelet[3245]: E0413 19:27:51.739457 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:51.739931 kubelet[3245]: E0413 19:27:51.739918 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.740165 kubelet[3245]: W0413 19:27:51.740018 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.740165 kubelet[3245]: E0413 19:27:51.740035 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:51.741216 kubelet[3245]: E0413 19:27:51.741154 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:51.741216 kubelet[3245]: W0413 19:27:51.741169 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:51.741216 kubelet[3245]: E0413 19:27:51.741181 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.582611 kubelet[3245]: E0413 19:27:52.582562 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:52.666662 kubelet[3245]: I0413 19:27:52.666320 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:27:52.729202 kubelet[3245]: E0413 19:27:52.729064 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.729202 kubelet[3245]: W0413 19:27:52.729092 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.729202 kubelet[3245]: E0413 19:27:52.729117 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.730322 kubelet[3245]: E0413 19:27:52.729875 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.730322 kubelet[3245]: W0413 19:27:52.729889 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.730322 kubelet[3245]: E0413 19:27:52.729908 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.730322 kubelet[3245]: E0413 19:27:52.730102 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.730322 kubelet[3245]: W0413 19:27:52.730116 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.730322 kubelet[3245]: E0413 19:27:52.730126 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.730738 kubelet[3245]: E0413 19:27:52.730461 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.730738 kubelet[3245]: W0413 19:27:52.730471 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.730738 kubelet[3245]: E0413 19:27:52.730483 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.731074 kubelet[3245]: E0413 19:27:52.731062 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.731351 kubelet[3245]: W0413 19:27:52.731132 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.731351 kubelet[3245]: E0413 19:27:52.731146 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.731826 kubelet[3245]: E0413 19:27:52.731680 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.731826 kubelet[3245]: W0413 19:27:52.731693 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.731826 kubelet[3245]: E0413 19:27:52.731703 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.732203 kubelet[3245]: E0413 19:27:52.732024 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.732203 kubelet[3245]: W0413 19:27:52.732036 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.732203 kubelet[3245]: E0413 19:27:52.732046 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.732577 kubelet[3245]: E0413 19:27:52.732426 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.732577 kubelet[3245]: W0413 19:27:52.732441 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.732577 kubelet[3245]: E0413 19:27:52.732452 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.733055 kubelet[3245]: E0413 19:27:52.732946 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.733055 kubelet[3245]: W0413 19:27:52.732960 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.733055 kubelet[3245]: E0413 19:27:52.732975 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.733458 kubelet[3245]: E0413 19:27:52.733347 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.733458 kubelet[3245]: W0413 19:27:52.733358 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.733458 kubelet[3245]: E0413 19:27:52.733368 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.733784 kubelet[3245]: E0413 19:27:52.733693 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.733784 kubelet[3245]: W0413 19:27:52.733705 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.733784 kubelet[3245]: E0413 19:27:52.733715 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.734457 kubelet[3245]: E0413 19:27:52.734257 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.734457 kubelet[3245]: W0413 19:27:52.734274 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.734457 kubelet[3245]: E0413 19:27:52.734288 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.734753 kubelet[3245]: E0413 19:27:52.734647 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.734753 kubelet[3245]: W0413 19:27:52.734659 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.734753 kubelet[3245]: E0413 19:27:52.734669 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.735206 kubelet[3245]: E0413 19:27:52.735081 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.735206 kubelet[3245]: W0413 19:27:52.735093 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.735206 kubelet[3245]: E0413 19:27:52.735103 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.735566 kubelet[3245]: E0413 19:27:52.735390 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.735566 kubelet[3245]: W0413 19:27:52.735400 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.735566 kubelet[3245]: E0413 19:27:52.735410 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.735799 kubelet[3245]: E0413 19:27:52.735787 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.735862 kubelet[3245]: W0413 19:27:52.735851 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.735912 kubelet[3245]: E0413 19:27:52.735903 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.736346 kubelet[3245]: E0413 19:27:52.736333 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.736465 kubelet[3245]: W0413 19:27:52.736452 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.736573 kubelet[3245]: E0413 19:27:52.736537 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.737149 kubelet[3245]: E0413 19:27:52.737126 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.737149 kubelet[3245]: W0413 19:27:52.737144 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.737230 kubelet[3245]: E0413 19:27:52.737158 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.737657 kubelet[3245]: E0413 19:27:52.737626 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.737657 kubelet[3245]: W0413 19:27:52.737642 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.737657 kubelet[3245]: E0413 19:27:52.737654 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.737954 kubelet[3245]: E0413 19:27:52.737938 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.737954 kubelet[3245]: W0413 19:27:52.737952 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.738012 kubelet[3245]: E0413 19:27:52.737980 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.738446 kubelet[3245]: E0413 19:27:52.738422 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.738446 kubelet[3245]: W0413 19:27:52.738439 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.738617 kubelet[3245]: E0413 19:27:52.738451 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.738879 kubelet[3245]: E0413 19:27:52.738863 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.738879 kubelet[3245]: W0413 19:27:52.738879 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.738944 kubelet[3245]: E0413 19:27:52.738895 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.739121 kubelet[3245]: E0413 19:27:52.739108 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.739121 kubelet[3245]: W0413 19:27:52.739120 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.739199 kubelet[3245]: E0413 19:27:52.739131 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.739965 kubelet[3245]: E0413 19:27:52.739945 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.739965 kubelet[3245]: W0413 19:27:52.739962 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.740100 kubelet[3245]: E0413 19:27:52.739974 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.740130 kubelet[3245]: E0413 19:27:52.740123 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.740155 kubelet[3245]: W0413 19:27:52.740131 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.740155 kubelet[3245]: E0413 19:27:52.740139 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.740316 kubelet[3245]: E0413 19:27:52.740303 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.740316 kubelet[3245]: W0413 19:27:52.740315 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.740382 kubelet[3245]: E0413 19:27:52.740326 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.740547 kubelet[3245]: E0413 19:27:52.740532 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.740547 kubelet[3245]: W0413 19:27:52.740544 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.740649 kubelet[3245]: E0413 19:27:52.740554 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.740967 kubelet[3245]: E0413 19:27:52.740949 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.740967 kubelet[3245]: W0413 19:27:52.740965 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.741037 kubelet[3245]: E0413 19:27:52.740976 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.741781 kubelet[3245]: E0413 19:27:52.741758 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.741781 kubelet[3245]: W0413 19:27:52.741774 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.741781 kubelet[3245]: E0413 19:27:52.741785 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.741971 kubelet[3245]: E0413 19:27:52.741952 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.741971 kubelet[3245]: W0413 19:27:52.741964 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.742036 kubelet[3245]: E0413 19:27:52.741973 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.742420 kubelet[3245]: E0413 19:27:52.742370 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.742420 kubelet[3245]: W0413 19:27:52.742381 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.742420 kubelet[3245]: E0413 19:27:52.742390 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.742740 kubelet[3245]: E0413 19:27:52.742723 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.742740 kubelet[3245]: W0413 19:27:52.742738 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.742916 kubelet[3245]: E0413 19:27:52.742749 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:27:52.743110 kubelet[3245]: E0413 19:27:52.743030 3245 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:27:52.743110 kubelet[3245]: W0413 19:27:52.743042 3245 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:27:52.743110 kubelet[3245]: E0413 19:27:52.743052 3245 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:27:52.799737 containerd[1736]: time="2026-04-13T19:27:52.798970626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:52.803230 containerd[1736]: time="2026-04-13T19:27:52.803192135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 13 19:27:52.807006 containerd[1736]: time="2026-04-13T19:27:52.806956446Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:52.812009 containerd[1736]: time="2026-04-13T19:27:52.811956434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:52.812883 containerd[1736]: time="2026-04-13T19:27:52.812570552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.49360403s" Apr 13 19:27:52.812883 containerd[1736]: time="2026-04-13T19:27:52.812618992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:27:52.821336 containerd[1736]: time="2026-04-13T19:27:52.821304371Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:27:52.876328 containerd[1736]: time="2026-04-13T19:27:52.876002676Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629\"" Apr 13 19:27:52.877355 containerd[1736]: time="2026-04-13T19:27:52.876645194Z" level=info msg="StartContainer for \"0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629\"" Apr 13 19:27:52.906818 systemd[1]: Started cri-containerd-0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629.scope - libcontainer container 0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629. Apr 13 19:27:52.934516 containerd[1736]: time="2026-04-13T19:27:52.934463851Z" level=info msg="StartContainer for \"0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629\" returns successfully" Apr 13 19:27:52.939737 systemd[1]: cri-containerd-0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629.scope: Deactivated successfully. Apr 13 19:27:52.959171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:53.692908 kubelet[3245]: I0413 19:27:53.692080 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-84cf858875-kbdsp" podStartSLOduration=3.660567951 podStartE2EDuration="5.692067893s" podCreationTimestamp="2026-04-13 19:27:48 +0000 UTC" firstStartedPulling="2026-04-13 19:27:49.285942944 +0000 UTC m=+22.815234741" lastFinishedPulling="2026-04-13 19:27:51.317442886 +0000 UTC m=+24.846734683" observedRunningTime="2026-04-13 19:27:51.738104887 +0000 UTC m=+25.267396684" watchObservedRunningTime="2026-04-13 19:27:53.692067893 +0000 UTC m=+27.221359690" Apr 13 19:27:54.072115 containerd[1736]: time="2026-04-13T19:27:54.071994507Z" level=info msg="shim disconnected" id=0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629 namespace=k8s.io Apr 13 19:27:54.072838 containerd[1736]: time="2026-04-13T19:27:54.072691465Z" level=warning msg="cleaning up after shim disconnected" id=0b905ae3fe8c34f470b8ac7706091a3950d9bbb19181b04d66becf281c4f4629 namespace=k8s.io Apr 13 19:27:54.072838 containerd[1736]: time="2026-04-13T19:27:54.072714225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:54.583883 kubelet[3245]: E0413 19:27:54.583796 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:54.674376 containerd[1736]: time="2026-04-13T19:27:54.674326287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:27:56.585109 kubelet[3245]: E0413 19:27:56.585023 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:58.583676 kubelet[3245]: E0413 19:27:58.583183 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:27:59.370963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279424476.mount: Deactivated successfully. Apr 13 19:27:59.485100 containerd[1736]: time="2026-04-13T19:27:59.484313792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.487975 containerd[1736]: time="2026-04-13T19:27:59.487933783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:27:59.491716 containerd[1736]: time="2026-04-13T19:27:59.491663374Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.497738 containerd[1736]: time="2026-04-13T19:27:59.496912281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:27:59.497738 containerd[1736]: time="2026-04-13T19:27:59.497610639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.823213312s" Apr 13 
19:27:59.497738 containerd[1736]: time="2026-04-13T19:27:59.497639759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 19:27:59.507028 containerd[1736]: time="2026-04-13T19:27:59.506990016Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:27:59.560083 containerd[1736]: time="2026-04-13T19:27:59.560043004Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b\"" Apr 13 19:27:59.562562 containerd[1736]: time="2026-04-13T19:27:59.560964242Z" level=info msg="StartContainer for \"17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b\"" Apr 13 19:27:59.592148 systemd[1]: Started cri-containerd-17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b.scope - libcontainer container 17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b. Apr 13 19:27:59.620957 containerd[1736]: time="2026-04-13T19:27:59.620912692Z" level=info msg="StartContainer for \"17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b\" returns successfully" Apr 13 19:27:59.655581 systemd[1]: cri-containerd-17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b.scope: Deactivated successfully. Apr 13 19:28:00.371527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b-rootfs.mount: Deactivated successfully. 
Apr 13 19:28:00.583916 kubelet[3245]: E0413 19:28:00.583831 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:28:01.226878 containerd[1736]: time="2026-04-13T19:28:01.226823414Z" level=info msg="shim disconnected" id=17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b namespace=k8s.io Apr 13 19:28:01.226878 containerd[1736]: time="2026-04-13T19:28:01.226871774Z" level=warning msg="cleaning up after shim disconnected" id=17fa7b05031f2d96921e8604e49b9a96cbbc28100634b017c747b06cfda72f9b namespace=k8s.io Apr 13 19:28:01.226878 containerd[1736]: time="2026-04-13T19:28:01.226879854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:28:01.688981 containerd[1736]: time="2026-04-13T19:28:01.688863280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:28:02.583894 kubelet[3245]: E0413 19:28:02.583854 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:28:04.041165 containerd[1736]: time="2026-04-13T19:28:04.041109537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:04.044927 containerd[1736]: time="2026-04-13T19:28:04.044686848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:28:04.049858 containerd[1736]: time="2026-04-13T19:28:04.049545116Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:04.057272 containerd[1736]: time="2026-04-13T19:28:04.057222296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:04.058302 containerd[1736]: time="2026-04-13T19:28:04.058273374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.369367895s" Apr 13 19:28:04.058364 containerd[1736]: time="2026-04-13T19:28:04.058303573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:28:04.074116 containerd[1736]: time="2026-04-13T19:28:04.073991533Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:28:04.136554 containerd[1736]: time="2026-04-13T19:28:04.136508413Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124\"" Apr 13 19:28:04.138137 containerd[1736]: time="2026-04-13T19:28:04.137041732Z" level=info msg="StartContainer for \"d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124\"" Apr 13 19:28:04.164747 systemd[1]: Started 
cri-containerd-d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124.scope - libcontainer container d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124. Apr 13 19:28:04.197335 containerd[1736]: time="2026-04-13T19:28:04.196944699Z" level=info msg="StartContainer for \"d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124\" returns successfully" Apr 13 19:28:04.583653 kubelet[3245]: E0413 19:28:04.582809 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd" Apr 13 19:28:05.497229 containerd[1736]: time="2026-04-13T19:28:05.497168370Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:28:05.503018 systemd[1]: cri-containerd-d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124.scope: Deactivated successfully. Apr 13 19:28:05.522269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124-rootfs.mount: Deactivated successfully. Apr 13 19:28:05.541638 kubelet[3245]: I0413 19:28:05.541605 3245 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 19:28:06.366640 systemd[1]: Created slice kubepods-burstable-pod4730c9ea_c5c7_441b_b9a2_363112328d61.slice - libcontainer container kubepods-burstable-pod4730c9ea_c5c7_441b_b9a2_363112328d61.slice. 
Apr 13 19:28:06.374254 containerd[1736]: time="2026-04-13T19:28:06.374019685Z" level=info msg="shim disconnected" id=d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124 namespace=k8s.io Apr 13 19:28:06.375016 containerd[1736]: time="2026-04-13T19:28:06.374405164Z" level=warning msg="cleaning up after shim disconnected" id=d277a7feb09b108e8a7e7fbf1053748806ec83c8b61e183b004d9dbef8da4124 namespace=k8s.io Apr 13 19:28:06.375016 containerd[1736]: time="2026-04-13T19:28:06.374424484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:28:06.380812 systemd[1]: Created slice kubepods-besteffort-podd41669ea_f411_4baa_9026_f6667bc038e5.slice - libcontainer container kubepods-besteffort-podd41669ea_f411_4baa_9026_f6667bc038e5.slice. Apr 13 19:28:06.399010 systemd[1]: Created slice kubepods-burstable-pode05ad226_2b3e_400d_9a42_c9477b96a890.slice - libcontainer container kubepods-burstable-pode05ad226_2b3e_400d_9a42_c9477b96a890.slice. Apr 13 19:28:06.417128 systemd[1]: Created slice kubepods-besteffort-podef55c60c_7526_45c5_9f34_71ce69563261.slice - libcontainer container kubepods-besteffort-podef55c60c_7526_45c5_9f34_71ce69563261.slice. 
Apr 13 19:28:06.424481 kubelet[3245]: I0413 19:28:06.423988 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqkm\" (UniqueName: \"kubernetes.io/projected/d41669ea-f411-4baa-9026-f6667bc038e5-kube-api-access-jxqkm\") pod \"calico-kube-controllers-7f7466fddb-kzt5z\" (UID: \"d41669ea-f411-4baa-9026-f6667bc038e5\") " pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z" Apr 13 19:28:06.424481 kubelet[3245]: I0413 19:28:06.424027 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12affd5b-ffcb-44a7-8551-5d3b9b496800-calico-apiserver-certs\") pod \"calico-apiserver-774669dd44-td4q9\" (UID: \"12affd5b-ffcb-44a7-8551-5d3b9b496800\") " pod="calico-system/calico-apiserver-774669dd44-td4q9" Apr 13 19:28:06.424481 kubelet[3245]: I0413 19:28:06.424046 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-backend-key-pair\") pod \"whisker-d898cc79b-fl9tk\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " pod="calico-system/whisker-d898cc79b-fl9tk" Apr 13 19:28:06.424481 kubelet[3245]: I0413 19:28:06.424060 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67679312-cc73-499f-abbf-6475bd30ecd4-config\") pod \"goldmane-9f7667bb8-p57hc\" (UID: \"67679312-cc73-499f-abbf-6475bd30ecd4\") " pod="calico-system/goldmane-9f7667bb8-p57hc" Apr 13 19:28:06.424481 kubelet[3245]: I0413 19:28:06.424075 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzp5\" (UniqueName: \"kubernetes.io/projected/67679312-cc73-499f-abbf-6475bd30ecd4-kube-api-access-9fzp5\") pod \"goldmane-9f7667bb8-p57hc\" 
(UID: \"67679312-cc73-499f-abbf-6475bd30ecd4\") " pod="calico-system/goldmane-9f7667bb8-p57hc" Apr 13 19:28:06.424907 kubelet[3245]: I0413 19:28:06.424090 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmctz\" (UniqueName: \"kubernetes.io/projected/12affd5b-ffcb-44a7-8551-5d3b9b496800-kube-api-access-dmctz\") pod \"calico-apiserver-774669dd44-td4q9\" (UID: \"12affd5b-ffcb-44a7-8551-5d3b9b496800\") " pod="calico-system/calico-apiserver-774669dd44-td4q9" Apr 13 19:28:06.424907 kubelet[3245]: I0413 19:28:06.424106 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkslf\" (UniqueName: \"kubernetes.io/projected/e05ad226-2b3e-400d-9a42-c9477b96a890-kube-api-access-qkslf\") pod \"coredns-7d764666f9-j6g9p\" (UID: \"e05ad226-2b3e-400d-9a42-c9477b96a890\") " pod="kube-system/coredns-7d764666f9-j6g9p" Apr 13 19:28:06.424907 kubelet[3245]: I0413 19:28:06.424121 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-nginx-config\") pod \"whisker-d898cc79b-fl9tk\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " pod="calico-system/whisker-d898cc79b-fl9tk" Apr 13 19:28:06.424907 kubelet[3245]: I0413 19:28:06.424137 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsgd\" (UniqueName: \"kubernetes.io/projected/ef55c60c-7526-45c5-9f34-71ce69563261-kube-api-access-gtsgd\") pod \"whisker-d898cc79b-fl9tk\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " pod="calico-system/whisker-d898cc79b-fl9tk" Apr 13 19:28:06.424907 kubelet[3245]: I0413 19:28:06.424157 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d41669ea-f411-4baa-9026-f6667bc038e5-tigera-ca-bundle\") pod \"calico-kube-controllers-7f7466fddb-kzt5z\" (UID: \"d41669ea-f411-4baa-9026-f6667bc038e5\") " pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z"
Apr 13 19:28:06.425020 kubelet[3245]: I0413 19:28:06.424171 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05ad226-2b3e-400d-9a42-c9477b96a890-config-volume\") pod \"coredns-7d764666f9-j6g9p\" (UID: \"e05ad226-2b3e-400d-9a42-c9477b96a890\") " pod="kube-system/coredns-7d764666f9-j6g9p"
Apr 13 19:28:06.425020 kubelet[3245]: I0413 19:28:06.424191 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdh5v\" (UniqueName: \"kubernetes.io/projected/4730c9ea-c5c7-441b-b9a2-363112328d61-kube-api-access-wdh5v\") pod \"coredns-7d764666f9-8dlfq\" (UID: \"4730c9ea-c5c7-441b-b9a2-363112328d61\") " pod="kube-system/coredns-7d764666f9-8dlfq"
Apr 13 19:28:06.425020 kubelet[3245]: I0413 19:28:06.424208 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-ca-bundle\") pod \"whisker-d898cc79b-fl9tk\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " pod="calico-system/whisker-d898cc79b-fl9tk"
Apr 13 19:28:06.425020 kubelet[3245]: I0413 19:28:06.424225 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4730c9ea-c5c7-441b-b9a2-363112328d61-config-volume\") pod \"coredns-7d764666f9-8dlfq\" (UID: \"4730c9ea-c5c7-441b-b9a2-363112328d61\") " pod="kube-system/coredns-7d764666f9-8dlfq"
Apr 13 19:28:06.425020 kubelet[3245]: I0413 19:28:06.424239 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/67679312-cc73-499f-abbf-6475bd30ecd4-goldmane-key-pair\") pod \"goldmane-9f7667bb8-p57hc\" (UID: \"67679312-cc73-499f-abbf-6475bd30ecd4\") " pod="calico-system/goldmane-9f7667bb8-p57hc"
Apr 13 19:28:06.425126 kubelet[3245]: I0413 19:28:06.424255 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9eae39d-8433-4dd2-b30d-cd79e85825ad-calico-apiserver-certs\") pod \"calico-apiserver-774669dd44-6hkg4\" (UID: \"d9eae39d-8433-4dd2-b30d-cd79e85825ad\") " pod="calico-system/calico-apiserver-774669dd44-6hkg4"
Apr 13 19:28:06.425126 kubelet[3245]: I0413 19:28:06.424270 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vfn\" (UniqueName: \"kubernetes.io/projected/d9eae39d-8433-4dd2-b30d-cd79e85825ad-kube-api-access-x5vfn\") pod \"calico-apiserver-774669dd44-6hkg4\" (UID: \"d9eae39d-8433-4dd2-b30d-cd79e85825ad\") " pod="calico-system/calico-apiserver-774669dd44-6hkg4"
Apr 13 19:28:06.425126 kubelet[3245]: I0413 19:28:06.424286 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67679312-cc73-499f-abbf-6475bd30ecd4-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-p57hc\" (UID: \"67679312-cc73-499f-abbf-6475bd30ecd4\") " pod="calico-system/goldmane-9f7667bb8-p57hc"
Apr 13 19:28:06.426203 systemd[1]: Created slice kubepods-besteffort-pod67679312_cc73_499f_abbf_6475bd30ecd4.slice - libcontainer container kubepods-besteffort-pod67679312_cc73_499f_abbf_6475bd30ecd4.slice.
Apr 13 19:28:06.431882 systemd[1]: Created slice kubepods-besteffort-pod12affd5b_ffcb_44a7_8551_5d3b9b496800.slice - libcontainer container kubepods-besteffort-pod12affd5b_ffcb_44a7_8551_5d3b9b496800.slice.
Apr 13 19:28:06.439948 systemd[1]: Created slice kubepods-besteffort-podd9eae39d_8433_4dd2_b30d_cd79e85825ad.slice - libcontainer container kubepods-besteffort-podd9eae39d_8433_4dd2_b30d_cd79e85825ad.slice.
Apr 13 19:28:06.588890 systemd[1]: Created slice kubepods-besteffort-pod20dfe0a1_3f3d_4996_8368_9e8b44bb53cd.slice - libcontainer container kubepods-besteffort-pod20dfe0a1_3f3d_4996_8368_9e8b44bb53cd.slice.
Apr 13 19:28:06.598810 containerd[1736]: time="2026-04-13T19:28:06.598773070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghjt9,Uid:20dfe0a1-3f3d-4996-8368-9e8b44bb53cd,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.681427 containerd[1736]: time="2026-04-13T19:28:06.681073539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dlfq,Uid:4730c9ea-c5c7-441b-b9a2-363112328d61,Namespace:kube-system,Attempt:0,}"
Apr 13 19:28:06.684225 containerd[1736]: time="2026-04-13T19:28:06.684187491Z" level=error msg="Failed to destroy network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.684662 containerd[1736]: time="2026-04-13T19:28:06.684633730Z" level=error msg="encountered an error cleaning up failed sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.684792 containerd[1736]: time="2026-04-13T19:28:06.684769929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghjt9,Uid:20dfe0a1-3f3d-4996-8368-9e8b44bb53cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.685058 kubelet[3245]: E0413 19:28:06.685015 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.685126 kubelet[3245]: E0413 19:28:06.685092 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghjt9"
Apr 13 19:28:06.685126 kubelet[3245]: E0413 19:28:06.685111 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghjt9"
Apr 13 19:28:06.685187 kubelet[3245]: E0413 19:28:06.685170 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghjt9_calico-system(20dfe0a1-3f3d-4996-8368-9e8b44bb53cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghjt9_calico-system(20dfe0a1-3f3d-4996-8368-9e8b44bb53cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd"
Apr 13 19:28:06.697262 containerd[1736]: time="2026-04-13T19:28:06.696955698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7466fddb-kzt5z,Uid:d41669ea-f411-4baa-9026-f6667bc038e5,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.703233 kubelet[3245]: I0413 19:28:06.702720 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a"
Apr 13 19:28:06.704855 containerd[1736]: time="2026-04-13T19:28:06.704826358Z" level=info msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\""
Apr 13 19:28:06.705191 containerd[1736]: time="2026-04-13T19:28:06.705115037Z" level=info msg="Ensure that sandbox 4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a in task-service has been cleanup successfully"
Apr 13 19:28:06.719521 containerd[1736]: time="2026-04-13T19:28:06.719486121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j6g9p,Uid:e05ad226-2b3e-400d-9a42-c9477b96a890,Namespace:kube-system,Attempt:0,}"
Apr 13 19:28:06.731799 containerd[1736]: time="2026-04-13T19:28:06.731451330Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 13 19:28:06.738811 containerd[1736]: time="2026-04-13T19:28:06.738435152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d898cc79b-fl9tk,Uid:ef55c60c-7526-45c5-9f34-71ce69563261,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.741985 containerd[1736]: time="2026-04-13T19:28:06.741489224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-p57hc,Uid:67679312-cc73-499f-abbf-6475bd30ecd4,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.750259 containerd[1736]: time="2026-04-13T19:28:06.750221242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-td4q9,Uid:12affd5b-ffcb-44a7-8551-5d3b9b496800,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.754880 containerd[1736]: time="2026-04-13T19:28:06.754835350Z" level=error msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" failed" error="failed to destroy network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.757412 kubelet[3245]: E0413 19:28:06.755106 3245 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a"
Apr 13 19:28:06.757412 kubelet[3245]: E0413 19:28:06.755155 3245 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a"}
Apr 13 19:28:06.757412 kubelet[3245]: E0413 19:28:06.755208 3245 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 19:28:06.757412 kubelet[3245]: E0413 19:28:06.755241 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghjt9" podUID="20dfe0a1-3f3d-4996-8368-9e8b44bb53cd"
Apr 13 19:28:06.762613 containerd[1736]: time="2026-04-13T19:28:06.762472330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-6hkg4,Uid:d9eae39d-8433-4dd2-b30d-cd79e85825ad,Namespace:calico-system,Attempt:0,}"
Apr 13 19:28:06.818656 containerd[1736]: time="2026-04-13T19:28:06.818529227Z" level=error msg="Failed to destroy network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.819265 containerd[1736]: time="2026-04-13T19:28:06.819105865Z" level=error msg="encountered an error cleaning up failed sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.819265 containerd[1736]: time="2026-04-13T19:28:06.819162145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dlfq,Uid:4730c9ea-c5c7-441b-b9a2-363112328d61,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.819639 kubelet[3245]: E0413 19:28:06.819602 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.819702 kubelet[3245]: E0413 19:28:06.819658 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8dlfq"
Apr 13 19:28:06.819702 kubelet[3245]: E0413 19:28:06.819677 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8dlfq"
Apr 13 19:28:06.819781 kubelet[3245]: E0413 19:28:06.819727 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-8dlfq_kube-system(4730c9ea-c5c7-441b-b9a2-363112328d61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-8dlfq_kube-system(4730c9ea-c5c7-441b-b9a2-363112328d61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8dlfq" podUID="4730c9ea-c5c7-441b-b9a2-363112328d61"
Apr 13 19:28:06.955433 containerd[1736]: time="2026-04-13T19:28:06.955324157Z" level=error msg="Failed to destroy network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.955885 containerd[1736]: time="2026-04-13T19:28:06.955852715Z" level=error msg="encountered an error cleaning up failed sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.955932 containerd[1736]: time="2026-04-13T19:28:06.955907755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7466fddb-kzt5z,Uid:d41669ea-f411-4baa-9026-f6667bc038e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.956646 kubelet[3245]: E0413 19:28:06.956233 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:06.956646 kubelet[3245]: E0413 19:28:06.956284 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z"
Apr 13 19:28:06.956646 kubelet[3245]: E0413 19:28:06.956300 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z"
Apr 13 19:28:06.956805 kubelet[3245]: E0413 19:28:06.956352 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f7466fddb-kzt5z_calico-system(d41669ea-f411-4baa-9026-f6667bc038e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f7466fddb-kzt5z_calico-system(d41669ea-f411-4baa-9026-f6667bc038e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z" podUID="d41669ea-f411-4baa-9026-f6667bc038e5"
Apr 13 19:28:06.994309 containerd[1736]: time="2026-04-13T19:28:06.994259097Z" level=info msg="CreateContainer within sandbox \"4e8e24904e2cf6d4d7cfd4b88fa7afcbb8a319bea025be683d59f969ecb635bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d\""
Apr 13 19:28:06.998345 containerd[1736]: time="2026-04-13T19:28:06.998308207Z" level=info msg="StartContainer for \"1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d\""
Apr 13 19:28:07.027266 systemd[1]: Started cri-containerd-1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d.scope - libcontainer container 1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d.
Apr 13 19:28:07.101922 containerd[1736]: time="2026-04-13T19:28:07.101679622Z" level=info msg="StartContainer for \"1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d\" returns successfully"
Apr 13 19:28:07.134262 containerd[1736]: time="2026-04-13T19:28:07.134216979Z" level=error msg="Failed to destroy network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.137952 containerd[1736]: time="2026-04-13T19:28:07.135293256Z" level=error msg="encountered an error cleaning up failed sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.137952 containerd[1736]: time="2026-04-13T19:28:07.135348696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j6g9p,Uid:e05ad226-2b3e-400d-9a42-c9477b96a890,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.138341 kubelet[3245]: E0413 19:28:07.137552 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.138341 kubelet[3245]: E0413 19:28:07.137610 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-j6g9p"
Apr 13 19:28:07.138341 kubelet[3245]: E0413 19:28:07.137634 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-j6g9p"
Apr 13 19:28:07.138452 kubelet[3245]: E0413 19:28:07.137681 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-j6g9p_kube-system(e05ad226-2b3e-400d-9a42-c9477b96a890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-j6g9p_kube-system(e05ad226-2b3e-400d-9a42-c9477b96a890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-j6g9p" podUID="e05ad226-2b3e-400d-9a42-c9477b96a890"
Apr 13 19:28:07.160717 containerd[1736]: time="2026-04-13T19:28:07.159948753Z" level=error msg="Failed to destroy network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.160717 containerd[1736]: time="2026-04-13T19:28:07.160321992Z" level=error msg="encountered an error cleaning up failed sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.160717 containerd[1736]: time="2026-04-13T19:28:07.160373912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d898cc79b-fl9tk,Uid:ef55c60c-7526-45c5-9f34-71ce69563261,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.161434 kubelet[3245]: E0413 19:28:07.161044 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.161434 kubelet[3245]: E0413 19:28:07.161098 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d898cc79b-fl9tk"
Apr 13 19:28:07.161434 kubelet[3245]: E0413 19:28:07.161116 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d898cc79b-fl9tk"
Apr 13 19:28:07.161563 kubelet[3245]: E0413 19:28:07.161177 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d898cc79b-fl9tk_calico-system(ef55c60c-7526-45c5-9f34-71ce69563261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d898cc79b-fl9tk_calico-system(ef55c60c-7526-45c5-9f34-71ce69563261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d898cc79b-fl9tk" podUID="ef55c60c-7526-45c5-9f34-71ce69563261"
Apr 13 19:28:07.171034 containerd[1736]: time="2026-04-13T19:28:07.170930085Z" level=error msg="Failed to destroy network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.171725 containerd[1736]: time="2026-04-13T19:28:07.171637483Z" level=error msg="encountered an error cleaning up failed sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.171929 containerd[1736]: time="2026-04-13T19:28:07.171878122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-td4q9,Uid:12affd5b-ffcb-44a7-8551-5d3b9b496800,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.172362 kubelet[3245]: E0413 19:28:07.172322 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.172615 kubelet[3245]: E0413 19:28:07.172492 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-774669dd44-td4q9"
Apr 13 19:28:07.172615 kubelet[3245]: E0413 19:28:07.172523 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-774669dd44-td4q9"
Apr 13 19:28:07.172831 kubelet[3245]: E0413 19:28:07.172748 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-774669dd44-td4q9_calico-system(12affd5b-ffcb-44a7-8551-5d3b9b496800)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-774669dd44-td4q9_calico-system(12affd5b-ffcb-44a7-8551-5d3b9b496800)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-774669dd44-td4q9" podUID="12affd5b-ffcb-44a7-8551-5d3b9b496800"
Apr 13 19:28:07.201298 containerd[1736]: time="2026-04-13T19:28:07.201004648Z" level=error msg="Failed to destroy network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.202189 containerd[1736]: time="2026-04-13T19:28:07.201800166Z" level=error msg="Failed to destroy network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.204850 containerd[1736]: time="2026-04-13T19:28:07.204241319Z" level=error msg="encountered an error cleaning up failed sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.204850 containerd[1736]: time="2026-04-13T19:28:07.204312519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-p57hc,Uid:67679312-cc73-499f-abbf-6475bd30ecd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.205090 containerd[1736]: time="2026-04-13T19:28:07.204931918Z" level=error msg="encountered an error cleaning up failed sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.205090 containerd[1736]: time="2026-04-13T19:28:07.205015237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-6hkg4,Uid:d9eae39d-8433-4dd2-b30d-cd79e85825ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.207646 kubelet[3245]: E0413 19:28:07.206019 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.207646 kubelet[3245]: E0413 19:28:07.206079 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-774669dd44-6hkg4"
Apr 13 19:28:07.207646 kubelet[3245]: E0413 19:28:07.206128 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-774669dd44-6hkg4"
Apr 13 19:28:07.207788 kubelet[3245]: E0413 19:28:07.206180 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-774669dd44-6hkg4_calico-system(d9eae39d-8433-4dd2-b30d-cd79e85825ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-774669dd44-6hkg4_calico-system(d9eae39d-8433-4dd2-b30d-cd79e85825ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-774669dd44-6hkg4" podUID="d9eae39d-8433-4dd2-b30d-cd79e85825ad"
Apr 13 19:28:07.207788 kubelet[3245]: E0413 19:28:07.206019 3245 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 19:28:07.207788 kubelet[3245]: E0413 19:28:07.206424 3245 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-p57hc"
Apr 13 19:28:07.207887 kubelet[3245]: E0413 19:28:07.206441 3245 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-p57hc"
Apr 13 19:28:07.207887 kubelet[3245]: E0413 19:28:07.206472 3245 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-p57hc_calico-system(67679312-cc73-499f-abbf-6475bd30ecd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-p57hc_calico-system(67679312-cc73-499f-abbf-6475bd30ecd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-p57hc" podUID="67679312-cc73-499f-abbf-6475bd30ecd4"
Apr 13 19:28:07.705624 kubelet[3245]: I0413 19:28:07.705501 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3"
Apr 13 19:28:07.706696 containerd[1736]: time="2026-04-13T19:28:07.706272594Z" level=info msg="StopPodSandbox for \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\""
Apr 13 19:28:07.706696 containerd[1736]: time="2026-04-13T19:28:07.706462714Z" level=info msg="Ensure that sandbox 9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3 in task-service has been cleanup successfully"
Apr 13 19:28:07.708976 kubelet[3245]: I0413 19:28:07.708937 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39"
Apr 13 19:28:07.709251 containerd[1736]: time="2026-04-13T19:28:07.709229267Z" level=info msg="StopPodSandbox for \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\""
Apr 13 19:28:07.709703 containerd[1736]: time="2026-04-13T19:28:07.709684825Z" level=info msg="Ensure that sandbox afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39 in task-service has been cleanup successfully"
Apr 13 19:28:07.712227 kubelet[3245]: I0413 19:28:07.712203 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156"
Apr 13 19:28:07.713994
containerd[1736]: time="2026-04-13T19:28:07.713951415Z" level=info msg="StopPodSandbox for \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\"" Apr 13 19:28:07.716003 containerd[1736]: time="2026-04-13T19:28:07.714519973Z" level=info msg="Ensure that sandbox bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156 in task-service has been cleanup successfully" Apr 13 19:28:07.720485 kubelet[3245]: I0413 19:28:07.720465 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:07.724191 containerd[1736]: time="2026-04-13T19:28:07.724157788Z" level=info msg="StopPodSandbox for \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\"" Apr 13 19:28:07.724341 containerd[1736]: time="2026-04-13T19:28:07.724322908Z" level=info msg="Ensure that sandbox 17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494 in task-service has been cleanup successfully" Apr 13 19:28:07.725054 kubelet[3245]: I0413 19:28:07.724989 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:07.726354 containerd[1736]: time="2026-04-13T19:28:07.726227183Z" level=info msg="StopPodSandbox for \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\"" Apr 13 19:28:07.727171 containerd[1736]: time="2026-04-13T19:28:07.726986421Z" level=info msg="Ensure that sandbox 85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c in task-service has been cleanup successfully" Apr 13 19:28:07.727976 kubelet[3245]: I0413 19:28:07.727954 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:07.732133 containerd[1736]: time="2026-04-13T19:28:07.730491292Z" level=info msg="StopPodSandbox for 
\"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\"" Apr 13 19:28:07.732133 containerd[1736]: time="2026-04-13T19:28:07.731925169Z" level=info msg="Ensure that sandbox ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192 in task-service has been cleanup successfully" Apr 13 19:28:07.746857 kubelet[3245]: I0413 19:28:07.746819 3245 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:07.750783 containerd[1736]: time="2026-04-13T19:28:07.750750280Z" level=info msg="StopPodSandbox for \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\"" Apr 13 19:28:07.752145 containerd[1736]: time="2026-04-13T19:28:07.752120837Z" level=info msg="Ensure that sandbox 8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480 in task-service has been cleanup successfully" Apr 13 19:28:07.793156 kubelet[3245]: I0413 19:28:07.793081 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-wdp5z" podStartSLOduration=2.488889475 podStartE2EDuration="19.793069412s" podCreationTimestamp="2026-04-13 19:27:48 +0000 UTC" firstStartedPulling="2026-04-13 19:27:49.40091554 +0000 UTC m=+22.930207337" lastFinishedPulling="2026-04-13 19:28:06.705095477 +0000 UTC m=+40.234387274" observedRunningTime="2026-04-13 19:28:07.791892455 +0000 UTC m=+41.321184252" watchObservedRunningTime="2026-04-13 19:28:07.793069412 +0000 UTC m=+41.322361209" Apr 13 19:28:07.838279 systemd[1]: run-containerd-runc-k8s.io-1753ef764176dc30fc50d6007512f517545126f8916d793361674a8f39692e4d-runc.ryr1Nh.mount: Deactivated successfully. 
Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.923 [INFO][4443] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.923 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" iface="eth0" netns="/var/run/netns/cni-bc7ca1fc-2356-cb19-853a-f42946417b3f" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.924 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" iface="eth0" netns="/var/run/netns/cni-bc7ca1fc-2356-cb19-853a-f42946417b3f" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.924 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" iface="eth0" netns="/var/run/netns/cni-bc7ca1fc-2356-cb19-853a-f42946417b3f" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.924 [INFO][4443] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:07.925 [INFO][4443] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.028 [INFO][4551] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.030 
[INFO][4551] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.030 [INFO][4551] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.043 [WARNING][4551] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.043 [INFO][4551] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.044 [INFO][4551] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.056005 containerd[1736]: 2026-04-13 19:28:08.052 [INFO][4443] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:08.058161 containerd[1736]: time="2026-04-13T19:28:08.057707534Z" level=info msg="TearDown network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" successfully" Apr 13 19:28:08.058161 containerd[1736]: time="2026-04-13T19:28:08.057737094Z" level=info msg="StopPodSandbox for \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" returns successfully" Apr 13 19:28:08.059277 systemd[1]: run-netns-cni\x2dbc7ca1fc\x2d2356\x2dcb19\x2d853a\x2df42946417b3f.mount: Deactivated successfully. 
Apr 13 19:28:08.069045 containerd[1736]: time="2026-04-13T19:28:08.068719266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-td4q9,Uid:12affd5b-ffcb-44a7-8551-5d3b9b496800,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.965 [INFO][4499] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.968 [INFO][4499] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" iface="eth0" netns="/var/run/netns/cni-ae419fab-bdc4-f4a7-5f23-aa3342415f0f" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.968 [INFO][4499] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" iface="eth0" netns="/var/run/netns/cni-ae419fab-bdc4-f4a7-5f23-aa3342415f0f" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.968 [INFO][4499] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" iface="eth0" netns="/var/run/netns/cni-ae419fab-bdc4-f4a7-5f23-aa3342415f0f" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.968 [INFO][4499] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:07.968 [INFO][4499] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.030 [INFO][4573] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.032 [INFO][4573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.044 [INFO][4573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.068 [WARNING][4573] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.069 [INFO][4573] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.072 [INFO][4573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.084559 containerd[1736]: 2026-04-13 19:28:08.078 [INFO][4499] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:08.087643 containerd[1736]: time="2026-04-13T19:28:08.086007862Z" level=info msg="TearDown network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" successfully" Apr 13 19:28:08.087643 containerd[1736]: time="2026-04-13T19:28:08.086040862Z" level=info msg="StopPodSandbox for \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" returns successfully" Apr 13 19:28:08.088851 systemd[1]: run-netns-cni\x2dae419fab\x2dbdc4\x2df4a7\x2d5f23\x2daa3342415f0f.mount: Deactivated successfully. 
Apr 13 19:28:08.093427 containerd[1736]: time="2026-04-13T19:28:08.093287043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-6hkg4,Uid:d9eae39d-8433-4dd2-b30d-cd79e85825ad,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.932 [INFO][4486] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.933 [INFO][4486] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" iface="eth0" netns="/var/run/netns/cni-3322d0dc-45b9-3314-5125-b60976e17f6d" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.933 [INFO][4486] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" iface="eth0" netns="/var/run/netns/cni-3322d0dc-45b9-3314-5125-b60976e17f6d" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.934 [INFO][4486] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" iface="eth0" netns="/var/run/netns/cni-3322d0dc-45b9-3314-5125-b60976e17f6d" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.934 [INFO][4486] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:07.934 [INFO][4486] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.045 [INFO][4556] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.045 [INFO][4556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.073 [INFO][4556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.095 [WARNING][4556] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.095 [INFO][4556] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.096 [INFO][4556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.099879 containerd[1736]: 2026-04-13 19:28:08.098 [INFO][4486] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:08.101401 containerd[1736]: time="2026-04-13T19:28:08.101372863Z" level=info msg="TearDown network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" successfully" Apr 13 19:28:08.101669 containerd[1736]: time="2026-04-13T19:28:08.101515182Z" level=info msg="StopPodSandbox for \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" returns successfully" Apr 13 19:28:08.108738 containerd[1736]: time="2026-04-13T19:28:08.108628764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dlfq,Uid:4730c9ea-c5c7-441b-b9a2-363112328d61,Namespace:kube-system,Attempt:1,}" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.934 [INFO][4450] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.934 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" iface="eth0" netns="/var/run/netns/cni-b52fd1d7-bbee-b0f6-9bd0-7f8b6320ca9d" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.935 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" iface="eth0" netns="/var/run/netns/cni-b52fd1d7-bbee-b0f6-9bd0-7f8b6320ca9d" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.941 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" iface="eth0" netns="/var/run/netns/cni-b52fd1d7-bbee-b0f6-9bd0-7f8b6320ca9d" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.941 [INFO][4450] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:07.941 [INFO][4450] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.080 [INFO][4563] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.080 [INFO][4563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.096 [INFO][4563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.108 [WARNING][4563] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.108 [INFO][4563] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.109 [INFO][4563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.113868 containerd[1736]: 2026-04-13 19:28:08.112 [INFO][4450] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:08.114279 containerd[1736]: time="2026-04-13T19:28:08.113968550Z" level=info msg="TearDown network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" successfully" Apr 13 19:28:08.114279 containerd[1736]: time="2026-04-13T19:28:08.113994430Z" level=info msg="StopPodSandbox for \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" returns successfully" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.939 [INFO][4476] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.942 [INFO][4476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" iface="eth0" netns="/var/run/netns/cni-4290e3ab-5a7f-06dc-22ab-8cae921cad57" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.944 [INFO][4476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" iface="eth0" netns="/var/run/netns/cni-4290e3ab-5a7f-06dc-22ab-8cae921cad57" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.944 [INFO][4476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" iface="eth0" netns="/var/run/netns/cni-4290e3ab-5a7f-06dc-22ab-8cae921cad57" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.944 [INFO][4476] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:07.944 [INFO][4476] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.083 [INFO][4562] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.083 [INFO][4562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.109 [INFO][4562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.125 [WARNING][4562] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.125 [INFO][4562] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.127 [INFO][4562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.140057 containerd[1736]: 2026-04-13 19:28:08.131 [INFO][4476] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:08.141353 containerd[1736]: time="2026-04-13T19:28:08.141221441Z" level=info msg="TearDown network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" successfully" Apr 13 19:28:08.141353 containerd[1736]: time="2026-04-13T19:28:08.141254121Z" level=info msg="StopPodSandbox for \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" returns successfully" Apr 13 19:28:08.149830 containerd[1736]: time="2026-04-13T19:28:08.149564259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j6g9p,Uid:e05ad226-2b3e-400d-9a42-c9477b96a890,Namespace:kube-system,Attempt:1,}" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.963 [INFO][4511] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.963 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" iface="eth0" netns="/var/run/netns/cni-985c0940-933d-eb7a-afb8-8169faf7afcf" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.964 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" iface="eth0" netns="/var/run/netns/cni-985c0940-933d-eb7a-afb8-8169faf7afcf" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.967 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" iface="eth0" netns="/var/run/netns/cni-985c0940-933d-eb7a-afb8-8169faf7afcf" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.967 [INFO][4511] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:07.967 [INFO][4511] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.089 [INFO][4571] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.090 [INFO][4571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.127 [INFO][4571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.144 [WARNING][4571] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.145 [INFO][4571] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.147 [INFO][4571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.156544 containerd[1736]: 2026-04-13 19:28:08.153 [INFO][4511] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:08.157835 containerd[1736]: time="2026-04-13T19:28:08.157714038Z" level=info msg="TearDown network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" successfully" Apr 13 19:28:08.157835 containerd[1736]: time="2026-04-13T19:28:08.157741438Z" level=info msg="StopPodSandbox for \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" returns successfully" Apr 13 19:28:08.165526 containerd[1736]: time="2026-04-13T19:28:08.165275819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7466fddb-kzt5z,Uid:d41669ea-f411-4baa-9026-f6667bc038e5,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" iface="eth0" netns="/var/run/netns/cni-6cc0c691-5a1f-cb2c-32f5-7e9cc5db61bb" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" iface="eth0" netns="/var/run/netns/cni-6cc0c691-5a1f-cb2c-32f5-7e9cc5db61bb" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" iface="eth0" netns="/var/run/netns/cni-6cc0c691-5a1f-cb2c-32f5-7e9cc5db61bb" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:07.974 [INFO][4492] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.093 [INFO][4579] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.094 [INFO][4579] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.147 [INFO][4579] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.166 [WARNING][4579] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.167 [INFO][4579] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.168 [INFO][4579] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.173659 containerd[1736]: 2026-04-13 19:28:08.172 [INFO][4492] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:08.175282 containerd[1736]: time="2026-04-13T19:28:08.173760797Z" level=info msg="TearDown network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" successfully" Apr 13 19:28:08.175282 containerd[1736]: time="2026-04-13T19:28:08.173784677Z" level=info msg="StopPodSandbox for \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" returns successfully" Apr 13 19:28:08.185611 containerd[1736]: time="2026-04-13T19:28:08.185379488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-p57hc,Uid:67679312-cc73-499f-abbf-6475bd30ecd4,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:08.241090 kubelet[3245]: I0413 19:28:08.240526 3245 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-backend-key-pair\") pod \"ef55c60c-7526-45c5-9f34-71ce69563261\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " Apr 13 19:28:08.241090 kubelet[3245]: I0413 19:28:08.240565 3245 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/ef55c60c-7526-45c5-9f34-71ce69563261-kube-api-access-gtsgd\" (UniqueName: \"kubernetes.io/projected/ef55c60c-7526-45c5-9f34-71ce69563261-kube-api-access-gtsgd\") pod \"ef55c60c-7526-45c5-9f34-71ce69563261\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " Apr 13 19:28:08.247393 kubelet[3245]: I0413 19:28:08.247358 3245 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-backend-key-pair" pod "ef55c60c-7526-45c5-9f34-71ce69563261" (UID: "ef55c60c-7526-45c5-9f34-71ce69563261"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:28:08.247485 kubelet[3245]: I0413 19:28:08.247429 3245 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-nginx-config\") pod \"ef55c60c-7526-45c5-9f34-71ce69563261\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " Apr 13 19:28:08.247485 kubelet[3245]: I0413 19:28:08.247464 3245 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-ca-bundle\") pod \"ef55c60c-7526-45c5-9f34-71ce69563261\" (UID: \"ef55c60c-7526-45c5-9f34-71ce69563261\") " Apr 13 19:28:08.247917 kubelet[3245]: I0413 19:28:08.247542 3245 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-backend-key-pair\") on node \"ci-4081.3.7-a-e37b9c2d0c\" DevicePath \"\"" Apr 13 19:28:08.247917 kubelet[3245]: I0413 19:28:08.247768 3245 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef55c60c-7526-45c5-9f34-71ce69563261-kube-api-access-gtsgd" pod "ef55c60c-7526-45c5-9f34-71ce69563261" (UID: "ef55c60c-7526-45c5-9f34-71ce69563261"). InnerVolumeSpecName "kube-api-access-gtsgd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:28:08.247917 kubelet[3245]: I0413 19:28:08.247860 3245 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-ca-bundle" pod "ef55c60c-7526-45c5-9f34-71ce69563261" (UID: "ef55c60c-7526-45c5-9f34-71ce69563261"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:28:08.248436 kubelet[3245]: I0413 19:28:08.248373 3245 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-nginx-config" pod "ef55c60c-7526-45c5-9f34-71ce69563261" (UID: "ef55c60c-7526-45c5-9f34-71ce69563261"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:28:08.330747 systemd-networkd[1363]: cali5b8fc247ce7: Link UP Apr 13 19:28:08.331444 systemd-networkd[1363]: cali5b8fc247ce7: Gained carrier Apr 13 19:28:08.348625 kubelet[3245]: I0413 19:28:08.347816 3245 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtsgd\" (UniqueName: \"kubernetes.io/projected/ef55c60c-7526-45c5-9f34-71ce69563261-kube-api-access-gtsgd\") on node \"ci-4081.3.7-a-e37b9c2d0c\" DevicePath \"\"" Apr 13 19:28:08.348625 kubelet[3245]: I0413 19:28:08.347846 3245 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-nginx-config\") on node \"ci-4081.3.7-a-e37b9c2d0c\" DevicePath \"\"" Apr 13 19:28:08.348625 kubelet[3245]: I0413 19:28:08.347855 3245 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef55c60c-7526-45c5-9f34-71ce69563261-whisker-ca-bundle\") on node \"ci-4081.3.7-a-e37b9c2d0c\" DevicePath \"\"" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.156 [ERROR][4599] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.174 [INFO][4599] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0 
calico-apiserver-774669dd44- calico-system 12affd5b-ffcb-44a7-8551-5d3b9b496800 881 0 2026-04-13 19:27:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:774669dd44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c calico-apiserver-774669dd44-td4q9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5b8fc247ce7 [] [] }} ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.174 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.205 [INFO][4613] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" HandleID="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.221 [INFO][4613] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" HandleID="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002734d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"calico-apiserver-774669dd44-td4q9", "timestamp":"2026-04-13 19:28:08.205651836 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a71e0)} Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.221 [INFO][4613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.221 [INFO][4613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.221 [INFO][4613] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.223 [INFO][4613] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.226 [INFO][4613] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.233 [INFO][4613] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.234 [INFO][4613] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.236 [INFO][4613] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.236 [INFO][4613] ipam/ipam.go 
1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.237 [INFO][4613] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8 Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.248 [INFO][4613] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.255 [INFO][4613] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.129/26] block=192.168.26.128/26 handle="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.255 [INFO][4613] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.129/26] handle="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.255 [INFO][4613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:08.372626 containerd[1736]: 2026-04-13 19:28:08.255 [INFO][4613] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.129/26] IPv6=[] ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" HandleID="k8s-pod-network.4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.258 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"12affd5b-ffcb-44a7-8551-5d3b9b496800", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"calico-apiserver-774669dd44-td4q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b8fc247ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.258 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.129/32] ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.258 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b8fc247ce7 ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.331 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.331 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"12affd5b-ffcb-44a7-8551-5d3b9b496800", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8", Pod:"calico-apiserver-774669dd44-td4q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b8fc247ce7", MAC:"f2:21:f4:ae:4d:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.373167 containerd[1736]: 2026-04-13 19:28:08.359 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8" Namespace="calico-system" Pod="calico-apiserver-774669dd44-td4q9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:08.429572 systemd-networkd[1363]: calif4cb7babb14: Link UP Apr 13 19:28:08.431438 
systemd-networkd[1363]: calif4cb7babb14: Gained carrier Apr 13 19:28:08.451228 containerd[1736]: time="2026-04-13T19:28:08.449960970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:08.451228 containerd[1736]: time="2026-04-13T19:28:08.450010250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:08.451228 containerd[1736]: time="2026-04-13T19:28:08.450033450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.451228 containerd[1736]: time="2026-04-13T19:28:08.450109970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.218 [ERROR][4617] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.237 [INFO][4617] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0 calico-apiserver-774669dd44- calico-system d9eae39d-8433-4dd2-b30d-cd79e85825ad 886 0 2026-04-13 19:27:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:774669dd44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c calico-apiserver-774669dd44-6hkg4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif4cb7babb14 [] [] }} 
ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.237 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.281 [INFO][4634] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" HandleID="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.297 [INFO][4634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" HandleID="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"calico-apiserver-774669dd44-6hkg4", "timestamp":"2026-04-13 19:28:08.2819538 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000329080)} Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.298 
[INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.298 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.298 [INFO][4634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.324 [INFO][4634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.346 [INFO][4634] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.365 [INFO][4634] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.369 [INFO][4634] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.374 [INFO][4634] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.374 [INFO][4634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.375 [INFO][4634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.385 [INFO][4634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 
handle="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.406 [INFO][4634] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.130/26] block=192.168.26.128/26 handle="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.407 [INFO][4634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.130/26] handle="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.407 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.468393 containerd[1736]: 2026-04-13 19:28:08.407 [INFO][4634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.130/26] IPv6=[] ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" HandleID="k8s-pod-network.0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.415 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"d9eae39d-8433-4dd2-b30d-cd79e85825ad", ResourceVersion:"886", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"calico-apiserver-774669dd44-6hkg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif4cb7babb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.416 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.130/32] ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.416 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4cb7babb14 ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.434 [INFO][4617] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.435 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"d9eae39d-8433-4dd2-b30d-cd79e85825ad", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba", Pod:"calico-apiserver-774669dd44-6hkg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif4cb7babb14", MAC:"92:5c:3a:c9:da:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.468989 containerd[1736]: 2026-04-13 19:28:08.457 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba" Namespace="calico-system" Pod="calico-apiserver-774669dd44-6hkg4" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:08.495007 systemd[1]: Started cri-containerd-4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8.scope - libcontainer container 4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8. Apr 13 19:28:08.551309 systemd[1]: run-netns-cni\x2d6cc0c691\x2d5a1f\x2dcb2c\x2d32f5\x2d7e9cc5db61bb.mount: Deactivated successfully. Apr 13 19:28:08.553004 systemd[1]: run-netns-cni\x2db52fd1d7\x2dbbee\x2db0f6\x2d9bd0\x2d7f8b6320ca9d.mount: Deactivated successfully. Apr 13 19:28:08.553060 systemd[1]: run-netns-cni\x2d4290e3ab\x2d5a7f\x2d06dc\x2d22ab\x2d8cae921cad57.mount: Deactivated successfully. Apr 13 19:28:08.553104 systemd[1]: run-netns-cni\x2d985c0940\x2d933d\x2deb7a\x2dafb8\x2d8169faf7afcf.mount: Deactivated successfully. Apr 13 19:28:08.553144 systemd[1]: run-netns-cni\x2d3322d0dc\x2d45b9\x2d3314\x2d5125\x2db60976e17f6d.mount: Deactivated successfully. Apr 13 19:28:08.553190 systemd[1]: var-lib-kubelet-pods-ef55c60c\x2d7526\x2d45c5\x2d9f34\x2d71ce69563261-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtsgd.mount: Deactivated successfully. Apr 13 19:28:08.553243 systemd[1]: var-lib-kubelet-pods-ef55c60c\x2d7526\x2d45c5\x2d9f34\x2d71ce69563261-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 19:28:08.569597 containerd[1736]: time="2026-04-13T19:28:08.566746471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:08.569597 containerd[1736]: time="2026-04-13T19:28:08.566800111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:08.569597 containerd[1736]: time="2026-04-13T19:28:08.566810551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.569597 containerd[1736]: time="2026-04-13T19:28:08.566873711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.601313 systemd-networkd[1363]: calib4c56f97796: Link UP Apr 13 19:28:08.605767 systemd-networkd[1363]: calib4c56f97796: Gained carrier Apr 13 19:28:08.606234 systemd[1]: Started cri-containerd-0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba.scope - libcontainer container 0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba. Apr 13 19:28:08.619002 systemd[1]: Removed slice kubepods-besteffort-podef55c60c_7526_45c5_9f34_71ce69563261.slice - libcontainer container kubepods-besteffort-podef55c60c_7526_45c5_9f34_71ce69563261.slice. 
Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.294 [ERROR][4638] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.309 [INFO][4638] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0 coredns-7d764666f9- kube-system 4730c9ea-c5c7-441b-b9a2-363112328d61 884 0 2026-04-13 19:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c coredns-7d764666f9-8dlfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4c56f97796 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.309 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.436 [INFO][4654] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" HandleID="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" 
Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.466 [INFO][4654] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" HandleID="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"coredns-7d764666f9-8dlfq", "timestamp":"2026-04-13 19:28:08.436542165 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000572dc0)} Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.466 [INFO][4654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.466 [INFO][4654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.466 [INFO][4654] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.472 [INFO][4654] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.477 [INFO][4654] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.501 [INFO][4654] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.505 [INFO][4654] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.509 [INFO][4654] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.509 [INFO][4654] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.512 [INFO][4654] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81 Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.521 [INFO][4654] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.548 [INFO][4654] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.131/26] block=192.168.26.128/26 handle="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.548 [INFO][4654] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.131/26] handle="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.548 [INFO][4654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.649943 containerd[1736]: 2026-04-13 19:28:08.548 [INFO][4654] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.131/26] IPv6=[] ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" HandleID="k8s-pod-network.74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.650544 containerd[1736]: 2026-04-13 19:28:08.563 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4730c9ea-c5c7-441b-b9a2-363112328d61", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"coredns-7d764666f9-8dlfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c56f97796", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.650544 containerd[1736]: 2026-04-13 19:28:08.563 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.131/32] ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.650544 containerd[1736]: 2026-04-13 19:28:08.563 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4c56f97796 
ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.650544 containerd[1736]: 2026-04-13 19:28:08.617 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.650544 containerd[1736]: 2026-04-13 19:28:08.618 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4730c9ea-c5c7-441b-b9a2-363112328d61", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81", 
Pod:"coredns-7d764666f9-8dlfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c56f97796", MAC:"9a:0d:63:b2:85:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.650737 containerd[1736]: 2026-04-13 19:28:08.645 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81" Namespace="kube-system" Pod="coredns-7d764666f9-8dlfq" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:08.678577 containerd[1736]: time="2026-04-13T19:28:08.678527265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-td4q9,Uid:12affd5b-ffcb-44a7-8551-5d3b9b496800,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8\"" Apr 13 19:28:08.682197 containerd[1736]: time="2026-04-13T19:28:08.682150696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 
19:28:08.727527 systemd-networkd[1363]: calice81d6e2e51: Link UP Apr 13 19:28:08.733444 systemd-networkd[1363]: calice81d6e2e51: Gained carrier Apr 13 19:28:08.748389 containerd[1736]: time="2026-04-13T19:28:08.746700971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:08.748389 containerd[1736]: time="2026-04-13T19:28:08.746758530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:08.748389 containerd[1736]: time="2026-04-13T19:28:08.746773690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.748389 containerd[1736]: time="2026-04-13T19:28:08.746858890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.782536 containerd[1736]: time="2026-04-13T19:28:08.782026160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774669dd44-6hkg4,Uid:d9eae39d-8433-4dd2-b30d-cd79e85825ad,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba\"" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.393 [ERROR][4658] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.414 [INFO][4658] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0 coredns-7d764666f9- kube-system e05ad226-2b3e-400d-9a42-c9477b96a890 883 0 2026-04-13 19:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c coredns-7d764666f9-j6g9p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice81d6e2e51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.415 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.541 [INFO][4710] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" HandleID="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.628 [INFO][4710] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" HandleID="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005fc080), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"coredns-7d764666f9-j6g9p", "timestamp":"2026-04-13 19:28:08.541965975 +0000 UTC"}, 
Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000618000)} Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.628 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.628 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.628 [INFO][4710] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.631 [INFO][4710] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.663 [INFO][4710] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.679 [INFO][4710] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.685 [INFO][4710] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.693 [INFO][4710] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.693 [INFO][4710] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 
containerd[1736]: 2026-04-13 19:28:08.694 [INFO][4710] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.701 [INFO][4710] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.716 [INFO][4710] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.132/26] block=192.168.26.128/26 handle="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.716 [INFO][4710] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.132/26] handle="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.716 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:28:08.793290 containerd[1736]: 2026-04-13 19:28:08.716 [INFO][4710] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.132/26] IPv6=[] ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" HandleID="k8s-pod-network.491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793858 containerd[1736]: 2026-04-13 19:28:08.720 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e05ad226-2b3e-400d-9a42-c9477b96a890", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"coredns-7d764666f9-j6g9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calice81d6e2e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.793858 containerd[1736]: 2026-04-13 19:28:08.721 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.132/32] ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793858 containerd[1736]: 2026-04-13 19:28:08.721 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice81d6e2e51 ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793858 containerd[1736]: 2026-04-13 19:28:08.735 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.793858 
containerd[1736]: 2026-04-13 19:28:08.742 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e05ad226-2b3e-400d-9a42-c9477b96a890", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc", Pod:"coredns-7d764666f9-j6g9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice81d6e2e51", MAC:"12:58:b3:be:f5:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.794028 containerd[1736]: 2026-04-13 19:28:08.771 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc" Namespace="kube-system" Pod="coredns-7d764666f9-j6g9p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:08.814920 systemd[1]: Started cri-containerd-74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81.scope - libcontainer container 74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81. Apr 13 19:28:08.845069 containerd[1736]: time="2026-04-13T19:28:08.844142801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:08.845069 containerd[1736]: time="2026-04-13T19:28:08.844184321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:08.845069 containerd[1736]: time="2026-04-13T19:28:08.844194561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.845069 containerd[1736]: time="2026-04-13T19:28:08.844253641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:08.880508 systemd-networkd[1363]: calia2caf85e152: Link UP Apr 13 19:28:08.883409 systemd-networkd[1363]: calia2caf85e152: Gained carrier Apr 13 19:28:08.928615 systemd[1]: Started cri-containerd-491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc.scope - libcontainer container 491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc. Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.441 [ERROR][4668] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.475 [INFO][4668] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0 calico-kube-controllers-7f7466fddb- calico-system d41669ea-f411-4baa-9026-f6667bc038e5 885 0 2026-04-13 19:27:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f7466fddb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c calico-kube-controllers-7f7466fddb-kzt5z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2caf85e152 [] [] }} ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.475 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" 
Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.644 [INFO][4745] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" HandleID="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.672 [INFO][4745] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" HandleID="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"calico-kube-controllers-7f7466fddb-kzt5z", "timestamp":"2026-04-13 19:28:08.644426392 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400011cc60)} Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.672 [INFO][4745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.719 [INFO][4745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.719 [INFO][4745] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.732 [INFO][4745] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.759 [INFO][4745] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.796 [INFO][4745] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.802 [INFO][4745] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.806 [INFO][4745] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.807 [INFO][4745] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.811 [INFO][4745] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653 Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.824 [INFO][4745] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.850 [INFO][4745] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.133/26] block=192.168.26.128/26 handle="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.868 [INFO][4745] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.133/26] handle="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.868 [INFO][4745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:08.933622 containerd[1736]: 2026-04-13 19:28:08.870 [INFO][4745] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.133/26] IPv6=[] ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" HandleID="k8s-pod-network.a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.875 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0", GenerateName:"calico-kube-controllers-7f7466fddb-", Namespace:"calico-system", SelfLink:"", UID:"d41669ea-f411-4baa-9026-f6667bc038e5", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7466fddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"calico-kube-controllers-7f7466fddb-kzt5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2caf85e152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.875 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.133/32] ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.875 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2caf85e152 ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.883 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.888 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0", GenerateName:"calico-kube-controllers-7f7466fddb-", Namespace:"calico-system", SelfLink:"", UID:"d41669ea-f411-4baa-9026-f6667bc038e5", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7466fddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653", Pod:"calico-kube-controllers-7f7466fddb-kzt5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2caf85e152", MAC:"96:5a:fc:9e:8d:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:08.934167 containerd[1736]: 2026-04-13 19:28:08.916 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653" Namespace="calico-system" Pod="calico-kube-controllers-7f7466fddb-kzt5z" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:08.959971 systemd[1]: Created slice kubepods-besteffort-pod686a2b36_222e_4e8a_906f_a1cad0b8d567.slice - libcontainer container kubepods-besteffort-pod686a2b36_222e_4e8a_906f_a1cad0b8d567.slice. Apr 13 19:28:08.971343 containerd[1736]: time="2026-04-13T19:28:08.970835357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dlfq,Uid:4730c9ea-c5c7-441b-b9a2-363112328d61,Namespace:kube-system,Attempt:1,} returns sandbox id \"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81\"" Apr 13 19:28:08.988798 containerd[1736]: time="2026-04-13T19:28:08.988743871Z" level=info msg="CreateContainer within sandbox \"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:28:09.046396 systemd-networkd[1363]: caliedf3d30b689: Link UP Apr 13 19:28:09.047800 systemd-networkd[1363]: caliedf3d30b689: Gained carrier Apr 13 19:28:09.059793 kubelet[3245]: I0413 19:28:09.059744 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcswm\" (UniqueName: \"kubernetes.io/projected/686a2b36-222e-4e8a-906f-a1cad0b8d567-kube-api-access-tcswm\") pod \"whisker-6ccc8ff85c-kzx6p\" (UID: 
\"686a2b36-222e-4e8a-906f-a1cad0b8d567\") " pod="calico-system/whisker-6ccc8ff85c-kzx6p" Apr 13 19:28:09.059793 kubelet[3245]: I0413 19:28:09.059788 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/686a2b36-222e-4e8a-906f-a1cad0b8d567-whisker-backend-key-pair\") pod \"whisker-6ccc8ff85c-kzx6p\" (UID: \"686a2b36-222e-4e8a-906f-a1cad0b8d567\") " pod="calico-system/whisker-6ccc8ff85c-kzx6p" Apr 13 19:28:09.060648 kubelet[3245]: I0413 19:28:09.059813 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/686a2b36-222e-4e8a-906f-a1cad0b8d567-nginx-config\") pod \"whisker-6ccc8ff85c-kzx6p\" (UID: \"686a2b36-222e-4e8a-906f-a1cad0b8d567\") " pod="calico-system/whisker-6ccc8ff85c-kzx6p" Apr 13 19:28:09.060648 kubelet[3245]: I0413 19:28:09.059834 3245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686a2b36-222e-4e8a-906f-a1cad0b8d567-whisker-ca-bundle\") pod \"whisker-6ccc8ff85c-kzx6p\" (UID: \"686a2b36-222e-4e8a-906f-a1cad0b8d567\") " pod="calico-system/whisker-6ccc8ff85c-kzx6p" Apr 13 19:28:09.061962 containerd[1736]: time="2026-04-13T19:28:09.061020926Z" level=info msg="CreateContainer within sandbox \"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e761012c3cc26edaf2c9c4bf0f038f4ba553ebeab5b17dc82c77f7a8a7127ad\"" Apr 13 19:28:09.072096 containerd[1736]: time="2026-04-13T19:28:09.071971898Z" level=info msg="StartContainer for \"6e761012c3cc26edaf2c9c4bf0f038f4ba553ebeab5b17dc82c77f7a8a7127ad\"" Apr 13 19:28:09.088353 containerd[1736]: time="2026-04-13T19:28:09.088155096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-j6g9p,Uid:e05ad226-2b3e-400d-9a42-c9477b96a890,Namespace:kube-system,Attempt:1,} returns sandbox id \"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc\"" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.446 [ERROR][4682] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.480 [INFO][4682] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0 goldmane-9f7667bb8- calico-system 67679312-cc73-499f-abbf-6475bd30ecd4 887 0 2026-04-13 19:27:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c goldmane-9f7667bb8-p57hc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliedf3d30b689 [] [] }} ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.486 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.634 [INFO][4760] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" 
HandleID="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.686 [INFO][4760] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" HandleID="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"goldmane-9f7667bb8-p57hc", "timestamp":"2026-04-13 19:28:08.634196979 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000283600)} Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.686 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.869 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.869 [INFO][4760] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.891 [INFO][4760] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.899 [INFO][4760] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.920 [INFO][4760] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.926 [INFO][4760] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.952 [INFO][4760] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.952 [INFO][4760] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:08.983 [INFO][4760] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:09.007 [INFO][4760] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:09.027 [INFO][4760] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.134/26] block=192.168.26.128/26 handle="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:09.028 [INFO][4760] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.134/26] handle="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:09.028 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:09.101499 containerd[1736]: 2026-04-13 19:28:09.028 [INFO][4760] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.134/26] IPv6=[] ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" HandleID="k8s-pod-network.a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.035 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"67679312-cc73-499f-abbf-6475bd30ecd4", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"goldmane-9f7667bb8-p57hc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedf3d30b689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.036 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.134/32] ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.036 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedf3d30b689 ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.051 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.054 [INFO][4682] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"67679312-cc73-499f-abbf-6475bd30ecd4", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf", Pod:"goldmane-9f7667bb8-p57hc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedf3d30b689", MAC:"7e:ab:3b:4c:c9:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:09.102082 containerd[1736]: 2026-04-13 19:28:09.083 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf" Namespace="calico-system" Pod="goldmane-9f7667bb8-p57hc" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:09.105502 containerd[1736]: time="2026-04-13T19:28:09.104661534Z" level=info msg="CreateContainer within sandbox \"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:28:09.119302 containerd[1736]: time="2026-04-13T19:28:09.116990223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:09.119302 containerd[1736]: time="2026-04-13T19:28:09.117144742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:09.119302 containerd[1736]: time="2026-04-13T19:28:09.117163062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.119302 containerd[1736]: time="2026-04-13T19:28:09.117676261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.172737 systemd[1]: Started cri-containerd-6e761012c3cc26edaf2c9c4bf0f038f4ba553ebeab5b17dc82c77f7a8a7127ad.scope - libcontainer container 6e761012c3cc26edaf2c9c4bf0f038f4ba553ebeab5b17dc82c77f7a8a7127ad. Apr 13 19:28:09.190869 systemd[1]: Started cri-containerd-a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653.scope - libcontainer container a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653. Apr 13 19:28:09.195610 containerd[1736]: time="2026-04-13T19:28:09.194887623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:09.195610 containerd[1736]: time="2026-04-13T19:28:09.195037503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:09.195610 containerd[1736]: time="2026-04-13T19:28:09.195056343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.195610 containerd[1736]: time="2026-04-13T19:28:09.195147303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.218071 systemd[1]: Started cri-containerd-a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf.scope - libcontainer container a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf. Apr 13 19:28:09.222373 containerd[1736]: time="2026-04-13T19:28:09.222331153Z" level=info msg="CreateContainer within sandbox \"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dee43762cd6b5cd16e7809f98463321c2a23d5461c8f2541b3615a02bc391e66\"" Apr 13 19:28:09.230815 containerd[1736]: time="2026-04-13T19:28:09.230773971Z" level=info msg="StartContainer for \"dee43762cd6b5cd16e7809f98463321c2a23d5461c8f2541b3615a02bc391e66\"" Apr 13 19:28:09.272686 containerd[1736]: time="2026-04-13T19:28:09.272646024Z" level=info msg="StartContainer for \"6e761012c3cc26edaf2c9c4bf0f038f4ba553ebeab5b17dc82c77f7a8a7127ad\" returns successfully" Apr 13 19:28:09.274478 containerd[1736]: time="2026-04-13T19:28:09.274449179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccc8ff85c-kzx6p,Uid:686a2b36-222e-4e8a-906f-a1cad0b8d567,Namespace:calico-system,Attempt:0,}" Apr 13 19:28:09.310621 containerd[1736]: time="2026-04-13T19:28:09.310145368Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-p57hc,Uid:67679312-cc73-499f-abbf-6475bd30ecd4,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf\"" Apr 13 19:28:09.320559 systemd[1]: Started cri-containerd-dee43762cd6b5cd16e7809f98463321c2a23d5461c8f2541b3615a02bc391e66.scope - libcontainer container dee43762cd6b5cd16e7809f98463321c2a23d5461c8f2541b3615a02bc391e66. Apr 13 19:28:09.346763 containerd[1736]: time="2026-04-13T19:28:09.346722594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7466fddb-kzt5z,Uid:d41669ea-f411-4baa-9026-f6667bc038e5,Namespace:calico-system,Attempt:1,} returns sandbox id \"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653\"" Apr 13 19:28:09.432806 containerd[1736]: time="2026-04-13T19:28:09.432687895Z" level=info msg="StartContainer for \"dee43762cd6b5cd16e7809f98463321c2a23d5461c8f2541b3615a02bc391e66\" returns successfully" Apr 13 19:28:09.544627 systemd[1]: run-containerd-runc-k8s.io-74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81-runc.4iAP11.mount: Deactivated successfully. 
Apr 13 19:28:09.548984 systemd-networkd[1363]: cali2e0162ef5e9: Link UP Apr 13 19:28:09.549120 systemd-networkd[1363]: cali2e0162ef5e9: Gained carrier Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.414 [ERROR][5177] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.437 [INFO][5177] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0 whisker-6ccc8ff85c- calico-system 686a2b36-222e-4e8a-906f-a1cad0b8d567 922 0 2026-04-13 19:28:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ccc8ff85c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c whisker-6ccc8ff85c-kzx6p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e0162ef5e9 [] [] }} ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.437 [INFO][5177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.473 [INFO][5205] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" 
HandleID="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.489 [INFO][5205] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" HandleID="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"whisker-6ccc8ff85c-kzx6p", "timestamp":"2026-04-13 19:28:09.473150032 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001862c0)} Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.489 [INFO][5205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.490 [INFO][5205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.490 [INFO][5205] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.492 [INFO][5205] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.501 [INFO][5205] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.506 [INFO][5205] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.508 [INFO][5205] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.510 [INFO][5205] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.510 [INFO][5205] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.512 [INFO][5205] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541 Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.521 [INFO][5205] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.536 [INFO][5205] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.135/26] block=192.168.26.128/26 handle="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.536 [INFO][5205] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.135/26] handle="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.536 [INFO][5205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:09.573727 containerd[1736]: 2026-04-13 19:28:09.536 [INFO][5205] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.135/26] IPv6=[] ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" HandleID="k8s-pod-network.567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.540 [INFO][5177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0", GenerateName:"whisker-6ccc8ff85c-", Namespace:"calico-system", SelfLink:"", UID:"686a2b36-222e-4e8a-906f-a1cad0b8d567", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ccc8ff85c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"whisker-6ccc8ff85c-kzx6p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e0162ef5e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.540 [INFO][5177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.135/32] ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.540 [INFO][5177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e0162ef5e9 ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.547 [INFO][5177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.547 [INFO][5177] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0", GenerateName:"whisker-6ccc8ff85c-", Namespace:"calico-system", SelfLink:"", UID:"686a2b36-222e-4e8a-906f-a1cad0b8d567", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ccc8ff85c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541", Pod:"whisker-6ccc8ff85c-kzx6p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e0162ef5e9", MAC:"ea:e6:b7:46:b3:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:09.574962 containerd[1736]: 2026-04-13 19:28:09.569 [INFO][5177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541" Namespace="calico-system" Pod="whisker-6ccc8ff85c-kzx6p" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--6ccc8ff85c--kzx6p-eth0" Apr 13 19:28:09.606680 containerd[1736]: time="2026-04-13T19:28:09.606513651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:09.606680 containerd[1736]: time="2026-04-13T19:28:09.606574651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:09.606847 containerd[1736]: time="2026-04-13T19:28:09.606710890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.606914 containerd[1736]: time="2026-04-13T19:28:09.606883570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:09.633770 systemd[1]: Started cri-containerd-567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541.scope - libcontainer container 567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541. 
Apr 13 19:28:09.665459 containerd[1736]: time="2026-04-13T19:28:09.665339301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccc8ff85c-kzx6p,Uid:686a2b36-222e-4e8a-906f-a1cad0b8d567,Namespace:calico-system,Attempt:0,} returns sandbox id \"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541\"" Apr 13 19:28:09.731728 systemd-networkd[1363]: cali5b8fc247ce7: Gained IPv6LL Apr 13 19:28:09.794631 kubelet[3245]: I0413 19:28:09.794548 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-j6g9p" podStartSLOduration=37.794535251 podStartE2EDuration="37.794535251s" podCreationTimestamp="2026-04-13 19:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:28:09.793727613 +0000 UTC m=+43.323019370" watchObservedRunningTime="2026-04-13 19:28:09.794535251 +0000 UTC m=+43.323827048" Apr 13 19:28:09.833466 kubelet[3245]: I0413 19:28:09.833405 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8dlfq" podStartSLOduration=37.833391791 podStartE2EDuration="37.833391791s" podCreationTimestamp="2026-04-13 19:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:28:09.81443152 +0000 UTC m=+43.343723317" watchObservedRunningTime="2026-04-13 19:28:09.833391791 +0000 UTC m=+43.362683588" Apr 13 19:28:09.923795 systemd-networkd[1363]: calia2caf85e152: Gained IPv6LL Apr 13 19:28:09.924043 systemd-networkd[1363]: calice81d6e2e51: Gained IPv6LL Apr 13 19:28:09.987729 systemd-networkd[1363]: calib4c56f97796: Gained IPv6LL Apr 13 19:28:10.051774 systemd-networkd[1363]: calif4cb7babb14: Gained IPv6LL Apr 13 19:28:10.616451 kubelet[3245]: I0413 19:28:10.615876 3245 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" 
podUID="ef55c60c-7526-45c5-9f34-71ce69563261" path="/var/lib/kubelet/pods/ef55c60c-7526-45c5-9f34-71ce69563261/volumes" Apr 13 19:28:10.947936 systemd-networkd[1363]: caliedf3d30b689: Gained IPv6LL Apr 13 19:28:11.267827 systemd-networkd[1363]: cali2e0162ef5e9: Gained IPv6LL Apr 13 19:28:11.367614 containerd[1736]: time="2026-04-13T19:28:11.367547193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:11.371673 containerd[1736]: time="2026-04-13T19:28:11.371640903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:28:11.377522 containerd[1736]: time="2026-04-13T19:28:11.376432771Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:11.384573 containerd[1736]: time="2026-04-13T19:28:11.384521150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:11.385509 containerd[1736]: time="2026-04-13T19:28:11.385481188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.703023173s" Apr 13 19:28:11.385623 containerd[1736]: time="2026-04-13T19:28:11.385608307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:28:11.386558 containerd[1736]: 
time="2026-04-13T19:28:11.386529105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:28:11.395573 containerd[1736]: time="2026-04-13T19:28:11.395531802Z" level=info msg="CreateContainer within sandbox \"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:28:11.459446 containerd[1736]: time="2026-04-13T19:28:11.459395639Z" level=info msg="CreateContainer within sandbox \"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482\"" Apr 13 19:28:11.461147 containerd[1736]: time="2026-04-13T19:28:11.460049397Z" level=info msg="StartContainer for \"ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482\"" Apr 13 19:28:11.489790 systemd[1]: run-containerd-runc-k8s.io-ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482-runc.PteVBT.mount: Deactivated successfully. Apr 13 19:28:11.504788 systemd[1]: Started cri-containerd-ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482.scope - libcontainer container ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482. 
Apr 13 19:28:11.669943 containerd[1736]: time="2026-04-13T19:28:11.669828541Z" level=info msg="StartContainer for \"ec36f7da16b453374cf27308ad80a9f7868e1c31dccc9671a650d8cdadd48482\" returns successfully" Apr 13 19:28:11.789977 containerd[1736]: time="2026-04-13T19:28:11.789053477Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:11.806968 containerd[1736]: time="2026-04-13T19:28:11.806914391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 19:28:11.809573 containerd[1736]: time="2026-04-13T19:28:11.809533545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 422.96724ms" Apr 13 19:28:11.809739 containerd[1736]: time="2026-04-13T19:28:11.809720944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:28:11.812755 containerd[1736]: time="2026-04-13T19:28:11.812730776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:28:11.834621 kubelet[3245]: I0413 19:28:11.834547 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-774669dd44-td4q9" podStartSLOduration=24.128926315 podStartE2EDuration="26.834533601s" podCreationTimestamp="2026-04-13 19:27:45 +0000 UTC" firstStartedPulling="2026-04-13 19:28:08.680809859 +0000 UTC m=+42.210101656" lastFinishedPulling="2026-04-13 19:28:11.386417145 +0000 UTC m=+44.915708942" observedRunningTime="2026-04-13 19:28:11.834017202 +0000 UTC 
m=+45.363309039" watchObservedRunningTime="2026-04-13 19:28:11.834533601 +0000 UTC m=+45.363825358" Apr 13 19:28:11.860342 containerd[1736]: time="2026-04-13T19:28:11.859041978Z" level=info msg="CreateContainer within sandbox \"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:28:11.915354 containerd[1736]: time="2026-04-13T19:28:11.915302714Z" level=info msg="CreateContainer within sandbox \"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf343e8f7c2147a34297cecefb2e517b2a7efce820909210c0dcbfe1654ecfd1\"" Apr 13 19:28:11.916118 containerd[1736]: time="2026-04-13T19:28:11.916092392Z" level=info msg="StartContainer for \"cf343e8f7c2147a34297cecefb2e517b2a7efce820909210c0dcbfe1654ecfd1\"" Apr 13 19:28:11.944732 systemd[1]: Started cri-containerd-cf343e8f7c2147a34297cecefb2e517b2a7efce820909210c0dcbfe1654ecfd1.scope - libcontainer container cf343e8f7c2147a34297cecefb2e517b2a7efce820909210c0dcbfe1654ecfd1. Apr 13 19:28:11.987951 containerd[1736]: time="2026-04-13T19:28:11.987902929Z" level=info msg="StartContainer for \"cf343e8f7c2147a34297cecefb2e517b2a7efce820909210c0dcbfe1654ecfd1\" returns successfully" Apr 13 19:28:12.433722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747118640.mount: Deactivated successfully. 
Apr 13 19:28:12.804545 kubelet[3245]: I0413 19:28:12.804446 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:13.770316 kubelet[3245]: I0413 19:28:13.769984 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:13.801601 kubelet[3245]: I0413 19:28:13.799601 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-774669dd44-6hkg4" podStartSLOduration=25.777240464 podStartE2EDuration="28.799570622s" podCreationTimestamp="2026-04-13 19:27:45 +0000 UTC" firstStartedPulling="2026-04-13 19:28:08.789490181 +0000 UTC m=+42.318781978" lastFinishedPulling="2026-04-13 19:28:11.811820299 +0000 UTC m=+45.341112136" observedRunningTime="2026-04-13 19:28:12.827368225 +0000 UTC m=+46.356660022" watchObservedRunningTime="2026-04-13 19:28:13.799570622 +0000 UTC m=+47.328862499" Apr 13 19:28:13.808733 kubelet[3245]: I0413 19:28:13.807760 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:13.895633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272909322.mount: Deactivated successfully. 
Apr 13 19:28:14.500578 containerd[1736]: time="2026-04-13T19:28:14.500521872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:14.510431 containerd[1736]: time="2026-04-13T19:28:14.510376047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:28:14.510606 kernel: calico-node[5451]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:28:14.518560 containerd[1736]: time="2026-04-13T19:28:14.517008470Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:14.578304 containerd[1736]: time="2026-04-13T19:28:14.578259313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.765383938s" Apr 13 19:28:14.579258 containerd[1736]: time="2026-04-13T19:28:14.578489553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:14.579351 containerd[1736]: time="2026-04-13T19:28:14.578816872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 19:28:14.581334 containerd[1736]: time="2026-04-13T19:28:14.581302226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:28:14.598418 containerd[1736]: 
time="2026-04-13T19:28:14.598262102Z" level=info msg="CreateContainer within sandbox \"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:28:14.660531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406024099.mount: Deactivated successfully. Apr 13 19:28:14.675496 containerd[1736]: time="2026-04-13T19:28:14.675449745Z" level=info msg="CreateContainer within sandbox \"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"66bdedfb599070a68f0932978bf16b2b976311b7c894cf46372de1c03635d1e4\"" Apr 13 19:28:14.677267 containerd[1736]: time="2026-04-13T19:28:14.677157261Z" level=info msg="StartContainer for \"66bdedfb599070a68f0932978bf16b2b976311b7c894cf46372de1c03635d1e4\"" Apr 13 19:28:14.752481 systemd[1]: Started cri-containerd-66bdedfb599070a68f0932978bf16b2b976311b7c894cf46372de1c03635d1e4.scope - libcontainer container 66bdedfb599070a68f0932978bf16b2b976311b7c894cf46372de1c03635d1e4. 
Apr 13 19:28:14.802447 containerd[1736]: time="2026-04-13T19:28:14.802322821Z" level=info msg="StartContainer for \"66bdedfb599070a68f0932978bf16b2b976311b7c894cf46372de1c03635d1e4\" returns successfully" Apr 13 19:28:14.836895 kubelet[3245]: I0413 19:28:14.836836 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-p57hc" podStartSLOduration=24.583797509 podStartE2EDuration="29.836824533s" podCreationTimestamp="2026-04-13 19:27:45 +0000 UTC" firstStartedPulling="2026-04-13 19:28:09.327880683 +0000 UTC m=+42.857172480" lastFinishedPulling="2026-04-13 19:28:14.580907707 +0000 UTC m=+48.110199504" observedRunningTime="2026-04-13 19:28:14.830546629 +0000 UTC m=+48.359838426" watchObservedRunningTime="2026-04-13 19:28:14.836824533 +0000 UTC m=+48.366116330" Apr 13 19:28:15.127017 systemd-networkd[1363]: vxlan.calico: Link UP Apr 13 19:28:15.127024 systemd-networkd[1363]: vxlan.calico: Gained carrier Apr 13 19:28:16.736355 containerd[1736]: time="2026-04-13T19:28:16.735705364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:16.739658 containerd[1736]: time="2026-04-13T19:28:16.739611154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:28:16.744064 containerd[1736]: time="2026-04-13T19:28:16.743981262Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:16.751015 containerd[1736]: time="2026-04-13T19:28:16.750774325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:16.752180 containerd[1736]: 
time="2026-04-13T19:28:16.752135202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.170669217s" Apr 13 19:28:16.752180 containerd[1736]: time="2026-04-13T19:28:16.752173242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:28:16.755041 containerd[1736]: time="2026-04-13T19:28:16.755012554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:28:16.778430 containerd[1736]: time="2026-04-13T19:28:16.778388535Z" level=info msg="CreateContainer within sandbox \"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:28:16.840161 containerd[1736]: time="2026-04-13T19:28:16.840088497Z" level=info msg="CreateContainer within sandbox \"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866\"" Apr 13 19:28:16.840922 containerd[1736]: time="2026-04-13T19:28:16.840718495Z" level=info msg="StartContainer for \"74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866\"" Apr 13 19:28:16.871793 systemd[1]: Started cri-containerd-74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866.scope - libcontainer container 74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866. 
Apr 13 19:28:16.908418 containerd[1736]: time="2026-04-13T19:28:16.908373683Z" level=info msg="StartContainer for \"74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866\" returns successfully" Apr 13 19:28:17.027860 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Apr 13 19:28:17.851121 kubelet[3245]: I0413 19:28:17.849121 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f7466fddb-kzt5z" podStartSLOduration=21.444532364 podStartE2EDuration="28.849097973s" podCreationTimestamp="2026-04-13 19:27:49 +0000 UTC" firstStartedPulling="2026-04-13 19:28:09.349333628 +0000 UTC m=+42.878625425" lastFinishedPulling="2026-04-13 19:28:16.753899197 +0000 UTC m=+50.283191034" observedRunningTime="2026-04-13 19:28:17.848905894 +0000 UTC m=+51.378197651" watchObservedRunningTime="2026-04-13 19:28:17.849097973 +0000 UTC m=+51.378389770" Apr 13 19:28:18.590105 containerd[1736]: time="2026-04-13T19:28:18.589962301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:18.594115 containerd[1736]: time="2026-04-13T19:28:18.594067211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:28:18.599561 containerd[1736]: time="2026-04-13T19:28:18.599500117Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:18.606412 containerd[1736]: time="2026-04-13T19:28:18.606352620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:18.607496 containerd[1736]: time="2026-04-13T19:28:18.606976258Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.851928344s" Apr 13 19:28:18.607496 containerd[1736]: time="2026-04-13T19:28:18.607011418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:28:18.616374 containerd[1736]: time="2026-04-13T19:28:18.616315555Z" level=info msg="CreateContainer within sandbox \"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:28:18.673960 containerd[1736]: time="2026-04-13T19:28:18.673904329Z" level=info msg="CreateContainer within sandbox \"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"80a52b9df66643c18087843ec16f197eb3ef47e755372d7e837d92fdf8010eac\"" Apr 13 19:28:18.674791 containerd[1736]: time="2026-04-13T19:28:18.674762087Z" level=info msg="StartContainer for \"80a52b9df66643c18087843ec16f197eb3ef47e755372d7e837d92fdf8010eac\"" Apr 13 19:28:18.711777 systemd[1]: Started cri-containerd-80a52b9df66643c18087843ec16f197eb3ef47e755372d7e837d92fdf8010eac.scope - libcontainer container 80a52b9df66643c18087843ec16f197eb3ef47e755372d7e837d92fdf8010eac. 
Apr 13 19:28:18.746604 containerd[1736]: time="2026-04-13T19:28:18.746561745Z" level=info msg="StartContainer for \"80a52b9df66643c18087843ec16f197eb3ef47e755372d7e837d92fdf8010eac\" returns successfully" Apr 13 19:28:18.749373 containerd[1736]: time="2026-04-13T19:28:18.749164499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:28:19.583889 containerd[1736]: time="2026-04-13T19:28:19.583832230Z" level=info msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\"" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.639 [INFO][5814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.639 [INFO][5814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" iface="eth0" netns="/var/run/netns/cni-4fdcbcf0-7979-8cf8-1e0e-18aa20b6683f" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.642 [INFO][5814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" iface="eth0" netns="/var/run/netns/cni-4fdcbcf0-7979-8cf8-1e0e-18aa20b6683f" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.643 [INFO][5814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" iface="eth0" netns="/var/run/netns/cni-4fdcbcf0-7979-8cf8-1e0e-18aa20b6683f" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.643 [INFO][5814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.643 [INFO][5814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.660 [INFO][5822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.660 [INFO][5822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.660 [INFO][5822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.668 [WARNING][5822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.668 [INFO][5822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.669 [INFO][5822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:19.672884 containerd[1736]: 2026-04-13 19:28:19.671 [INFO][5814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:19.673884 containerd[1736]: time="2026-04-13T19:28:19.673416723Z" level=info msg="TearDown network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" successfully" Apr 13 19:28:19.673884 containerd[1736]: time="2026-04-13T19:28:19.673446123Z" level=info msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" returns successfully" Apr 13 19:28:19.677337 systemd[1]: run-netns-cni\x2d4fdcbcf0\x2d7979\x2d8cf8\x2d1e0e\x2d18aa20b6683f.mount: Deactivated successfully. 
Apr 13 19:28:19.682047 containerd[1736]: time="2026-04-13T19:28:19.681717302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghjt9,Uid:20dfe0a1-3f3d-4996-8368-9e8b44bb53cd,Namespace:calico-system,Attempt:1,}" Apr 13 19:28:19.852495 systemd-networkd[1363]: cali262154fd498: Link UP Apr 13 19:28:19.852753 systemd-networkd[1363]: cali262154fd498: Gained carrier Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.765 [INFO][5828] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0 csi-node-driver- calico-system 20dfe0a1-3f3d-4996-8368-9e8b44bb53cd 1023 0 2026-04-13 19:27:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.7-a-e37b9c2d0c csi-node-driver-ghjt9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali262154fd498 [] [] }} ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.765 [INFO][5828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.790 [INFO][5840] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" 
HandleID="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.799 [INFO][5840] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" HandleID="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-e37b9c2d0c", "pod":"csi-node-driver-ghjt9", "timestamp":"2026-04-13 19:28:19.790176108 +0000 UTC"}, Hostname:"ci-4081.3.7-a-e37b9c2d0c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003751e0)} Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.799 [INFO][5840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.799 [INFO][5840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.800 [INFO][5840] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-e37b9c2d0c' Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.802 [INFO][5840] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.806 [INFO][5840] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.810 [INFO][5840] ipam/ipam.go 526: Trying affinity for 192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.811 [INFO][5840] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.814 [INFO][5840] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.816 [INFO][5840] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.818 [INFO][5840] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287 Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.826 [INFO][5840] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.837 [INFO][5840] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.136/26] block=192.168.26.128/26 handle="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.838 [INFO][5840] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.136/26] handle="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" host="ci-4081.3.7-a-e37b9c2d0c" Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.838 [INFO][5840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:19.873565 containerd[1736]: 2026-04-13 19:28:19.838 [INFO][5840] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.136/26] IPv6=[] ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" HandleID="k8s-pod-network.842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.875406 containerd[1736]: 2026-04-13 19:28:19.840 [INFO][5828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"", Pod:"csi-node-driver-ghjt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali262154fd498", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:19.875406 containerd[1736]: 2026-04-13 19:28:19.841 [INFO][5828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.136/32] ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.875406 containerd[1736]: 2026-04-13 19:28:19.841 [INFO][5828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali262154fd498 ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.875406 containerd[1736]: 2026-04-13 19:28:19.853 [INFO][5828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.875406 
containerd[1736]: 2026-04-13 19:28:19.853 [INFO][5828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287", Pod:"csi-node-driver-ghjt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali262154fd498", MAC:"82:02:db:ec:32:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:19.875406 containerd[1736]: 
2026-04-13 19:28:19.871 [INFO][5828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287" Namespace="calico-system" Pod="csi-node-driver-ghjt9" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:19.917419 containerd[1736]: time="2026-04-13T19:28:19.917298627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:28:19.917419 containerd[1736]: time="2026-04-13T19:28:19.917495507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:28:19.918205 containerd[1736]: time="2026-04-13T19:28:19.918152825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:19.918560 containerd[1736]: time="2026-04-13T19:28:19.918473744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:28:19.940753 systemd[1]: Started cri-containerd-842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287.scope - libcontainer container 842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287. Apr 13 19:28:19.971631 containerd[1736]: time="2026-04-13T19:28:19.971264891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghjt9,Uid:20dfe0a1-3f3d-4996-8368-9e8b44bb53cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287\"" Apr 13 19:28:21.187713 systemd-networkd[1363]: cali262154fd498: Gained IPv6LL Apr 13 19:28:22.264255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928003534.mount: Deactivated successfully. 
Apr 13 19:28:22.360640 containerd[1736]: time="2026-04-13T19:28:22.359965735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:22.365688 containerd[1736]: time="2026-04-13T19:28:22.365643641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:28:22.371929 containerd[1736]: time="2026-04-13T19:28:22.371902425Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:22.379880 containerd[1736]: time="2026-04-13T19:28:22.379540725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:22.380343 containerd[1736]: time="2026-04-13T19:28:22.380315523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 3.631118824s" Apr 13 19:28:22.380435 containerd[1736]: time="2026-04-13T19:28:22.380420363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:28:22.382179 containerd[1736]: time="2026-04-13T19:28:22.382150999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:28:22.392383 containerd[1736]: time="2026-04-13T19:28:22.392255533Z" level=info msg="CreateContainer within sandbox 
\"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:28:22.456835 containerd[1736]: time="2026-04-13T19:28:22.456671131Z" level=info msg="CreateContainer within sandbox \"567c1367a1cc81417994f4286bbc4d7fd657b17bdd3b85f6a50736627f23e541\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"55c77f7ada3d716a53de66a5ab19b9dfdfabd7adcd544bb3ac9aeaeac6ced147\"" Apr 13 19:28:22.457546 containerd[1736]: time="2026-04-13T19:28:22.457357729Z" level=info msg="StartContainer for \"55c77f7ada3d716a53de66a5ab19b9dfdfabd7adcd544bb3ac9aeaeac6ced147\"" Apr 13 19:28:22.489766 systemd[1]: Started cri-containerd-55c77f7ada3d716a53de66a5ab19b9dfdfabd7adcd544bb3ac9aeaeac6ced147.scope - libcontainer container 55c77f7ada3d716a53de66a5ab19b9dfdfabd7adcd544bb3ac9aeaeac6ced147. Apr 13 19:28:22.526544 containerd[1736]: time="2026-04-13T19:28:22.526337394Z" level=info msg="StartContainer for \"55c77f7ada3d716a53de66a5ab19b9dfdfabd7adcd544bb3ac9aeaeac6ced147\" returns successfully" Apr 13 19:28:22.854284 kubelet[3245]: I0413 19:28:22.853723 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6ccc8ff85c-kzx6p" podStartSLOduration=2.140210661 podStartE2EDuration="14.853709887s" podCreationTimestamp="2026-04-13 19:28:08 +0000 UTC" firstStartedPulling="2026-04-13 19:28:09.668503293 +0000 UTC m=+43.197795090" lastFinishedPulling="2026-04-13 19:28:22.382002519 +0000 UTC m=+55.911294316" observedRunningTime="2026-04-13 19:28:22.853471848 +0000 UTC m=+56.382763645" watchObservedRunningTime="2026-04-13 19:28:22.853709887 +0000 UTC m=+56.383001684" Apr 13 19:28:23.862136 containerd[1736]: time="2026-04-13T19:28:23.862077099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.868919 containerd[1736]: time="2026-04-13T19:28:23.868714042Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:28:23.875845 containerd[1736]: time="2026-04-13T19:28:23.875727025Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.884472 containerd[1736]: time="2026-04-13T19:28:23.884410643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:23.885315 containerd[1736]: time="2026-04-13T19:28:23.885193601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.503008642s" Apr 13 19:28:23.885315 containerd[1736]: time="2026-04-13T19:28:23.885224841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:28:23.895878 containerd[1736]: time="2026-04-13T19:28:23.895834974Z" level=info msg="CreateContainer within sandbox \"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:28:23.952573 containerd[1736]: time="2026-04-13T19:28:23.952531031Z" level=info msg="CreateContainer within sandbox \"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2913ecbe1be7fb06ac9a61389516690bec07b7b7623c0f4c84146bd756534b9e\"" Apr 13 19:28:23.953197 containerd[1736]: time="2026-04-13T19:28:23.953064629Z" 
level=info msg="StartContainer for \"2913ecbe1be7fb06ac9a61389516690bec07b7b7623c0f4c84146bd756534b9e\"" Apr 13 19:28:23.986803 systemd[1]: Started cri-containerd-2913ecbe1be7fb06ac9a61389516690bec07b7b7623c0f4c84146bd756534b9e.scope - libcontainer container 2913ecbe1be7fb06ac9a61389516690bec07b7b7623c0f4c84146bd756534b9e. Apr 13 19:28:24.021303 containerd[1736]: time="2026-04-13T19:28:24.021259057Z" level=info msg="StartContainer for \"2913ecbe1be7fb06ac9a61389516690bec07b7b7623c0f4c84146bd756534b9e\" returns successfully" Apr 13 19:28:24.023400 containerd[1736]: time="2026-04-13T19:28:24.023292772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:28:26.335345 containerd[1736]: time="2026-04-13T19:28:26.334583894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:26.349805 containerd[1736]: time="2026-04-13T19:28:26.349757372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:28:26.354155 containerd[1736]: time="2026-04-13T19:28:26.353926560Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:26.369870 containerd[1736]: time="2026-04-13T19:28:26.369581717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:28:26.370315 containerd[1736]: time="2026-04-13T19:28:26.370288915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.346927863s" Apr 13 19:28:26.370367 containerd[1736]: time="2026-04-13T19:28:26.370318395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:28:26.380818 containerd[1736]: time="2026-04-13T19:28:26.380775886Z" level=info msg="CreateContainer within sandbox \"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:28:26.431960 containerd[1736]: time="2026-04-13T19:28:26.431917504Z" level=info msg="CreateContainer within sandbox \"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6ccf30f2be1e096f51f370fb715670e52681bf89b54fe52071cc6f656a09e2ce\"" Apr 13 19:28:26.432712 containerd[1736]: time="2026-04-13T19:28:26.432680222Z" level=info msg="StartContainer for \"6ccf30f2be1e096f51f370fb715670e52681bf89b54fe52071cc6f656a09e2ce\"" Apr 13 19:28:26.464741 systemd[1]: Started cri-containerd-6ccf30f2be1e096f51f370fb715670e52681bf89b54fe52071cc6f656a09e2ce.scope - libcontainer container 6ccf30f2be1e096f51f370fb715670e52681bf89b54fe52071cc6f656a09e2ce. 
Apr 13 19:28:26.500850 containerd[1736]: time="2026-04-13T19:28:26.500806513Z" level=info msg="StartContainer for \"6ccf30f2be1e096f51f370fb715670e52681bf89b54fe52071cc6f656a09e2ce\" returns successfully" Apr 13 19:28:26.545560 containerd[1736]: time="2026-04-13T19:28:26.545523549Z" level=info msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\"" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.579 [WARNING][6083] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287", Pod:"csi-node-driver-ghjt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali262154fd498", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.579 [INFO][6083] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.579 [INFO][6083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" iface="eth0" netns="" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.579 [INFO][6083] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.579 [INFO][6083] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.600 [INFO][6090] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.600 [INFO][6090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.600 [INFO][6090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.609 [WARNING][6090] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.609 [INFO][6090] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.610 [INFO][6090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:26.614243 containerd[1736]: 2026-04-13 19:28:26.612 [INFO][6083] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.614243 containerd[1736]: time="2026-04-13T19:28:26.614159718Z" level=info msg="TearDown network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" successfully" Apr 13 19:28:26.614243 containerd[1736]: time="2026-04-13T19:28:26.614185438Z" level=info msg="StopPodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" returns successfully" Apr 13 19:28:26.615613 containerd[1736]: time="2026-04-13T19:28:26.615526954Z" level=info msg="RemovePodSandbox for \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\"" Apr 13 19:28:26.617479 containerd[1736]: time="2026-04-13T19:28:26.617442429Z" level=info msg="Forcibly stopping sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\"" Apr 13 19:28:26.687214 kubelet[3245]: I0413 19:28:26.687183 3245 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 
1.0.0 Apr 13 19:28:26.695774 kubelet[3245]: I0413 19:28:26.695736 3245 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.657 [WARNING][6106] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20dfe0a1-3f3d-4996-8368-9e8b44bb53cd", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"842320f3d3210605fd74215dbb4c648a917f289f0ea4e10be847cfc535ede287", Pod:"csi-node-driver-ghjt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali262154fd498", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.658 [INFO][6106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.658 [INFO][6106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" iface="eth0" netns="" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.658 [INFO][6106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.658 [INFO][6106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.677 [INFO][6113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.678 [INFO][6113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.678 [INFO][6113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.689 [WARNING][6113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.689 [INFO][6113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" HandleID="k8s-pod-network.4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-csi--node--driver--ghjt9-eth0" Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.691 [INFO][6113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:26.698901 containerd[1736]: 2026-04-13 19:28:26.694 [INFO][6106] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a" Apr 13 19:28:26.698901 containerd[1736]: time="2026-04-13T19:28:26.698759004Z" level=info msg="TearDown network for sandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" successfully" Apr 13 19:28:26.729107 containerd[1736]: time="2026-04-13T19:28:26.728390361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:26.729107 containerd[1736]: time="2026-04-13T19:28:26.728471961Z" level=info msg="RemovePodSandbox \"4b8e57654877f6f379cab1bccdb41698bd0790828c00042a85c239a749bbcd8a\" returns successfully" Apr 13 19:28:26.729107 containerd[1736]: time="2026-04-13T19:28:26.729032480Z" level=info msg="StopPodSandbox for \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\"" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.771 [WARNING][6127] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"d9eae39d-8433-4dd2-b30d-cd79e85825ad", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba", Pod:"calico-apiserver-774669dd44-6hkg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif4cb7babb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.771 [INFO][6127] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.771 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" iface="eth0" netns="" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.771 [INFO][6127] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.772 [INFO][6127] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.790 [INFO][6134] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.790 [INFO][6134] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.790 [INFO][6134] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.805 [WARNING][6134] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.806 [INFO][6134] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.808 [INFO][6134] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:26.814228 containerd[1736]: 2026-04-13 19:28:26.810 [INFO][6127] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.814228 containerd[1736]: time="2026-04-13T19:28:26.814203843Z" level=info msg="TearDown network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" successfully" Apr 13 19:28:26.814228 containerd[1736]: time="2026-04-13T19:28:26.814230403Z" level=info msg="StopPodSandbox for \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" returns successfully" Apr 13 19:28:26.816124 containerd[1736]: time="2026-04-13T19:28:26.815813119Z" level=info msg="RemovePodSandbox for \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\"" Apr 13 19:28:26.816124 containerd[1736]: time="2026-04-13T19:28:26.815845879Z" level=info msg="Forcibly stopping sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\"" Apr 13 19:28:26.874379 kubelet[3245]: I0413 19:28:26.871931 3245 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghjt9" podStartSLOduration=31.475250254 
podStartE2EDuration="37.871915923s" podCreationTimestamp="2026-04-13 19:27:49 +0000 UTC" firstStartedPulling="2026-04-13 19:28:19.974885162 +0000 UTC m=+53.504176959" lastFinishedPulling="2026-04-13 19:28:26.371550831 +0000 UTC m=+59.900842628" observedRunningTime="2026-04-13 19:28:26.871060606 +0000 UTC m=+60.400352403" watchObservedRunningTime="2026-04-13 19:28:26.871915923 +0000 UTC m=+60.401207680" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.859 [WARNING][6150] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"d9eae39d-8433-4dd2-b30d-cd79e85825ad", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"0ff0c15b9a93a91aa66eef1d6f49390c73b042c60d5151a31488b13fc8ea8dba", Pod:"calico-apiserver-774669dd44-6hkg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif4cb7babb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.859 [INFO][6150] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.859 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" iface="eth0" netns="" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.859 [INFO][6150] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.860 [INFO][6150] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.887 [INFO][6157] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.888 [INFO][6157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.888 [INFO][6157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.903 [WARNING][6157] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.903 [INFO][6157] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" HandleID="k8s-pod-network.85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--6hkg4-eth0" Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.905 [INFO][6157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:26.908981 containerd[1736]: 2026-04-13 19:28:26.907 [INFO][6150] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c" Apr 13 19:28:26.909497 containerd[1736]: time="2026-04-13T19:28:26.909470019Z" level=info msg="TearDown network for sandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" successfully" Apr 13 19:28:26.919235 containerd[1736]: time="2026-04-13T19:28:26.919178192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:26.919440 containerd[1736]: time="2026-04-13T19:28:26.919422911Z" level=info msg="RemovePodSandbox \"85189e36f47e21247cef7fb56e015a90a7d9c31db29ab14ec0e505c9925a392c\" returns successfully" Apr 13 19:28:26.920037 containerd[1736]: time="2026-04-13T19:28:26.920010150Z" level=info msg="StopPodSandbox for \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\"" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.950 [WARNING][6171] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"12affd5b-ffcb-44a7-8551-5d3b9b496800", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8", Pod:"calico-apiserver-774669dd44-td4q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b8fc247ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.950 [INFO][6171] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.950 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" iface="eth0" netns="" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.950 [INFO][6171] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.950 [INFO][6171] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.969 [INFO][6178] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.969 [INFO][6178] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.969 [INFO][6178] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.987 [WARNING][6178] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.987 [INFO][6178] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.988 [INFO][6178] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:26.992171 containerd[1736]: 2026-04-13 19:28:26.990 [INFO][6171] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:26.992579 containerd[1736]: time="2026-04-13T19:28:26.992211550Z" level=info msg="TearDown network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" successfully" Apr 13 19:28:26.992579 containerd[1736]: time="2026-04-13T19:28:26.992237669Z" level=info msg="StopPodSandbox for \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" returns successfully" Apr 13 19:28:26.992739 containerd[1736]: time="2026-04-13T19:28:26.992708908Z" level=info msg="RemovePodSandbox for \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\"" Apr 13 19:28:26.992773 containerd[1736]: time="2026-04-13T19:28:26.992745748Z" level=info msg="Forcibly stopping sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\"" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.027 [WARNING][6192] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0", GenerateName:"calico-apiserver-774669dd44-", Namespace:"calico-system", SelfLink:"", UID:"12affd5b-ffcb-44a7-8551-5d3b9b496800", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774669dd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"4fcdd0ae686696b7c5990c49e9c7a5252f74ca611a58cabf3db3119c6627d6b8", Pod:"calico-apiserver-774669dd44-td4q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b8fc247ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.028 [INFO][6192] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.028 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" iface="eth0" netns="" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.028 [INFO][6192] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.028 [INFO][6192] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.049 [INFO][6199] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.049 [INFO][6199] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.049 [INFO][6199] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.060 [WARNING][6199] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.060 [INFO][6199] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" HandleID="k8s-pod-network.9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--apiserver--774669dd44--td4q9-eth0" Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.061 [INFO][6199] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.064297 containerd[1736]: 2026-04-13 19:28:27.062 [INFO][6192] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3" Apr 13 19:28:27.064761 containerd[1736]: time="2026-04-13T19:28:27.064346469Z" level=info msg="TearDown network for sandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" successfully" Apr 13 19:28:27.078917 containerd[1736]: time="2026-04-13T19:28:27.078851509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.079041 containerd[1736]: time="2026-04-13T19:28:27.078986829Z" level=info msg="RemovePodSandbox \"9eaae55b8d89a86136c25d7a4072bcab5ef7d7ec50bb96b72160d205a299c6e3\" returns successfully" Apr 13 19:28:27.079466 containerd[1736]: time="2026-04-13T19:28:27.079439068Z" level=info msg="StopPodSandbox for \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\"" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.112 [WARNING][6213] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.113 [INFO][6213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.113 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" iface="eth0" netns="" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.113 [INFO][6213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.113 [INFO][6213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.131 [INFO][6220] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.131 [INFO][6220] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.131 [INFO][6220] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.140 [WARNING][6220] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.140 [INFO][6220] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.141 [INFO][6220] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.144454 containerd[1736]: 2026-04-13 19:28:27.142 [INFO][6213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.146272 containerd[1736]: time="2026-04-13T19:28:27.144474687Z" level=info msg="TearDown network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" successfully" Apr 13 19:28:27.146272 containerd[1736]: time="2026-04-13T19:28:27.144504847Z" level=info msg="StopPodSandbox for \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" returns successfully" Apr 13 19:28:27.146272 containerd[1736]: time="2026-04-13T19:28:27.145659204Z" level=info msg="RemovePodSandbox for \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\"" Apr 13 19:28:27.146272 containerd[1736]: time="2026-04-13T19:28:27.145692844Z" level=info msg="Forcibly stopping sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\"" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.178 [WARNING][6234] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" WorkloadEndpoint="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.178 [INFO][6234] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.178 [INFO][6234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" iface="eth0" netns="" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.178 [INFO][6234] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.178 [INFO][6234] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.197 [INFO][6241] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.197 [INFO][6241] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.197 [INFO][6241] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.206 [WARNING][6241] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.206 [INFO][6241] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" HandleID="k8s-pod-network.afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-whisker--d898cc79b--fl9tk-eth0" Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.208 [INFO][6241] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.211776 containerd[1736]: 2026-04-13 19:28:27.210 [INFO][6234] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39" Apr 13 19:28:27.212518 containerd[1736]: time="2026-04-13T19:28:27.212147699Z" level=info msg="TearDown network for sandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" successfully" Apr 13 19:28:27.229428 containerd[1736]: time="2026-04-13T19:28:27.229183252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.229428 containerd[1736]: time="2026-04-13T19:28:27.229300372Z" level=info msg="RemovePodSandbox \"afac6aa76a6bdaf8166b088fb493e390b3da56e4965f3457c7c904fecfb0eb39\" returns successfully" Apr 13 19:28:27.230221 containerd[1736]: time="2026-04-13T19:28:27.229946330Z" level=info msg="StopPodSandbox for \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\"" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.263 [WARNING][6255] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0", GenerateName:"calico-kube-controllers-7f7466fddb-", Namespace:"calico-system", SelfLink:"", UID:"d41669ea-f411-4baa-9026-f6667bc038e5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7466fddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653", Pod:"calico-kube-controllers-7f7466fddb-kzt5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2caf85e152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.263 [INFO][6255] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.263 [INFO][6255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" iface="eth0" netns="" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.263 [INFO][6255] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.263 [INFO][6255] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.284 [INFO][6262] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.284 [INFO][6262] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.284 [INFO][6262] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.292 [WARNING][6262] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.292 [INFO][6262] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.294 [INFO][6262] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.297566 containerd[1736]: 2026-04-13 19:28:27.295 [INFO][6255] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.298327 containerd[1736]: time="2026-04-13T19:28:27.297619342Z" level=info msg="TearDown network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" successfully" Apr 13 19:28:27.298327 containerd[1736]: time="2026-04-13T19:28:27.297644742Z" level=info msg="StopPodSandbox for \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" returns successfully" Apr 13 19:28:27.298327 containerd[1736]: time="2026-04-13T19:28:27.298167021Z" level=info msg="RemovePodSandbox for \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\"" Apr 13 19:28:27.298327 containerd[1736]: time="2026-04-13T19:28:27.298197821Z" level=info msg="Forcibly stopping sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\"" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.332 [WARNING][6276] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0", GenerateName:"calico-kube-controllers-7f7466fddb-", Namespace:"calico-system", SelfLink:"", UID:"d41669ea-f411-4baa-9026-f6667bc038e5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7466fddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a71424824affdf2fd171b28f961af5270a7973a9f1d88708412ef2bef801f653", Pod:"calico-kube-controllers-7f7466fddb-kzt5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2caf85e152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.333 [INFO][6276] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.333 [INFO][6276] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" iface="eth0" netns="" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.333 [INFO][6276] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.333 [INFO][6276] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.351 [INFO][6283] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.351 [INFO][6283] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.351 [INFO][6283] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.359 [WARNING][6283] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.359 [INFO][6283] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" HandleID="k8s-pod-network.8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-calico--kube--controllers--7f7466fddb--kzt5z-eth0" Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.360 [INFO][6283] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.364567 containerd[1736]: 2026-04-13 19:28:27.362 [INFO][6276] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480" Apr 13 19:28:27.364567 containerd[1736]: time="2026-04-13T19:28:27.364381077Z" level=info msg="TearDown network for sandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" successfully" Apr 13 19:28:27.373158 containerd[1736]: time="2026-04-13T19:28:27.373122173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.373249 containerd[1736]: time="2026-04-13T19:28:27.373196773Z" level=info msg="RemovePodSandbox \"8c7f39f4d1129383c1e34e3f9adc83c123034fd02ccfd429ec172d16bff79480\" returns successfully" Apr 13 19:28:27.373914 containerd[1736]: time="2026-04-13T19:28:27.373652491Z" level=info msg="StopPodSandbox for \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\"" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.407 [WARNING][6297] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4730c9ea-c5c7-441b-b9a2-363112328d61", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81", Pod:"coredns-7d764666f9-8dlfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c56f97796", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.407 [INFO][6297] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.407 [INFO][6297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" iface="eth0" netns="" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.407 [INFO][6297] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.407 [INFO][6297] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.428 [INFO][6304] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.428 [INFO][6304] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.428 [INFO][6304] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.437 [WARNING][6304] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.437 [INFO][6304] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.438 [INFO][6304] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.441861 containerd[1736]: 2026-04-13 19:28:27.440 [INFO][6297] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.442467 containerd[1736]: time="2026-04-13T19:28:27.442334461Z" level=info msg="TearDown network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" successfully" Apr 13 19:28:27.442467 containerd[1736]: time="2026-04-13T19:28:27.442374181Z" level=info msg="StopPodSandbox for \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" returns successfully" Apr 13 19:28:27.443066 containerd[1736]: time="2026-04-13T19:28:27.442806060Z" level=info msg="RemovePodSandbox for \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\"" Apr 13 19:28:27.443066 containerd[1736]: time="2026-04-13T19:28:27.442837819Z" level=info msg="Forcibly stopping sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\"" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.473 [WARNING][6318] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4730c9ea-c5c7-441b-b9a2-363112328d61", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"74419fe764aa0ecdb18270a8f5a982d2dd16b8a48f01f2baa08fb5fa91cb7b81", Pod:"coredns-7d764666f9-8dlfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c56f97796", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.473 [INFO][6318] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.473 [INFO][6318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" iface="eth0" netns="" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.473 [INFO][6318] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.473 [INFO][6318] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.491 [INFO][6326] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.491 [INFO][6326] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.492 [INFO][6326] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.500 [WARNING][6326] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.500 [INFO][6326] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" HandleID="k8s-pod-network.17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--8dlfq-eth0" Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.501 [INFO][6326] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.505821 containerd[1736]: 2026-04-13 19:28:27.503 [INFO][6318] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494" Apr 13 19:28:27.507714 containerd[1736]: time="2026-04-13T19:28:27.506256923Z" level=info msg="TearDown network for sandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" successfully" Apr 13 19:28:27.517445 containerd[1736]: time="2026-04-13T19:28:27.517408413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.517655 containerd[1736]: time="2026-04-13T19:28:27.517635972Z" level=info msg="RemovePodSandbox \"17bf8c9c53afff5da8d3e9327dfdece85adeec1da524e1ab94d0b4ab0832a494\" returns successfully" Apr 13 19:28:27.518166 containerd[1736]: time="2026-04-13T19:28:27.518113171Z" level=info msg="StopPodSandbox for \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\"" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.548 [WARNING][6340] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e05ad226-2b3e-400d-9a42-c9477b96a890", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc", Pod:"coredns-7d764666f9-j6g9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice81d6e2e51", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.548 [INFO][6340] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.548 [INFO][6340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" iface="eth0" netns="" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.548 [INFO][6340] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.548 [INFO][6340] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.565 [INFO][6347] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.565 [INFO][6347] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.565 [INFO][6347] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.573 [WARNING][6347] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.573 [INFO][6347] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.575 [INFO][6347] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.577967 containerd[1736]: 2026-04-13 19:28:27.576 [INFO][6340] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.578481 containerd[1736]: time="2026-04-13T19:28:27.578441883Z" level=info msg="TearDown network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" successfully" Apr 13 19:28:27.578550 containerd[1736]: time="2026-04-13T19:28:27.578537803Z" level=info msg="StopPodSandbox for \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" returns successfully" Apr 13 19:28:27.579057 containerd[1736]: time="2026-04-13T19:28:27.579035282Z" level=info msg="RemovePodSandbox for \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\"" Apr 13 19:28:27.579180 containerd[1736]: time="2026-04-13T19:28:27.579162561Z" level=info msg="Forcibly stopping sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\"" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.616 [WARNING][6361] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e05ad226-2b3e-400d-9a42-c9477b96a890", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"491a0c60a0f7b429d6d7670e5d755d5f3b0bb5708deb256cd2a3acddf9b8bcfc", Pod:"coredns-7d764666f9-j6g9p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice81d6e2e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.617 [INFO][6361] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.617 [INFO][6361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" iface="eth0" netns="" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.617 [INFO][6361] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.617 [INFO][6361] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.639 [INFO][6369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.639 [INFO][6369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.639 [INFO][6369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.647 [WARNING][6369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.647 [INFO][6369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" HandleID="k8s-pod-network.bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-coredns--7d764666f9--j6g9p-eth0" Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.648 [INFO][6369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.651983 containerd[1736]: 2026-04-13 19:28:27.650 [INFO][6361] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156" Apr 13 19:28:27.652392 containerd[1736]: time="2026-04-13T19:28:27.652029399Z" level=info msg="TearDown network for sandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" successfully" Apr 13 19:28:27.661516 containerd[1736]: time="2026-04-13T19:28:27.661476813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.661614 containerd[1736]: time="2026-04-13T19:28:27.661548973Z" level=info msg="RemovePodSandbox \"bda1683a9661d4a5a20a96f5e667f916b386c277d86e183b7fd76751e1b7c156\" returns successfully" Apr 13 19:28:27.662245 containerd[1736]: time="2026-04-13T19:28:27.661987731Z" level=info msg="StopPodSandbox for \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\"" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.695 [WARNING][6384] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"67679312-cc73-499f-abbf-6475bd30ecd4", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf", Pod:"goldmane-9f7667bb8-p57hc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliedf3d30b689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.695 [INFO][6384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.695 [INFO][6384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" iface="eth0" netns="" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.695 [INFO][6384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.695 [INFO][6384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.715 [INFO][6391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.715 [INFO][6391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.715 [INFO][6391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.723 [WARNING][6391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.723 [INFO][6391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.724 [INFO][6391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.728696 containerd[1736]: 2026-04-13 19:28:27.726 [INFO][6384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.729855 containerd[1736]: time="2026-04-13T19:28:27.729395304Z" level=info msg="TearDown network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" successfully" Apr 13 19:28:27.729855 containerd[1736]: time="2026-04-13T19:28:27.729427784Z" level=info msg="StopPodSandbox for \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" returns successfully" Apr 13 19:28:27.730925 containerd[1736]: time="2026-04-13T19:28:27.730645581Z" level=info msg="RemovePodSandbox for \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\"" Apr 13 19:28:27.730925 containerd[1736]: time="2026-04-13T19:28:27.730674821Z" level=info msg="Forcibly stopping sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\"" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.760 [WARNING][6405] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"67679312-cc73-499f-abbf-6475bd30ecd4", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-e37b9c2d0c", ContainerID:"a3f11a1fc19d38ebcbe2dd63f3be5ecaf6ee70219eadc8a8ee4b95ca2f88daaf", Pod:"goldmane-9f7667bb8-p57hc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedf3d30b689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.761 [INFO][6405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.761 [INFO][6405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" iface="eth0" netns="" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.761 [INFO][6405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.761 [INFO][6405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.779 [INFO][6412] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.779 [INFO][6412] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.779 [INFO][6412] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.788 [WARNING][6412] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.788 [INFO][6412] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" HandleID="k8s-pod-network.ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Workload="ci--4081.3.7--a--e37b9c2d0c-k8s-goldmane--9f7667bb8--p57hc-eth0" Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.789 [INFO][6412] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:28:27.793078 containerd[1736]: 2026-04-13 19:28:27.791 [INFO][6405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192" Apr 13 19:28:27.793516 containerd[1736]: time="2026-04-13T19:28:27.793137288Z" level=info msg="TearDown network for sandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" successfully" Apr 13 19:28:27.807761 containerd[1736]: time="2026-04-13T19:28:27.807711207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:28:27.807884 containerd[1736]: time="2026-04-13T19:28:27.807804287Z" level=info msg="RemovePodSandbox \"ec109c4075b74517b2532496c691cfd286f9687650c01ed5b07cc92ab8a3d192\" returns successfully" Apr 13 19:28:41.270747 kubelet[3245]: I0413 19:28:41.270367 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:28:47.841287 systemd[1]: run-containerd-runc-k8s.io-74c0059bf7c6accd3f6147345dcdcffd357a06b6f4dcfa0e08108593503e3866-runc.qtk631.mount: Deactivated successfully. Apr 13 19:29:00.424824 kubelet[3245]: I0413 19:29:00.424542 3245 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:29:18.600868 systemd[1]: Started sshd@7-10.0.0.31:22-20.229.252.112:44150.service - OpenSSH per-connection server daemon (20.229.252.112:44150). Apr 13 19:29:19.488898 sshd[6646]: Accepted publickey for core from 20.229.252.112 port 44150 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:19.491677 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:19.495733 systemd-logind[1714]: New session 10 of user core. Apr 13 19:29:19.501732 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:29:20.173310 sshd[6646]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:20.175993 systemd-logind[1714]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:29:20.176260 systemd[1]: sshd@7-10.0.0.31:22-20.229.252.112:44150.service: Deactivated successfully. Apr 13 19:29:20.178377 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:29:20.181249 systemd-logind[1714]: Removed session 10. Apr 13 19:29:25.334987 systemd[1]: Started sshd@8-10.0.0.31:22-20.229.252.112:54598.service - OpenSSH per-connection server daemon (20.229.252.112:54598). 
Apr 13 19:29:26.254519 sshd[6659]: Accepted publickey for core from 20.229.252.112 port 54598 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:26.255389 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:26.260388 systemd-logind[1714]: New session 11 of user core. Apr 13 19:29:26.262732 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:29:26.955805 sshd[6659]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:26.959153 systemd[1]: sshd@8-10.0.0.31:22-20.229.252.112:54598.service: Deactivated successfully. Apr 13 19:29:26.963263 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:29:26.965649 systemd-logind[1714]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:29:26.967287 systemd-logind[1714]: Removed session 11. Apr 13 19:29:32.103953 systemd[1]: Started sshd@9-10.0.0.31:22-20.229.252.112:54612.service - OpenSSH per-connection server daemon (20.229.252.112:54612). Apr 13 19:29:32.952316 sshd[6695]: Accepted publickey for core from 20.229.252.112 port 54612 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:32.985252 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:32.989374 systemd-logind[1714]: New session 12 of user core. Apr 13 19:29:32.997965 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:29:33.649036 sshd[6695]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:33.652523 systemd[1]: sshd@9-10.0.0.31:22-20.229.252.112:54612.service: Deactivated successfully. Apr 13 19:29:33.654835 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:29:33.657449 systemd-logind[1714]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:29:33.658569 systemd-logind[1714]: Removed session 12. 
Apr 13 19:29:38.805141 systemd[1]: Started sshd@10-10.0.0.31:22-20.229.252.112:53038.service - OpenSSH per-connection server daemon (20.229.252.112:53038). Apr 13 19:29:39.703879 sshd[6768]: Accepted publickey for core from 20.229.252.112 port 53038 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:39.706300 sshd[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:39.710518 systemd-logind[1714]: New session 13 of user core. Apr 13 19:29:39.720754 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:29:40.382818 sshd[6768]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:40.386285 systemd-logind[1714]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:29:40.386546 systemd[1]: sshd@10-10.0.0.31:22-20.229.252.112:53038.service: Deactivated successfully. Apr 13 19:29:40.388238 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:29:40.391427 systemd-logind[1714]: Removed session 13. Apr 13 19:29:40.538844 systemd[1]: Started sshd@11-10.0.0.31:22-20.229.252.112:53042.service - OpenSSH per-connection server daemon (20.229.252.112:53042). Apr 13 19:29:41.417639 sshd[6793]: Accepted publickey for core from 20.229.252.112 port 53042 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:41.418833 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:41.422582 systemd-logind[1714]: New session 14 of user core. Apr 13 19:29:41.427856 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:29:42.138461 sshd[6793]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:42.142256 systemd[1]: sshd@11-10.0.0.31:22-20.229.252.112:53042.service: Deactivated successfully. Apr 13 19:29:42.144297 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:29:42.145516 systemd-logind[1714]: Session 14 logged out. Waiting for processes to exit. 
Apr 13 19:29:42.146436 systemd-logind[1714]: Removed session 14. Apr 13 19:29:42.303365 systemd[1]: Started sshd@12-10.0.0.31:22-20.229.252.112:53050.service - OpenSSH per-connection server daemon (20.229.252.112:53050). Apr 13 19:29:43.189833 sshd[6804]: Accepted publickey for core from 20.229.252.112 port 53050 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:43.191527 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:43.195651 systemd-logind[1714]: New session 15 of user core. Apr 13 19:29:43.201762 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:29:43.874817 sshd[6804]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:43.878726 systemd[1]: sshd@12-10.0.0.31:22-20.229.252.112:53050.service: Deactivated successfully. Apr 13 19:29:43.880740 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:29:43.882161 systemd-logind[1714]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:29:43.883052 systemd-logind[1714]: Removed session 15. Apr 13 19:29:49.035668 systemd[1]: Started sshd@13-10.0.0.31:22-20.229.252.112:35260.service - OpenSSH per-connection server daemon (20.229.252.112:35260). Apr 13 19:29:49.956430 sshd[6898]: Accepted publickey for core from 20.229.252.112 port 35260 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:49.958699 sshd[6898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:49.962814 systemd-logind[1714]: New session 16 of user core. Apr 13 19:29:49.968720 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:29:50.665819 sshd[6898]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:50.669535 systemd[1]: sshd@13-10.0.0.31:22-20.229.252.112:35260.service: Deactivated successfully. Apr 13 19:29:50.671972 systemd[1]: session-16.scope: Deactivated successfully. 
Apr 13 19:29:50.673226 systemd-logind[1714]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:29:50.674265 systemd-logind[1714]: Removed session 16. Apr 13 19:29:50.824942 systemd[1]: Started sshd@14-10.0.0.31:22-20.229.252.112:35264.service - OpenSSH per-connection server daemon (20.229.252.112:35264). Apr 13 19:29:51.746878 sshd[6911]: Accepted publickey for core from 20.229.252.112 port 35264 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:51.748307 sshd[6911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:51.752424 systemd-logind[1714]: New session 17 of user core. Apr 13 19:29:51.757738 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:29:52.582667 sshd[6911]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:52.588822 systemd-logind[1714]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:29:52.589479 systemd[1]: sshd@14-10.0.0.31:22-20.229.252.112:35264.service: Deactivated successfully. Apr 13 19:29:52.591270 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:29:52.592120 systemd-logind[1714]: Removed session 17. Apr 13 19:29:52.748819 systemd[1]: Started sshd@15-10.0.0.31:22-20.229.252.112:35266.service - OpenSSH per-connection server daemon (20.229.252.112:35266). Apr 13 19:29:53.661625 sshd[6922]: Accepted publickey for core from 20.229.252.112 port 35266 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:53.662471 sshd[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:53.667173 systemd-logind[1714]: New session 18 of user core. Apr 13 19:29:53.671752 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:29:54.966445 sshd[6922]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:54.970460 systemd[1]: sshd@15-10.0.0.31:22-20.229.252.112:35266.service: Deactivated successfully. 
Apr 13 19:29:54.972676 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:29:54.975014 systemd-logind[1714]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:29:54.976007 systemd-logind[1714]: Removed session 18. Apr 13 19:29:55.124341 systemd[1]: Started sshd@16-10.0.0.31:22-20.229.252.112:41992.service - OpenSSH per-connection server daemon (20.229.252.112:41992). Apr 13 19:29:56.035538 sshd[6964]: Accepted publickey for core from 20.229.252.112 port 41992 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:56.037010 sshd[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:56.043144 systemd-logind[1714]: New session 19 of user core. Apr 13 19:29:56.047742 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:29:56.858104 sshd[6964]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:56.862741 systemd-logind[1714]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:29:56.863118 systemd[1]: sshd@16-10.0.0.31:22-20.229.252.112:41992.service: Deactivated successfully. Apr 13 19:29:56.865196 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:29:56.866227 systemd-logind[1714]: Removed session 19. Apr 13 19:29:57.010760 systemd[1]: Started sshd@17-10.0.0.31:22-20.229.252.112:41994.service - OpenSSH per-connection server daemon (20.229.252.112:41994). Apr 13 19:29:57.892623 sshd[6977]: Accepted publickey for core from 20.229.252.112 port 41994 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:29:57.896277 sshd[6977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:29:57.900937 systemd-logind[1714]: New session 20 of user core. Apr 13 19:29:57.906773 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 13 19:29:58.560367 sshd[6977]: pam_unix(sshd:session): session closed for user core Apr 13 19:29:58.564098 systemd[1]: sshd@17-10.0.0.31:22-20.229.252.112:41994.service: Deactivated successfully. Apr 13 19:29:58.566407 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:29:58.567495 systemd-logind[1714]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:29:58.568390 systemd-logind[1714]: Removed session 20. Apr 13 19:30:03.706717 systemd[1]: Started sshd@18-10.0.0.31:22-20.229.252.112:42004.service - OpenSSH per-connection server daemon (20.229.252.112:42004). Apr 13 19:30:04.591118 sshd[6994]: Accepted publickey for core from 20.229.252.112 port 42004 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:30:04.592152 sshd[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:04.597961 systemd-logind[1714]: New session 21 of user core. Apr 13 19:30:04.604933 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:30:05.263563 sshd[6994]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:05.268046 systemd[1]: sshd@18-10.0.0.31:22-20.229.252.112:42004.service: Deactivated successfully. Apr 13 19:30:05.271380 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:30:05.273320 systemd-logind[1714]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:30:05.274225 systemd-logind[1714]: Removed session 21. Apr 13 19:30:10.438191 systemd[1]: Started sshd@19-10.0.0.31:22-20.229.252.112:57864.service - OpenSSH per-connection server daemon (20.229.252.112:57864). Apr 13 19:30:11.352621 sshd[7028]: Accepted publickey for core from 20.229.252.112 port 57864 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:30:11.353470 sshd[7028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:11.357724 systemd-logind[1714]: New session 22 of user core. 
Apr 13 19:30:11.366710 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:30:12.045004 sshd[7028]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:12.048881 systemd[1]: sshd@19-10.0.0.31:22-20.229.252.112:57864.service: Deactivated successfully. Apr 13 19:30:12.052112 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:30:12.053092 systemd-logind[1714]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:30:12.054261 systemd-logind[1714]: Removed session 22. Apr 13 19:30:17.201853 systemd[1]: Started sshd@20-10.0.0.31:22-20.229.252.112:54068.service - OpenSSH per-connection server daemon (20.229.252.112:54068). Apr 13 19:30:18.080268 sshd[7062]: Accepted publickey for core from 20.229.252.112 port 54068 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:30:18.105565 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:18.110108 systemd-logind[1714]: New session 23 of user core. Apr 13 19:30:18.117822 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:30:18.754829 sshd[7062]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:18.758721 systemd-logind[1714]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:30:18.759466 systemd[1]: sshd@20-10.0.0.31:22-20.229.252.112:54068.service: Deactivated successfully. Apr 13 19:30:18.761662 systemd[1]: session-23.scope: Deactivated successfully. Apr 13 19:30:18.763353 systemd-logind[1714]: Removed session 23. Apr 13 19:30:23.904758 systemd[1]: Started sshd@21-10.0.0.31:22-20.229.252.112:54070.service - OpenSSH per-connection server daemon (20.229.252.112:54070). 
Apr 13 19:30:24.784438 sshd[7094]: Accepted publickey for core from 20.229.252.112 port 54070 ssh2: RSA SHA256:YsSfv+8GzT0Jpy7FFwLHMe0c9D4nOsQGDEDKFTGK22c Apr 13 19:30:24.785458 sshd[7094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:30:24.790660 systemd-logind[1714]: New session 24 of user core. Apr 13 19:30:24.795781 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:30:25.455021 sshd[7094]: pam_unix(sshd:session): session closed for user core Apr 13 19:30:25.458438 systemd[1]: sshd@21-10.0.0.31:22-20.229.252.112:54070.service: Deactivated successfully. Apr 13 19:30:25.460296 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:30:25.461179 systemd-logind[1714]: Session 24 logged out. Waiting for processes to exit. Apr 13 19:30:25.462384 systemd-logind[1714]: Removed session 24.